@stefano@bsd.cafe, BSD Cafe barista, developer extraordinaire and even better human being, has started working on an iOS Mastodon API / Fediverse app named #Mastoblaster
-
@grunfink @mastoblaster thank YOU for creating and maintaining snac!
-
@stefano@bsd.cafe
Interesting pricing policy.
I wonder how it will work with the Apple App Store (but I'm not an Apple user or developer).
@mastoblaster@mastoblaster.app
-
@stefano @grunfink @mastoblaster if you continue like this you will make me do the unthinkable - get myself an iPhone
-
@grunfink @stefano @mastoblaster ugh
Uses Apple Intelligence to generate alt text for your media
fucking “AI” slop…
-
@grunfink @stefano @mastoblaster What's snac? Google didn't help me
-
@mirabilos @grunfink @mastoblaster Apple Intelligence does not work very well, but it runs locally, on device. The goal of this implementation is to give people with visual impairments at least a minimal chance to get an approximate description of what appears when people do not provide alt text. I do not rule out removing it if tests show that the results are not adequate. The goal is inclusivity, and it does not fuel the general “AI hype”, precisely because it uses local resources and is targeted to that specific use case only.
-
@stefano @mastoblaster @grunfink it’s still slop, and there have been studies showing it cannot be used to produce image descriptions reliably (I have a reference somewhere in my bookmarks). The "training" also does not run locally on the devices and uses tons of stolen data.
From both a producer and a consumer PoV it’s much better to leave the images undescribed and then solicit help from others to fill in an image description.
At least mark the slop output, so that we can block people who use it.
-
@mirabilos @mastoblaster @grunfink sure. It’s optional, and it must be explicitly tapped to activate. There’s also a clear disclaimer.
I’m not a fan - at all - and I’m still considering removing it before the final release.