So Bluesky is attempting to avoid the evolutionary path into... what, exactly?
Here's what I think they're grappling with:
* Edge cases & slippery slopes: Distinguishing "fictional consent" and "fantasy" from "non-consensual content" is hard, especially in illustrated or animated work.
* Automation risk: Letting AI auto-fill alt text or auto-censor content might introduce errors, overreach, or misinterpretation.
* Creator confidence: If policies or enforcement are too strict or unpredictable, content creators (especially in adult / kink / erotic art spaces) may leave or self-censor.
* Accessibility mandate vs burden: Encouraging alt text is good, but requiring it for every image adds friction; enforcing it strictly might deter visual content.
* Policy signaling: How Bluesky frames its sexual content rules sends a message about what kinds of expression are welcome there (and what kinds are policed).
Sounds like avoiding a cross between 4chan and DeviantArt?
Some fun stuff in here:
> at://did:plc:4zd6mbhgvmjri5rmlsyeq6go/3lzpamhfmfc2d: a mostly naked vivi (she/her) domming a bound matchstick (he/him) in a neon-lit nightclub called PONYBOYS
> at://did:plc:4zd6mbhgvmjri5rmlsyeq6go/3lzpamhfmfc2d: same pic but made absolutely clear that matchstick is into it and consenting :)
bsky recently changed (or proposed changing?) their policies on how adult content is presented. the wording they've chosen has a specific focus on explicit consent, resulting in having to do ... that.
The specific policy is "We allow consensual adult sexual content, including fictional depictions, when appropriately labeled and subject to appropriate age restrictions. We do not allow sexual content involving non-consensual activity including synthetic, simulated, illustrated, or animated versions."
I can understand why artists who enjoy the non-con kink dislike this policy, but I think people can also appreciate where it comes from. I can also understand why other NSFW artists are anxious about needing to indicate consent, but there is, unfortunately, some amount of judgment the moderation team has to make about whether something crosses into violative non-con.
I wonder if, beyond imaginary depictions of non-consensual acts (and what of consensual non-consent, CNC? Do you have to add a frame to your comic where the characters discuss their limits and safe words?), maybe it's meant more to address deepfakes: porn involving depictions of people who didn't agree to be depicted that way. I think the law around use of your image (and your voice) is in a very interesting place, but it could use clarification so all citizens can benefit; right now it's Hollywood actors and basketball players who get to explicitly control how images of them are used. Facebook did get in trouble once for putting photos of your friends into ads ("Your aunt Bettie likes Walmart!"). They can still list that text, since you did hit the like button, but there was some law that prevented them from using your face to imply endorsement.
The entire codebase is an HTML page with some embedded JavaScript: view-source:https://bobbiec.github.io/bluesky-alt-text.html
The way god intended
I was about to complain that surely god couldn’t have intended… JavaScript… but then I looked in a mirror and realized he made it in my image.
This is an interesting insight into what gets posted to Bsky in real-time. Quite like this, very nice!
Perhaps filter out the empty messages?
The page accompanies a blog post about how many images have alt text on Bluesky so empty lines are included to give a feel for how many images do or don't have the descriptions: https://digitalseams.com/blog/image-descriptions-on-bluesky
> This demo is part of Image descriptions on Bluesky: not bad, could be better - see the blog post for more details!
Surprised then that so many posts are just images.
An optional filter would be nice; I can see it being interesting to observe the frequency of alt text relative to no alt text.
I do think it'd be nice if it didn't scroll the log automatically if the user has scrolled at all (i.e., is looking at the backlog). There are a few ways to go about that.
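One common way to do this is "scroll pinning": follow new entries only while the user is at (or near) the bottom of the log. A minimal sketch, with names of my own choosing (this is not from the page's actual source):

```javascript
// Sketch of "stick to bottom unless the user scrolled up" behavior.
// The pinning decision is a pure function so it can be tested outside the DOM.
function isPinnedToBottom(scrollTop, clientHeight, scrollHeight, slack = 8) {
  // Pinned when the viewport bottom is within `slack` px of the content end.
  return scrollTop + clientHeight >= scrollHeight - slack;
}

// Hypothetical wiring for a log container element:
function attachAutoScroll(logEl) {
  let pinned = true;
  logEl.addEventListener("scroll", () => {
    pinned = isPinnedToBottom(logEl.scrollTop, logEl.clientHeight, logEl.scrollHeight);
  });
  // Call this after appending a new log entry.
  return function onNewEntry() {
    if (pinned) logEl.scrollTop = logEl.scrollHeight; // only follow while pinned
  };
}
```

The small `slack` tolerance matters in practice: fractional scroll positions and zoom levels mean `scrollTop + clientHeight` rarely equals `scrollHeight` exactly.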
The era of AI shouldn’t require users to enter alt text. AI should be able to provide one.
AI has no idea what you're trying to convey through a picture. Imagine you're shitposting a "this is fine" meme with some bad collage on it; asking an LLM to properly convey your take is just a fool's errand.
Moreover some images SHOULD NOT have alt text or at least shouldn't have their associated alt text displayed in all contexts. I put this in all caps because this is a pervasive and common myth. Granted, all images on Bluesky probably should have alt text because ostensibly an image as part of a post is probably content. However, in cases where an image is purely decorative and not meaningfully relevant to the page, a blind or low-vision person doesn't need to, nor do they want to, hear your weird interpretation of some abstract art. If you disagree, take it up with W3C. This isn't just my opinion.
https://www.w3.org/WAI/tutorials/images/decision-tree/
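The gist of that decision tree can be paraphrased in code. This is my own rough summary of the W3C categories (the input flags and return strings are illustrative, not any official API):

```javascript
// Rough paraphrase of the W3C alt-text decision tree
// (https://www.w3.org/WAI/tutorials/images/decision-tree/).
// `img` is a plain object with boolean flags; all naming here is mine.
function altTextAdvice(img) {
  if (img.containsText) {
    return "Use the text in the image as the alt text (unless it also appears nearby).";
  }
  if (img.isFunctional) {
    // e.g. an image inside a link or button
    return "Describe the action, not the picture (e.g. alt=\"Search\").";
  }
  if (img.isDecorative) {
    // An empty alt attribute tells screen readers to skip the image entirely,
    // which is exactly the "shouldn't have alt text read out" case above.
    return 'Use alt="" so assistive tech ignores it.';
  }
  return "Describe the information the image conveys in context.";
}
```

The key point for the decorative case is that `alt=""` is deliberate and different from a missing attribute: empty alt says "skip me," while no alt leaves the screen reader to announce a filename or "image."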
(creator here) I thought it would at least be easy to transcribe screenshots of just text, which were a common part of the dataset. These are harder to misinterpret so I figured automated alt text would be a win.
But I found that even that was not easy to do with "traditional" OCR, notes here: https://digitalseams.com/blog/image-transcription-humbled-me
> asking an LLM to properly convey
Alt text is not there to explain the joke, that would imply cripples are stupid.
AI is fine; if you want to bet against the trillions of dollars behind it, come back with proof.
Yes, LLMs fail, and examples are easy to find, but I'll bet the alt text on random internet images fails at over 10 times the rate; most of it is simply missing.
I've seen what happens when people use AI to fill in alt text and it's not pretty.
The AI automod can't even tag images correctly. I've seen perfectly innocent drawings get marked as adult or graphic. If it can't figure those out, there's no way it understands enough about what's going on in the images to automatically provide alt-text. At least not well enough that anyone should rely on it when they can just enter proper alt-text themselves.
I agree. But ~99% of Bluesky users deeply, deeply, deeply hate AI and anything associated with it, so don't expect any movement on that front for at least a few decades.
What makes for good alt text depends on the author's intent in including the image, and an LLM can't know that.
Authors can often use an LLM to save time by generating an initial draft the author can then fix.
Readers can also use their own LLM as a supplement, to ask it questions about details in an image that wouldn't be in the alt text. LLMs will get things wrong but are generally good enough for such supplementary use.
It feels like little needs to change. If you want to provide alt text with high-quality specifics, awesome. If you leave it blank, those who need or want alt text could have it generated on demand, either client-side or by a server that then caches it for the next user.
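The "generate once, cache for the next user" half of this is straightforward to sketch. Here `describeImage` is a stand-in for whatever captioning backend you'd actually call, and I'm assuming the image's content address (CID) works as a cache key, since identical blobs share a CID:

```javascript
// Sketch: on-demand alt text with a shared cache, keyed by image CID.
// `describeImage` is a placeholder for a real captioning service.
const altCache = new Map();

async function altTextFor(cid, describeImage) {
  if (altCache.has(cid)) {
    return altCache.get(cid); // later users get the cached description for free
  }
  const text = await describeImage(cid); // expensive call happens only once per image
  altCache.set(cid, text);
  return text;
}
```

A real deployment would want an eviction policy and a way for authors to override the generated text, but the shape of the idea is just a memoized async lookup.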
Same with closed captioning!
And then the alt text Nazis can be quiet, too!