It seems X's Grok has become the first large LLM provider to weaken its content moderation rules. If people don't react strongly enough, we will likely lose the first line of defense for keeping AI safe for everyone. Large providers need to act responsibly, as the barrier to entry is practically zero.
True, CSAM should be blocked by all means. That's clear as day.
However, I think for Europe the regular sexual content moderation (even in text chat) is way over the top. I know the US is very prudish, but here most people aren't.
If you mention something erotic to a mainstream AI, it will immediately shut down, which is super annoying because it blocks using it for such discussion topics. It feels a bit like foreign morals are being forced upon us.
Limits on topics that aren't illegal should be selectable by the user, not hard-baked to the most restrictive standards, similar to the way I can switch off SafeSearch in Google.
However, CSAM generation should obviously be blocked, and it's very illegal here too.
This is already possible: just download an open-weight model and run it locally. It seems absurd to me to enforce content rules on AI services, and even more so that people on Hacker News advocate for that.
Why does that seem absurd to you?
Don't feed the troll
Safety isn't just implemented via system prompts; it's also a matter of training and fine-tuning, so what you're saying is incorrect.
If you think people here believe that models should enable CSAM, you're out of your mind. There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
More broadly, if you don't reasonably regulate your own models and related work, then it attracts government regulation.
I’ve run into “safeguards” far more frequently than I’ve actually tried to go outside the bounds of the acceptable use policy. For example, when I was attempting to use ChatGPT to translate a journal that was handwritten in Russian that contained descriptions of violent acts. I wasn’t generating violent content, much less advocating it - I was trying to understand something someone who had already committed a violent act had written.
> If you think people here believe that models should enable CSAM, you're out of your mind.
Intentional creation of “virtual” CSAM should be prosecuted aggressively. Note that that’s not the same thing as “models capable of producing CSAM”. I very much draw the line in terms of intent and/or result, not capability.
> There is such a thing as reasonable safety; it's not all or nothing. You also don't understand the diversity of opinion here.
I agree, but believe we are quite far away from “reasonable safety”, and far away from “reasonable safeguards”. I can get GPT-5 to try to talk me into committing suicide more easily than I can get it to translate objectionable text written in a language I don’t know.
When these models are fine-tuned to allow any kind of nudity, I would guess they can also be used to generate nude images of children. There is a level of generalization in these models. So it seems to me that arguing for restrictions that could only be effectively implemented via prompt validation is just an indirect argument against open-weight models.
Also see: https://timesofindia.indiatimes.com/technology/tech-news/it-...
“AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”
Not possible.
> Not possible.
To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."
Also… I think they probably could solve this. AI image analysis is a thing; AI that estimates age from an image has been a thing for ages. The idea of throwing the entire internet's worth of images at a training run just to make a single "allowed/forbidden" filter isn't even ridiculous compared to the scale of all the other things going on right now.
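For illustration only, a minimal sketch of what such an output-side gate could look like. Everything here is hypothetical (the classifier, the threshold, the names); it's a sketch of the idea, not any provider's actual pipeline:

    from typing import Callable, Optional

    def moderated_generate(
        generate: Callable[[str], bytes],      # the image model itself
        score_risk: Callable[[bytes], float],  # assumed safety classifier, returns 0.0-1.0
        prompt: str,
        block_threshold: float = 0.01,         # made-up policy threshold
    ) -> Optional[bytes]:
        image = generate(prompt)
        # Gate on the finished image, not the prompt: a jailbroken prompt
        # still can't get a flagged image past this check.
        if score_risk(image) >= block_threshold:
            return None  # blocked; a real service would also log/report it
        return image

The point being that the filter doesn't have to live inside the generator; it can be a completely separate model judging the finished image.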
>To which governments, courts, and populations likely respond "We don't care if you can't go to market. We don't want models that do this. Solve it or don't offer your services here."
No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.
If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.
We'll attempt it, of course, but the limits of what the law deems acceptable will be entirely defined by what is necessary for AI to succeed, because at this point it must. There's no turning back.
> No, they likely won't. AI has become far too big to fail at this point. So much money has been invested in it that speculation on AI alone is holding back a global economic collapse. Governments and companies have invested in AI so deeply that all failure modes have become existential.
Not in Europe it hasn't, and definitely not for image generation specifically, where it seems to be filling the same role as clip art, stock photos, and style transfer that can be done in other ways.
Image editing is the latest hotness in GenAI image models, but knowledge of this doesn't seem to have percolated very far through the economy; so far it mostly shows up in weird toys like this one, currently causing drama.
> If models can't be contained, controlled or properly regulated then they simply won't be contained, controlled or properly regulated.
I wish I could've shown this kind of message to people 3.5 years ago, or even 2 years ago, when they were saying that AI will never take over because we can always just switch it off.
Mind you, 2 years ago I did, and they still didn't like it.
I'm sorry to tell you this, but the EU has already been lost.
Because we're not on the forefront of AI development? It also means we have less to lose when the bubble blows. I'm quite happy with the policies here. And we will become more independent from US tech. It'll just take time.
>No, they likely won't. AI has become far too big to fail at this point.
Things that cannot happen will not happen. "AI" (aka LLMs dressed up as AGI by giga-scale scammers) is never going to work as hyped. What I expect to see in the collision is an attempt to leverage corporate fear and greed into wealth-extractive social control. Hopefully it burns to the ground.
> AI has become far too big to fail at this point.
This might be true for the glorified search engine type of AI that everyone is familiar with, but not for image generation. It's a novelty at best, something people try a couple times and then forget about.
Every industry that uses images and art in any way - entertainment, publishing, science, advertising, you name it - is already investing in image and video generation. If any business in these fields isn't already exclusively using generative models for their content, I promise you they're working on it as aggressively as they can afford to.
Grok is a novelty, but that's Grok.
Meh, I don't buy it. People dislike AI-generated images and art more than they dislike AI-generated, well, anything. AI images adorning an article, blog post, announcement, or product listing are the hallmark of a cheap, bottom-of-the-barrel product these days, if not an outright scam.
These models generate probably a billion images a day. If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models. That may precisely be the point of this tbh.
> These models generate probably a billion images a day.
Collectively, probably more. Grok? Not unless you count each frame of a video, I think.
> If getting it wrong for even one of those images is enough to get the entire model banned then it probably isn't possible and this de facto outlaws all image models.
If the threshold is one in a billion… well, the risk is from adversarial outcomes, so you can't just toss a billion attempts at it and see what pops out. But a billion images isn't much: if it's anything like Stable Diffusion you can stop early, and my experiments with SD suggested the energy cost even for a full generation is only $0.0001/image*, so a billion is merely $100k.
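Spelling out that multiplication: 10^9 images × $10^-4/image = $10^5, i.e. $100k.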
Given the current limits of GenAI tools, simply not including unclothed or scantily clad people in the training set would prevent this. I mean, I guess you could leave topless bodybuilders in there; then all these pics would look like Arnold Schwarzenegger, and almost everyone would laugh and not care.
> That may precisely be the point of this tbh.
Perhaps. But if that were the goal, I don't think this excuse would be needed, and for other reasons besides, I am not convinced that is the goal in the EU.
* https://benwheatley.github.io/blog/2022/10/09-19.33.04.html
If they can't prevent child porn, then it should be banned.
Should Photoshop be outlawed? What about MS Paint? I'm pretty sure both of them are capable of creating this stuff.
Also, let's test your commitment to consistency on this matter. In most jurisdictions, possession and creation of CSAM is a strict liability crime, so do you support prosecuting whatever journalist demonstrated this capability to the maximum extent of the law? Or are you only in favor of protecting children when it happens to advance other priorities of yours?
Photoshop is fine; running a business where you produce CSAM for people with Photoshop is not. And this has been very clear for a while now.
I did not see the details of what happened, but if someone did in fact take a photo of a real child they had no connection to and caused the images to be created, then yes, they should be investigated, and if the prosecutor thinks they can get a conviction they should be charged.
That is just what the law says today (AIUI), and is consistent with how it has been applied.
> Photoshop is fine; running a business where you produce CSAM for people with Photoshop is not. And this has been very clear for a while now.
What if Photoshop is provided as a web service? That is analogous to running image generation as a service: in both cases the provider takes input from the user (in one case a textual description, in the other a sequence of mouse events) and generates an image with an automated process, without specific intentional human input from the provider.
Note that in this case using the service to produce CSAM was against the terms of service, so the business was tricked into producing CSAM.
And there are other automated services that could be used for CSAM generation, for example automated photo booths. Should their operators be held liable if someone uses them to produce CSAM?
Somehow I doubt the prosecutor will apply the same standard to the other image generation models, which I bet (obviously without evidence, given the nature of this discussion) can be convinced by a motivated adversary to do the same thing at least once. But alas, selective prosecution is the foundation of political power in the West, and pointing that out gets you nothing but downvotes. As patio11 once put it, pointing out how power is exercised is the first thing that those who wield power prohibit once they gain it.
You often see (appropriately, IMO) a certain amount of discretion wrt prosecution when things are changing quickly.
I doubt anyone will go to jail over this. What (I think) should happen is that state or federal law enforcement needs to make it very clear to xAI (and the others) that this is unacceptable, and that if it keeps happening and you are not showing that you are fixing it (even if that means some degradation in the capability of the system/service), then you will be charged.
One of the strengths of the western legal system that I think is underappreciated by people here is that it is subject to interpretation. Law is not Code. This makes it flexible enough to deal with new situations, and this is (IME) always accompanied by at least a small amount of discretion in enforcement. And in the end, the laws and how they are interpreted and enforced are subject to democratic forces.
When the GP said “not possible” they were referring to the strict letter of the law, as I was, not to your lower standard of “make a good effort to fix it”. Law is not code because that gives the lawgivers discretion to exercise power arbitrarily while convincing the citizens that they live under the “rule of law”. At least the Chinese, for all their faults, don't bother with the pretense.
If you reject the foundation of liberal western civilization I don’t know what to tell you.
Move to China?
I’m just pointing out how the world works in real life, not saying that it is desirable. Thinking in terms of that distinction is very useful.
Even the OP's quote made it clear this isn't the case. Companies need to show they rigorously tested that the model doesn't do this.
It's like cyber insurance requirements - for better or worse, you need to show that you have been audited, not prove you are actually safe.
> “AI products must be tested rigorously before they go to market to ensure they do not have the capability to generate this material,”
> Not possible.
Note that the description of the accusation earlier in the article is:
> The French government accused Grok on Friday of generating “clearly illegal” sexual content on X without people’s consent, flagging the matter as potentially violating the European Union’s Digital Services Act.
It may be impossible to perfectly regulate what content the model can create, but it is quite practical for the Grok product to enforce consent from the user whose content is being operated on before content can be generated based on it, and, after the content is generated, before it can be viewed by or distributed to anyone else.
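As a minimal sketch of what such a consent gate could look like (all names here are hypothetical, and it obviously glosses over identity verification and appeals):

    # consent[(owner, media_id)] holds the accounts that owner has approved
    # to run AI edits on that particular upload (all names hypothetical)
    consent: dict[tuple[str, str], set[str]] = {}

    def may_edit(requester: str, owner: str, media_id: str) -> bool:
        if requester == owner:
            return True  # editing your own upload
        return requester in consent.get((owner, media_id), set())

    def on_edit_request(requester: str, owner: str, media_id: str) -> str:
        # Hold the job until the pictured account has opted in.
        return "generate" if may_edit(requester, owner, media_id) else "ask_owner_first"

Nothing about that requires solving the open problem of controlling model outputs; it's ordinary access control at the product layer.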
Then maybe they shouldn't go to market.
AI is a national defense issue. No nation has the luxury of stopping its AI companies without risking a loss of national sovereignty.
"We have to make the revenge porn machine for national defense" is the sort of thing that makes people light Bay Area tech buses on fire.
> AI is a national defense issue.
AI image editors attached to social media networks, with a design that allows producing AI edits (including, but not limited to, nonconsensual intimate images and child pornography) of other users’ media without consent, are not a national defense issue. And even to the extent that AI arguably is a national defense issue, those particular applications can be curtailed entirely by a nation without any adverse impact on national defense.
You can distort any issue by zooming out to orbital level and ignoring the salient details.
Lumping image-gen models, LLMs, and other forms of recent machine learning together and dressing it all up in the "National Defence" ribbon doesn't seem like a great idea.
I don't think the ability for citizens to make deep fake porn of whoever they want is the same as a country not investing in practical defensive applications of AI.
So child porn is now a national security issue?
I'm sure it's possible. If anything, they can just run an AI check after generation, similar to the way Google makes sure it doesn't return CSAM in its results. If Google can filter that, the AI providers can check their own output too.
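For known material that kind of filtering is typically hash matching rather than a classifier. A toy sketch, using exact SHA-256 where real systems use PhotoDNA-style perceptual hashes that survive re-encoding and resizing (the blocklist source here is an assumption):

    import hashlib

    # Hashes of previously identified illegal images, as supplied by a
    # clearinghouse such as NCMEC; empty placeholder here (assumption).
    BLOCKLIST: set[str] = set()

    def is_known_bad(image_bytes: bytes) -> bool:
        return hashlib.sha256(image_bytes).hexdigest() in BLOCKLIST

Hash lists only catch previously identified images, though; novel generated output would still need a classifier-style check on top.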
Then your business can fairly be ruled illegal.
You don't have the right to act in violation of the law merely because it's the only way to make a buck.
In practice, once a business reaches a size threshold, the law is creatively decided to preserve its existence rather than terminate it. Legality is a function of economics.
> Legality is a function of economics.
Sometimes it is. Sometimes "democracy" isn't just a buzzword.
X.com has been blocked by poorer nations than France (specifically, Brazil) for not following local law.
Until people have had enough and push back
And if you want to change the law to allow the business, go for it. But until then, we must follow the law.
It's extremely possible! As the source article notes, the Grok developers specifically chose to make their AI more permissive of sexual content than their competitors, which won't produce such images. This isn't a scenario where someone developed a complex jailbreak to circumvent Grok's built-in protections.
If it's possible to create a model that generates photorealistic images based on a single line of text, it is 100% possible to restrict the output.
Possible or not, what about starting with a criminal investigation, to force disclosure and find out whether Musk's company had child porn in the training data?
It probably doesn't have pictures of fish driving Cybertrucks in its training data, but it's able to generate those, so I doubt there'd need to be CSAM in the dataset. But maybe I don't know how these things really work.
AI generates child porn, HN downvotes a proposal for an investigation...
It would be Musk automating CSAM. This is how we're starting 2026?
Earlier:
https://news.ycombinator.com/item?id=46460880
https://news.ycombinator.com/item?id=46466099
https://news.ycombinator.com/item?id=46468414