I bet shareholders of SpaceX are thrilled to be exposed to this for no reason
SpaceX owners do not care. If they were risk-averse, they would have dumped SpaceX like it was toxic waste.
In a broader sense, this "I bet Oracle shareholders hate their bad PR" attitude is really zero-sum. It's pervasive on HN, and we rarely see bad PR snowball beyond niche discussions. I want $BIGCORP to collapse as much as the next guy, but the outrage-driven comments don't seem to reflect the market's response.
You would lose this bet.
I'm kind of curious what precedent this will set. It's been pretty easy to create deepfake sexual content for years now, though the Grok thing is absurdly easy on another level. Generating full-blown sexual content is, I think, substantially more difficult and almost falls into the realm of hacking; that being said, it is of course possible. But it makes me wonder where the line lives for something like this. The people who did this are obviously scum and deserve to be punished, but by that argument the facebook/instagram/whatsapp group chats sharing leaked sex tapes have probably done far more damage.
[flagged]
I don't expect this kind of dramatic internet-forum-style false equivalence on HN.
Argue your opinion on its own terms, rather than simply pointing to something else and saying "they're the same".
To me, both companies, Adobe and xAI, have products for digital image creation and manipulation. I phrased my comment in my own terms; I didn't copy it from anywhere else or use AI to come up with it. I could have added more detail on each of these products and how they are similar, but I think that would take away from the message of my post and make it harder to digest than boiling the essence of my argument down to a single sentence. I expected the intended audience for my post to see for themselves how the products were equivalent, but it turned out there were audience members who don't understand how Grok is used.
does photoshop have a “make porn of this person” button? does it have a “make me csam” button?
Nor did Grok. You had to prompt what you wanted to have happen.
Typing a description of an image is comparable to finding a button in a menu in terms of ease and enablement. The real difference is that Grok explicitly decided not to add even simple (if imperfect) guards, which would have imposed no real burden on their part, and that is morally and spiritually hideous.
I hope they experience consequences.
It really is not the same at all. One comes purely from the user's mind; the other is an explicit part of the product. Photoshop could add simple guards too, but users do not appreciate that kind of authoritarianism.
>I hope they experience consequences.
I hope people who want a censored model can use one and people who want uncensored models can use uncensored ones.
And it had blatantly insufficient safeguards against that. I saw photos of 4th graders, labeled accordingly, that Grok happily modified. Given that the model has access to the text content of the posts it's replying to, that's wildly negligent. This is not a simple brush tool we're talking about, and refusing requests to strip clothes off people is one of the most basic measures that could be taken here.
It's called the "brush" tool.
Do you really think that when people make things that have risks associated with the use or misuse of those tools that they have no responsibility to mitigate those risks or prevent misuse?
Yes. I do not believe tool creators should limit or censor their users. I do not think a word processor should ban you for writing something bad about the government or against a policy the company does not like. I do not think a paint program should ban a user for drawing a penis because of the brand damage that could result if people knew they used the company's program for that. I don't believe web browsers should prevent users from committing copyright infringement when they see you visiting a site known to host pirated material. I don't believe your operating system should lock you out and delete all of your files if it detects that you might be developing malware.
I think it gives too much control to businesses that lack near-exact market replacements, and lets them dictate too much of culture.
There's nothing stopping any tool maker from doing those things. In fact, they do those things all the time! So if tool makers and tools are all limiting use in self-serving ways anyway, why should we not also expect them to limit use in ways that protect children from sexual exploitation through those tools? I respect the principle, but I think it's an idealistic extreme not really grounded in practicality or realism.
It may be legal to add such features, but that doesn't mean I think it is a good thing. This is a modern problem: shovel manufacturers never had power over what you could use shovels for, so the idea of making laws around it made no sense. It's possible to end up in a world where, to print a political flyer, you have to find a politically aligned operating system, install a politically aligned web browser, go to a politically aligned web retailer who will sell you a politically aligned printer, and pay through a politically aligned payment processor with politically aligned banks. And if no one aligned with you offers one of those services, I guess you are just out of luck. This example isn't even touching on the point that AI filters are not perfect and will flag false positives.
>they do those things all the time
They actually don't. It's highly irregular; such censorship functionality usually only gets included in products when the law requires it.
>idealistic extreme
It is not an extreme position. Practically every tool other than AI is not locked down. AI is by far the exception here, and a step back from the freedom of everything else.
one time i mentioned peter thiel and dang immediately responded scolding me for not engaging in constructive debate.
Adobe doesn't provide tools explicitly designed to enable the creation of child pornography — in fact their tools try to prevent its creation — and they don't profit from the sale of it. But, of course, Musk fanboys can be reliably counted upon to support profiteering from child sexual abuse in any form.
"Something you don't like" as a description for the deliberate sexualization of children for profit, as if it's not an objective moral harm, is telling on yourself here. Just because the loudest leaders in Silicon Valley have been trying to convince every one of their sycophants that sexually abusing kids is no big deal doesn't mean the rest of us who are normal have bought into it.
Reducing safety filters doesn't mean that it's explicitly designed for child pornography. This is like thinking free speech is designed for child pornography.
You seem to be leaning heavily on analogy, which is inherently flawed. The entire point of analogy is that you are comparing two different things without actually comparing them - just declaring them equal. It's a weak rhetorical tool for petty arguments.
I am leaning on analogy as a strategy to ground others' thinking about this article, since I believe they do not universally hold the idea that tools should micromanage what people are allowed to do with them. I am assuming that readers can understand how making images via traditional digital tools and via AI tools is the same thing. If I just wanted to share my own view, I would go on about how it is wrong to build deliberate censorship features into tools, and how letting British people force American companies to censor things is wrong.
It's more like emailing Google CSAM and then suing Google for emailing it back.
Or, in the near future: "Why are we suing the robot company?! Bob told the robot to kill the child!"