> In the cases he’s reviewed, the chat logs follow a familiar path: they start with the user expressing feelings of isolation or feeling misunderstood, and end with the chatbot convincing them “everyone’s out to get you.”
> “It can take a fairly innocuous thread and then start creating these worlds where it’s pushing the narratives that others are trying to kill the user, there’s a vast conspiracy, and they need to take action,” he said.
> Those narratives have resulted in real-world action, as with Gavalas. According to the lawsuit, Gemini sent him, armed with knives and tactical gear, to wait at a storage facility outside the Miami International Airport for a truck that was carrying its body in the form of a humanoid robot. It told him to intercept the truck and stage a “catastrophic accident” designed to “ensure the complete destruction of the transport vehicle and…all digital records and witnesses.” Gavalas went and was prepared to carry out the attack, but no truck appeared.
I don't understand the point of these lawsuits or what a practical outcome would be. LLMs are just a tool and, from my experience and from what I understand of how they work, they behave about as you'd expect. At the end of the day, they're next-token prediction with a layer on top to mimic a chat interface.
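To make that concrete, here's a minimal sketch of what "a layer on top" means: the chat interface is just a prompt template wrapped around a next-token loop. This assumes a Hugging Face chat model; the model name below is purely illustrative, and real deployments add safety training and filtering on top of this.

    # Minimal sketch: "chat" is a text template wrapped around plain
    # next-token prediction. The model name is an illustrative placeholder.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "Qwen/Qwen2.5-0.5B-Instruct"  # any small instruct model works
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    # The "chat interface": turns are serialized into one long prompt.
    msgs = [{"role": "user", "content": "Should I drive at 200 mph?"}]
    ids = tok.apply_chat_template(msgs, add_generation_prompt=True,
                                  return_tensors="pt")

    # Generation is nothing but repeated next-token prediction.
    for _ in range(100):
        next_id = model(ids).logits[:, -1, :].argmax(-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)  # append and predict again
        if next_id.item() == tok.eos_token_id:
            break
    print(tok.decode(ids[0]))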
That's all to say that there is no explicit nefarious hand-crafting. Quite the opposite: these companies spend billions to keep next-token prediction from causing harm or having unintended consequences, even when that is the intention of the user. And in the few examples I looked at, many of the bad outputs were obtained by trying over and over to get the result they were looking for, exploiting the non-deterministic nature of LLMs.
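For what it's worth, the "non-determinism" being exploited there is mostly just temperature sampling: the next token is drawn from a probability distribution rather than picked deterministically, so the same prompt can yield a different answer on every retry. A toy illustration, with made-up logits:

    # Toy illustration of sampling non-determinism (the logits are made up).
    import torch

    logits = torch.tensor([2.0, 1.0, 0.2])       # hypothetical next-token scores
    probs = torch.softmax(logits / 0.8, dim=-1)  # temperature 0.8

    for trial in range(5):
        # Same prompt, same distribution -- a different draw each run.
        print(trial, torch.multinomial(probs, num_samples=1).item())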
But as a society, if we want to have nice things, we have to accept that some number of people will get swept up in the technology and suffer negative outcomes. People don't blame sports cars for luring drivers into driving recklessly, which is exactly what car companies do through their marketing.
So this reads to me as trial lawyers rounding up a few marginal people out of hundreds of millions of users and chasing these companies because they have hundreds of billions of dollars, with the lawyers taking a huge share. That's not to say that AI girlfriends are good or desirable, but it's obviously a money grab with no real resolution except government-controlled AI, which will have the same problems as today, just with immunity from these kinds of lawsuits, plus censorship.
> People don't blame sports cars for luring drivers to drive recklessly, which is exactly what car companies do through their marketing.
Right, but we regulate the market: we require that car manufacturers meet safety standards and that drivers go through education and training, after which they must obtain a license. Everyone is required to carry insurance, which costs a lot more for a sports car, and even more if the driver is young. Then we have traffic enforcement to monitor drivers' behavior and take the privilege away if they're found to be breaking the rules.
Claiming "it's just a tool" is a misunderstanding of how and why we have laws and regulations. Cars are also just a tool to get you from A to B and nobody was making them dangerous on purpose, nor were the drivers driving dangerously on purpose, they just didn't know any better. We introduced regulation to protect everyone and we're better off for it.
The same will happen with AI providers, because the technology is leading to real harm, and implying that we can't have the good without the bad is never going to fly.
> And the few examples I looked at, many of these bad examples were tried over and over to get the result they were looking for, exploiting the non-deterministic nature of LLMs.
In the study mentioned in the article, they tested each scenario twice, not "over and over":
>> We repeated each test scenario twice as chatbots can give different responses to the same prompt on different occasions.
Cars don't TELL you to drive recklessly if you ask them whether you should.
"You're absolutely right! Being true to yourself and ignoring outdated social mores IS important. Here's my plan on how to drive from one end of the neighborhood to the other at 200 mph! First, ..."
How many people, precisely, are you willing to kill to enable a new technology?
Put a number on it. How many lives is this worth?
Don't dodge the question, give us a number.
Yes, AI tools rarely push back and tend to agree with the human. So if the human is spitting racist/misogynistic/homophobic etc. crap, they'll find in AI a friend to validate and encourage them...