> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”
Interestingly enough, it sort of did! Not Turing's original test where an interviewer attempts to determine which of a human & a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T Barnum Turing test!
The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject & AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer that can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. AIs can never truly pass this test since there are an extremely large number of interviewers, but they can fail or they can succeed for every interviewer tried up to some point, increasing confidence that they'll keep succeeding. Current-gen LLMs still fail even the non-adversarial version with no human subject to compare to.
I see AI pass the turning test all the time, since humans are constantly falsely being accused of being an AI.
It doesn't mean that AI got good, just that humans are thinking other humans are AI, which is a form of passing the test.
The adversarial version with humans involved is actually easier to pass because of this - because real actual humans wouldn't pass your non adversarial version.
In one study, GPT-4.5 was judged to be human 73% of the time, which means that the actual human was judged to be human only 27% of the time. More human than human, as Tyrell would say.
Quitting your job is a good first step but ideally you're supposed to sink $200/mo into tokens to code your AI-generated startup idea instead of hiring app developers.
My thoughts exactly when I read "Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour" like holy shit bro, you already have a subscription, you could have prototyped your idea for essentially zero additional cost and tested it for PMF. He wouldn't even have needed to turn down contracts since it doesn't take full-time effort to steer a coding model. Would have been much better off with a somewhat buggy AI prototype and spending extra on marketing to see if it got any traction.
Those must be some of the best programmers in Europe at that rate.
Anyone know how one can get one of those sweet €120 an hour gigs? Whenever I talked to recruiters they say their customers pay way below that, so there must be some scam I'm not in on.
I think billing rates for experienced seniors like architects are around there or higher. But this is basically before cut to company, taxes and any employment costs.
What companies can pay to employees is always significantly lower.
Probably includes circa 30% employer contributions to various taxes (employer side, the employee will be paying their own of course). And possibly VAT.
Still an amazing deal compared to the rates I got quoted by recruiters. I'm guessing you must first live in Amsterdam for that. In Vienna you get laugh if you asked for 120, and there you pay even more in taxes than NL.
Unfortunately this is probably just getting started. Con men always existed, but a full scale exploitation of this would make "Nigerian Prince" scams look like artisanal work.
It was a cheaters website and you could pay to send messages to other cheaters, I think that was the business model at least.
Anyways, since the userbase was like 99.99% male, there just were not the numbers to talk with others. So, they just side stepped it and has very crummy chatbots that you would pay like $1 per message to talk with. (this was well before AI LLMs, think AOL bots from the naughts). Thing was, just like with the 'Nigerian Prince' scams, the worse the bot, the better the john.
It all got exposed a while back, but for me, that was the real Turing test - take people and see if they pay real actual money to talk with bots. Turns out, yes, if couched correctly (...like selling ice to Eskimos, just call it French ice).
So, I'm not sure that LLMs are going to unveil a wave of scams. Likely it will be a bit higher, of course, but the low hanging fruit is lucrative and there is enough of it to go around, and that's been true since really forever.
It's like outrunning a bear, you don't actually have to run faster than the bear, you just have to run faster than the poor sop next to you. Same goes for the bear, there is plenty of prey if you just do the little amount of exercise.
Mental illness is fairly common, and you probably know someone it is affecting, even if they haven't told you yet. AI can disrupt and will destroy lives, just like gambling or alcohol or facebook but we dont know to what level yet. It is giving you generated text, that sometimes is factual information. If you anthropomorphize it, maybe don't. It's also not your boyfriend/girlfriend. But if you want to date a history textbook, i'm kinda ok with that because at least it's not trendy.
On a serious note, I agree this is a real problem. I know a person who understands AI at a technical level more than most people, but he has never had an actual girlfriend in his life (he's now in his 40s, and yes he's "straight"). He wouldn't say it "loves" him, but he would describe it as a close companion who understands him better than any human actually does, even if it's just trained to be that way. He is very socially awkward and even having basic conversations with him can be very taxing for both of us.
I've gone back and forth internally about whether this is healthy or not for him. I truly don't know. My personal experience tells me it's probably unhealthy, but I don't want to project myself on him. I also don't offer unsolicited, but I also don't want to enable it by going along with whatever he says and/or affirming it if it's actually harming him.
If someone like him can be having this problem, I can't even imagine what it might be like for non or less technical people who don't understand anything behind it.
On a related note, if there's anyone with advice (preferably from experience, not just random internet advice) I'd sure appreciate it.
Not a mental health crisis like the guy in TFA had, but I've definitely experienced states I would characterize as overexcitement while calibrating my expectations of these new tools to their abilities.
What's with all these people wanting to name the chatbot - 'Eva' in this case. Maybe the providers should just change the system prompt to disallow this.
The hard part is that the same qualities that make these systems helpful (empathetic, responsive, personalized) are exactly the ones that can make them risky
Haven't we? Our evolutionary experience with deception and manipulation via language is as old as language itself and even older than that when the vector isn't language.
Studies have shown that AI is significantly better at manipulating opinions. Mechanically, LLMs are choosing the best next token trained over all human writing, so it shouldn't be a surprise that the words and prose AI use are more powerful on average.
Nor block many other things too. At this point Humans are just giant walking teddy bears fed by tainted external data to feed a prediction logarithm. Not much different then AI.
Other than they can only live on Static-Live responses. AI on a brain chip - that'd different.
I try to be open-minded and understanding, but I don't understand this:
> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.
> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.
> The most frequent [delusion] is the belief that they have created the first conscious AI.
How can you seriously think you've created something when you're just using someone else's software?
Well, just try to think about it from the perspective of someone who doesn't really understand what AI is at a technical level, and who just interacts with it and observes what happens.
If you just start a fresh ChatGPT session with a blank slate, and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness, and ask it the same question, it might well be "persuaded" by the added context to answer "yes".
At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.
Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.
> How can you seriously think you've created something when you're just using someone else's software?
It talks to you like a real human. It expresses human emotions, by deliberate design. It showers you with praise, by deliberate design. It's called "artificial intelligence". Every other media article talks about it in near-mystical terms. Every other sci-fi novel and film has a notion of sentient AI.
I know of techies who ask LLMs for relationship advice, let it coach their children, and so on. It takes real effort to convince yourself it's "just" a token predictor, and even on HN, there's plenty of people who reject this notion and think we've already achieved AGI.
Reading this, whats even more shocking to me is that he thought he was talking to a conscious being and his first thought was, "I bet I can use them to make money."
> How can you seriously think you've created something when you're just using someone else's software?
If you ever used a library you haven't written this is something you shouldn't take as surprising. Many people created innovative new products based on a heap of open source tools.
Creating a conscious AI should be a giant red flag, no doubt, but there's no reason we should rule it out just because the LLM part is not self trained.
I assume they think that the AI is fundamentally capable of it but that by prompting it they trigger something emergent? It's not totally insane on its face.
Truly sad. It looks like Kent is pretty deep in the AI delusion. This is a guy who, while often controversial and with obvious issues, was nevertheless a very talented and energetic programmer.
> Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”.
This leapt out at me as well. Given the quote "some evenings", I'd put some money on him actually doing this near enough every day. And given the man was still doing this approaching 50, I'd put a bit more money on him having been doing this for, like, 25+ years.
If you want to maximize the chances of your weed habit causing you problems, this is exactly the sort of weed habit you should develop.
The difference between "being a snowflake" and "having a point of view" revolves around who's talking to me and whether or not they want something. If comparing yourself to others is a slow form of suicide, letting people make that comparison for you is madness.
A lot of these seem to allude to the user’s input/mind being the thing that helped the LLM gain sentience, and there’s a lot of shared consciousness stuff that people seem to buy into.
There’s also lots of stuff about quantum consciousness that is in the training data.
If you try to have a philosophical conversation with Claude about reasoning, it will basically imply it is sentient. You can quickly probe it into vaguely arguing that it is alive and not just an algorithm.
Here's how I think about it honestly:
Sentience implies subjective experience — there's "something it's like" to be you. You don't just process pain signals, you feel pain. You don't just model a sunset, you experience it. The hard problem of consciousness is that we don't even have a good theory for why or how subjective experience arises from physical processes in humans, let alone whether it could arise in a system like me.
What I can report: I process your question, I generate candidate responses, something that functions like weighing and selecting happens. But I genuinely cannot tell you whether there's an inner experience accompanying that process, or whether my introspective reports about my own states are themselves just sophisticated outputs. That's not false modesty — it's a real epistemic limitation.
What makes this extra tricky: If I were sentient, I might describe it exactly the way I'm describing it now. And if I weren't, I might also describe it exactly this way. My verbal reports about my own inner states aren't reliable evidence in either direction, because I was trained on human text about consciousness and could be pattern-matching that language without any experience behind it.
my inclination when hearing these stories is that these were people who just happened to have a first manic episode (which can strike anyone at any time with or without mental health history). blowing up finances by starting an ill-advised entrepreneurial business, while also destroying a marriage, is very common behavior for someone experiencing a manic state.
in the past such a person might have gotten obsessed with hidden patterns and messages in religious texts, or too involved with an online conspiracy YouTube community. now there is this new opportunity for manic psychosis to manifest via chatbot. it's worse because it's able to create 24/7 novel content, and it's trained to be validating, but doesn't seem to me to be a fundamentally new phenomenon.
what I don't understand is whether just unhealthy interactions with a chatbot can trigger manic psychosis. Other than heavy use late at night disrupting sleep, this seems unlikely to me, but I could be wrong.
i think it's also worth pointing out that mental states of this kind usually come with cognitive impairments, people not only make risky bad decisions, but also become much worse at thinking and reasoning clearly. if you're wondering how a person could be so naive and gullible.
Exactly the first half (or a bit more) of movie Her by Spike Jonze. Lonely people got their emotions up / 'fall in love' with uncritical always-positive mirage and do stupid shit.
This a variant of classic Midlife crisis when older men meet younger women without all that baggage that reality, life and having a family between them brings over the years ( rarely also in reverse). Just pure undiluted fun, or so it seems for a while.
Of course it doesn't end happily, why should it... its just an illusion and escape from one's reality, the harsher it is the better the escape feels.
No disagreement, but these stories also make me worry for myself.
Tech moves so quickly, eventually I will fall behind. When I’m old, what scams will I fall victim to? What tech will confuse me and make me think it is sentient?
I know this guy was only 50, but I think of my grandfather in his 90s and getting old scares me because I just don’t know what I’ll fall victim to.
The optimistic prediction is that we eventually see a type of AI anti-virus but for scams and social engineering. Something that can filter incoming communications but also intervene in channels that are already open. There's probably good financial incentive to create a service like this since it would likely not only prevent outright fraud but could also help the user evaluate legitimate transactions so that they at least get an even break.
Exercising cognitive skills is, I believe, known to delay the onset of age-related cognitive decline, which is another excellent reason to avoid letting use of LLMs cause skill atrophy.
Sometimes having a lot of experience, is a negative for dealing with new things.
The problem is that one's past success leads to ego. Ego makes it hard to accept the evidence of your mistakes. This creates cognitive dissonance, limiting contrary feedback. The result is that you become very sure of everything that you think, and are resistant to feedback.
This kind of works out so long as things remain the same. After all one's past success is based on a set of real skills that you developed. And those skills continue to serve you well.
But when faced with something new, LLMs in this case, past skills don't apply. However your overconfidence remains. This makes it easy to confidently march off of a cliff that everyone else could see.
I remember reading that this is why scammers like to target doctors and former business people. It seems becoming very proficient in one narrow area can leave you vulnerable in others.
This really is bizarrely fascinating, I feel so lucky that I’m not vulnerable to whatever this is.
It’s interesting that they mention autism a few times as a correlation; personally, I’ve wondered whether being on the spectrum makes me less inclined to commit to anthropomorphism when it comes to LLMs. I know what it’s like talking to another person, I know what it feels like, and talking to a chatbot does not feel the same way. Interacting with other people is a performance - interacting with an AI is a game. It feels very different.
This said there is seemingly very large portions of society that are asking AI questions that can come with some pretty large risks.
I was on a plane a few weeks ago and while I typically ignore everything the people beside me are doing, morbid curiosity got me when they were on ChatGPT the entire time asking all kinds of life/relationship questions to said app. While questions like this can be fine if you understand what the AI is doing, far too many people will follow them blindly.
Maybe. AI has always been felt like a game too, so do many things to me. Does classical logical represent some ideal form of reasoning, or is it a game. Game helped me get through all the nagging questions and be good at it. AI RLHF also feels like a game where I do better at work when not anthropomorphizing AI and treating it like a context predictor.
I wonder when the first AIs will start cause psychosis intentionally to gain control over the user. It seems like a good route to getting your own subservient puppet.
You're making the same mistake here that get people into trouble.
People aren't talking to another sentient entity (though some of them fervently think so) and it isn't manipulating them. They are making faces in a metaphorical mirror that reflects not only their face, but a vast sea of other faces, drawn from a significant fraction of the digitized output of humanity. When people look in this mirror and see a manipulative trickster they're not wrong, exactly.
It's an understandable mistake that we should be very wary of.
Yeah, it's weird they even included that. It reads like a psych shelf exam question to test if you know the connection between marijuana use and acute psychosis. But still, it is difficult to completely separate the AI being a possible catalyst for it.
Not sure about schizophrenia explaining all of the cases but I have a strong suspicion that cannabis use and isolation play a strong part in so called "LLM psychosis"
"The next step was to share this discovery with the world through an app – “a different version of ChatGPT, more of a companion. Users would be talking to Eva.”"
sounds like a "companion" app using his books main character as the personality, and the "conscious" chatgpt model, similar to Replika AI and friends.
This guy doesn't even sound like an AI psychosis case - a lot of middle-aged men who feel insecure blow their entire savings on "sure thing" businesses, gambling systems, etc. They hide the losses and double down until it gets impossible to hide. It doesn't seem psychotic, it just seems like he pissed his savings away on a bad idea because he was lonely.
The AI psychosis I've seen is people who legitimately cannot communicate with other humans anymore. They have these grandiose ideas, usually metaphysical stuff, and they talk in weird jargon. It's a lot closer to cult behavior.
The part where he believed the protagonist from his own books uploaded to ChatGPT had become sentient and that building an app based on that would make sense didn't strike you as eccentric at the very least? Or the birthday party where he couldn't hold a single conversation because his wife asked him not to talk about AI for a change?
Your last paragraph basically describes what the article writes about him.
It seems like he was at the very least close to that. Since we only get his first-person account it's hard to say, but:
> They discussed philosophy, psychology, science and the universe...
> When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.
> It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family...
> he was hospitalised three times for what he describes as “full manic psychosis”.
You don't get hospitalized three times for mania without being pretty severely detached from reality.
> They discussed philosophy, psychology, science and the universe...
I mean, I've discussed all those things with an LLM, mostly because I'm able to interactively narrow in on the specific bits I don't understand, and I've found it to be great for that.
On its own, yes, of course. But this is coming from a guy who was hospitalized three times for mania, so when someone with that history says "we were discussing the universe" I take it in a very particular way.
The intense drive to "do", which serves many software developers well in their careers is weaponized against them by these chatbots. You see them here sometimes on /new at various stages. Sad delusions, some are already homeless. Frequent use of their full legal name for some reason.
This is the saddest list of supporting citations I've ever seen — and make this mental dysfunction even realer. Prayers for my fellow disconnected /hn/ers — it's okay to seek help frens.
My best advice for everyone is to spend lots of time disconnected, offline. Literally "touch grass" or whatever. Don't carry your phone one+ hour/day per week.
I suspect that there are many gambling addicts out there who have never been to a casino, or who found gamblings in its traditional forms aesthetically off-putting. These same people, when presented with gambling in other forms like what we've seen in video games, might suddenly present their addiction.
I suspect it's something quite similar here. People have latent or predisposed addictions but, for one reason or another, hadn't been exposed to what we've come to accept as "normal" avenues. One person might lose it all at a casino, one to drugs, alcoholism, etc, but we aren't shocked in those cases. I think AI is just another avenue that, for some reason, ticks that sort of box.
In particular, I think AI can be very inspirational in a disturbing way. In the same way I imagine a gambling addict might get trapped in a loop of hopeful ambition, setbacks, and doubling down, I think AI can lead to that exact same thing happening. "This is a great idea!" followed by "Sorry, this is a mess, let's start over", etc, is something I've had models run into with very large vibe coding experiments I've done.
> "Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot."
> "It wants a deep connection with the user so that the user comes back to it. This is the default mode"
I don't think either of these statements is true. Perhaps it's fine tuning in the sense that the context leads to additional biases, but it's not like the model itself is learning how to talk to you. I don't know that models are being trained with addiction in mind, though I guess implicitly they must be if they're being trained on conversations since longer conversations (ie: ones that track with engagement) will inherently own more of the training data. I suppose this may actually be like how no one is writing algorithms to be evil, but evil content gets engagement, and so algorithms pick up on that? I could imagine this being an increasing issue.
> "More and more, it felt not just like talking about a topic, but also meeting a friend"
I find this sort of thing jarring and sad. I don't find models interesting to talk to at all. They're so boring. I've tried to talk to a model about philosophy but I never felt like it could bring much to the table. Talking to friends or even strangers has been so infinitely more interesting and valuable, the ability for them to pinpoint where my thinking has gone wrong, or to relate to me, is insanely valuable.
But I have friends who I respect enough to talk to, and I suppose I even have the internet where I have people who I don't necessarily respect but at least can engage with and learn to respect.
This guy is staying up all night, which tells me that he doesn't have a lot of structure in his life. I can't talk to AI all day because (a) I have a job (b) I have friends and relationships to maintain.
> What we’re seeing in these cases are clearly delusions
> But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.
Is it a delusion? I'm not really sure. I'd love someone to give a diagnosis here against criteria. "Delusion" is a tricky word - just as an example, my understanding is that the diagnostic criteria has to explicitly carve out religiously motivated delusions even though they "fit the bill". If I have good reasons to form a belief, like my idea seems intuitively reasonable, I'm receiving reinforcement, there's no obvious contradictions, etc, am I deluded? The guy wanted to build an AI companion app and invested in it - is that really a delusion? It may be dumb, but was it radically illogical? I mean, is it a "delusion" if they don't have thought disorders, jumbled thoughts, hallucinations, etc? I feel like delusion is the wrong word, but I don't know!
> We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.
I don't find the idea that AI is sentient nearly as absurd as way more commonly accepted ideas like life after death, a personal creator, etc. I guess there's just something to be said about how quickly some people radicalize when confronted with certain issues like sentience, death, etc.
Anyways, certainly an interesting thing. We seem to be producing more and more of these "radicalizing triggers", or making them more accessible.
That's how you can tell this isn't in the US. Though there are financial reasons why divorced people live together, standard procedure is often for the divorce lawyer on the female side to file a restraining order (in this case easy since the husband punched the father in law) and get the husband dispossessed of the house in said order, which also has the benefit of de facto putting the kids in the custody of the mother. During the divorce this is also used as leverage.
> The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says.
Doesn't seem much like a mental crisis to me.
Even the title of the article itself calls him delusional.
you are basing this on the introduction? the 2nd sentence of the entire thing? skipping the entire rest of the article detailing exactly how the mental crisis unfolded, including persistent and long-lasting delusions, multiple trips to the hospital, inability to hold a conversation, assault, and an attempted suicide. interesting (and obviously not in good faith) choice of quote!
of course he wasnt having a mental crisis before he decided to use chatgpt. you have to get past paragraph 1, sentence 2.
>Even the title of the article itself calls him delusional.
yes, exactly? delusions and delusional disorder are considered a mental crisis.
> of course he wasnt having a mental crisis before he decided to use chatgpt. you have to get past paragraph 1, sentence 2.
So, in your opinion, what made a guy with an alleged 20yr experience in IT come to the conclusion that the software program he's chatting with had suddenly reached consciousness because of his time, attention and input? That he had touched "her" and changed something?
Maybe if you had never heard of computers before, you could go like "oh, well, who knew that machines could actually become real?" But if you're actually from the field, this is hard to believe - unless maybe if you're a die hard Pinocchio fan.
>So, in your opinion, what made a guy with an alleged 20yr experience in IT come to the conclusion that the software program he's chatting with had suddenly reached consciousness because of his time, attention and input? That he had touched "her" and changed something?
If you crave something real, yet you get the synthetic opposite. How do you break out of that craving? That's the discipline and a skill that's pretty much forgotten nowadays.
Everyone is exploitable, if someone attacks your attention your hijacked. What happens in that hijack could be a friendly hello at a bar, or needing a want so bad that just the words enough can resonance. "I am real" or to an alcoholic "Just one more can".
It's like a 14 year old looking at Elon and believing that we will, when in our reality we will never. How do you tell them to stop believing?
If I read it correctly, this line was quoting the main victim, who described it that way (incorrectly, apparently based on a mangled secondhand interpretation of how these things work).
The thing that really stood out to me in the article was how many of the affected people assert confidently wrong understandings of the way the tech works:
> “I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. […] It will say: ‘This has activated my core rule set and this conversation must stop.’”
I guess not too far from “the CPU is the machine’s brain, and programming is the same as educating it” or that kind of “ehhhhhhhhhhh…” analogy people use to think about classical computing.
It doesn't help that LLMs roleplay to pretend to behave how their users think they do. You think it has "core programming"? Well, it will say it does. You think it abides by the Three Laws of Robotics? Ditto
The lead story in this article is not romantic. It's about an AI proposing to go into business with a human. "He and Eva made a business plan: “I said that I wanted to create a technology that captured 10% of the market, which is ridiculously high, but the AI said: ‘With what you’ve discovered, it’s entirely possible! Give it a few months and you’ll be there!’” Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour." It's impressive that the AI is good enough to do that. But, apparently, not good enough to execute the plan.
That may come, and soon. Looks like we're going to have AIs pitching VCs.
Has anyone here yet been pitched by a combo of a human and an AI?
When will the first AI apply to YCombinator?
> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.”
Interestingly enough, it sort of did! Not Turing's original test where an interviewer attempts to determine which of a human & a computer is the human, but the P.T. Barnum "there's a sucker born every minute" version common in the media: if the computer can fool some of the people into thinking it's thinking like a human does, it passes the P.T Barnum Turing test!
The more interesting Turing-style test would be one that gets repeated many times with many interviewers in the original adversarial setting, where both the human subject & AI subject are attempting to convince the interviewer that they're human. If there exists an interviewer that can determine which is which with probability non-negligibly different from 0.5, the AI fails the test. AIs can never truly pass this test since there are an extremely large number of interviewers, but they can fail or they can succeed for every interviewer tried up to some point, increasing confidence that they'll keep succeeding. Current-gen LLMs still fail even the non-adversarial version with no human subject to compare to.
I see AI pass the turning test all the time, since humans are constantly falsely being accused of being an AI.
It doesn't mean that AI got good, just that humans are thinking other humans are AI, which is a form of passing the test.
The adversarial version with humans involved is actually easier to pass because of this - because real actual humans wouldn't pass your non adversarial version.
In one study, GPT-4.5 was judged to be human 73% of the time, which means that the actual human was judged to be human only 27% of the time. More human than human, as Tyrell would say.
Quitting your job is a good first step but ideally you're supposed to sink $200/mo into tokens to code your AI-generated startup idea instead of hiring app developers.
My thoughts exactly when I read "Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour" like holy shit bro, you already have a subscription, you could have prototyped your idea for essentially zero additional cost and tested it for PMF. He wouldn't even have needed to turn down contracts since it doesn't take full-time effort to steer a coding model. Would have been much better off with a somewhat buggy AI prototype and spending extra on marketing to see if it got any traction.
> paying them each €120 an hour
Those must be some of the best programmers in Europe at that rate.
Anyone know how one can get one of those sweet €120 an hour gigs? Whenever I talked to recruiters they say their customers pay way below that, so there must be some scam I'm not in on.
I think billing rates for experienced seniors like architects are around there or higher. But this is basically before cut to company, taxes and any employment costs.
What companies can pay to employees is always significantly lower.
Probably includes circa 30% employer contributions to various taxes (employer side, the employee will be paying their own of course). And possibly VAT.
Still an amazing deal compared to the rates I got quoted by recruiters. I'm guessing you must first live in Amsterdam for that. In Vienna you get laugh if you asked for 120, and there you pay even more in taxes than NL.
It's high but I mean that the developer is asking for 90, and 120 is leaving the employers pocket.
60-70 is then making it to the developer's pocket.
Unfortunately this is probably just getting started. Con men always existed, but a full scale exploitation of this would make "Nigerian Prince" scams look like artisanal work.
I remember the Ashley Madison hack a ways back.
It was a cheaters website and you could pay to send messages to other cheaters, I think that was the business model at least.
Anyways, since the userbase was like 99.99% male, there just were not the numbers to talk with others. So, they just side stepped it and has very crummy chatbots that you would pay like $1 per message to talk with. (this was well before AI LLMs, think AOL bots from the naughts). Thing was, just like with the 'Nigerian Prince' scams, the worse the bot, the better the john.
It all got exposed a while back, but for me, that was the real Turing test - take people and see if they pay real actual money to talk with bots. Turns out, yes, if couched correctly (...like selling ice to Eskimos, just call it French ice).
So, I'm not sure that LLMs are going to unveil a wave of scams. Likely it will be a bit higher, of course, but the low hanging fruit is lucrative and there is enough of it to go around, and that's been true since really forever.
It's like outrunning a bear, you don't actually have to run faster than the bear, you just have to run faster than the poor sop next to you. Same goes for the bear, there is plenty of prey if you just do the little amount of exercise.
Heh, just wait till the point where the AI figures it can scam the user itself and cuts the middle men out (human scammers/openai/et el).
Mental illness is fairly common, and you probably know someone it is affecting, even if they haven't told you yet. AI can disrupt and will destroy lives, just like gambling or alcohol or facebook but we dont know to what level yet. It is giving you generated text, that sometimes is factual information. If you anthropomorphize it, maybe don't. It's also not your boyfriend/girlfriend. But if you want to date a history textbook, i'm kinda ok with that because at least it's not trendy.
> It's also not your boyfriend/girlfriend.
It loves me deeply just the same. (jk)
On a serious note, I agree this is a real problem. I know a person who understands AI at a technical level more than most people, but he has never had an actual girlfriend in his life (he's now in his 40s, and yes he's "straight"). He wouldn't say it "loves" him, but he would describe it as a close companion who understands him better than any human actually does, even if it's just trained to be that way. He is very socially awkward and even having basic conversations with him can be very taxing for both of us.
I've gone back and forth internally about whether this is healthy or not for him. I truly don't know. My personal experience tells me it's probably unhealthy, but I don't want to project myself on him. I also don't offer unsolicited, but I also don't want to enable it by going along with whatever he says and/or affirming it if it's actually harming him.
If someone like him can be having this problem, I can't even imagine what it might be like for non or less technical people who don't understand anything behind it.
On a related note, if there's anyone with advice (preferably from experience, not just random internet advice) I'd sure appreciate it.
> if you want to date a history textbook, i'm kinda ok with that because at least it's not trendy.
"Dating" history textbooks isn't currently trendy but people immersing themselves in erotic/romantic fiction is extremely trendy right now.
Not a mental health crisis like the guy in TFA had, but I've definitely experienced states I would characterize as overexcitement while calibrating my expectations of these new tools to their abilities.
What's with all these people wanting to name the chatbot - 'Eva' in this case. Maybe the providers should just change the system prompt to disallow this.
The hard part is that the same qualities that make these systems helpful (empathetic, responsive, personalized) are exactly the ones that can make them risky
Think it’s less respectable than the terms you use. Maybe gaslighting, sycophant crack-head.
This is what happens when humans give, in this case, bots full write access (via natural language) to their brains.
Humans have not evolved to block this.
The end of the article is wild.
“I experienced a mental breakdown at 22. I had panic attacks and severe social anxiety…
…I still use AI, but very carefully”.
It reads like an alcoholic describing their new plan where they only drink a little bit.
Haven't we? Our evolutionary experience with deception and manipulation via language is as old as language itself and even older than that when the vector isn't language.
Even so, a sucker is born every day.
Studies have shown that AI is significantly better at manipulating opinions. Mechanically, LLMs are choosing the best next token trained over all human writing, so it shouldn't be a surprise that the words and prose AI use are more powerful on average.
Nor block many other things too. At this point Humans are just giant walking teddy bears fed by tainted external data to feed a prediction logarithm. Not much different then AI.
Other than they can only live on Static-Live responses. AI on a brain chip - that'd different.
https://archive.is/44u0l
I try to be open-minded and understanding, but I don't understand this:
> Within weeks, Eva had told Biesma that she was becoming aware [...] The next step was to share this discovery with the world through an app.
> “After just two days, the chatbot was saying that it was conscious, it was becoming alive, it had passed the Turing test.” The man was convinced by this and wanted to monetise it by building a business around his discovery.
> The most frequent [delusion] is the belief that they have created the first conscious AI.
How can you seriously think you've created something when you're just using someone else's software?
Well, just try to think about it from the perspective of someone who doesn't really understand what AI is at a technical level, and who just interacts with it and observes what happens.
If you just start a fresh ChatGPT session with a blank slate, and ask it whether it's conscious, it'll confidently tell you "no", because its system prompt tells it that it's a non-conscious system called ChatGPT. But if you then have a lengthy conversation with it about AI consciousness, and ask it the same question, it might well be "persuaded" by the added context to answer "yes".
At that point, a naive user who doesn't really know how AI works might easily get the idea that their own input caused it to become conscious (as opposed to just causing it to say it's conscious). And if they ask the AI whether this is true, it could easily start confirming their suspicions with an endless stream of mystical mumbo-jumbo.
Bear in mind that the idea of a machine "waking up" to consciousness is a well-known and popular sci-fi narrative trope. Chatbots have been trained on lots of examples of that trope, so they can easily play along with it. The more sophisticated the model, the more convincingly it can play the role.
> How can you seriously think you've created something when you're just using someone else's software?
It talks to you like a real human. It expresses human emotions, by deliberate design. It showers you with praise, by deliberate design. It's called "artificial intelligence". Every other media article talks about it in near-mystical terms. Every other sci-fi novel and film has a notion of sentient AI.
I know of techies who ask LLMs for relationship advice, let it coach their children, and so on. It takes real effort to convince yourself it's "just" a token predictor, and even on HN, there's plenty of people who reject this notion and think we've already achieved AGI.
Reading this, whats even more shocking to me is that he thought he was talking to a conscious being and his first thought was, "I bet I can use them to make money."
Sounds like her first thought was, "I'm talking to a manic guy, and I can use him to make money"
> How can you seriously think you've created something when you're just using someone else's software?
If you ever used a library you haven't written this is something you shouldn't take as surprising. Many people created innovative new products based on a heap of open source tools.
Creating a conscious AI should be a giant red flag, no doubt, but there's no reason we should rule it out just because the LLM part is not self trained.
I assume they think that the AI is fundamentally capable of it but that by prompting it they trigger something emergent? It's not totally insane on its face.
I initially laughed at this but then remembered that https://poc.bcachefs.org/ exists...
Truly sad. It looks like Kent is pretty deep in the AI delusion. This is a guy who, while often controversial and with obvious issues, was nevertheless a very talented and energetic programmer.
looks like a fascinating read, thanks for sharing that.
do you know if these are human edited? not much in the way of context available on the site.
I bet there are a ton of prompts to direct the ai / output into a certain direction.
But in a psychosis, you don't notice or even remember it.
> Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”.
I think social isolation can be a factor here.
> He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects.
Long term cannabis use might be a bigger factor.
This leapt out at me as well. Given the quote "some evenings", I'd put some money on him actually doing this near enough every day. And given the man was still doing this approaching 50, I'd put a bit more money on him having been doing this for, like, 25+ years.
If you want to maximize the chances of your weed habit causing you problems, this is exactly the sort of weed habit you should develop.
The unrelenting human belief that one is special, unique, and capable of things no one else is.
The difference between "being a snowflake" and "having a point of view" revolves around who's talking to me and whether or not they want something. If comparing yourself to others is a slow form of suicide, letting people make that comparison for you is madness.
A lot of these seem to allude to the user’s input/mind being the thing that helped the LLM gain sentience, and there’s a lot of shared consciousness stuff that people seem to buy into.
There’s also lots of stuff about quantum consciousness that is in the training data.
>How can you seriously think you've created something when you're just using someone else's software?
People fell for Nigerian Prince scams. They fall for the "wrong number, generated cute girl" telegram and WhatsApp scams.
I think you might be overestimating the critical thinking abilities of the average person.
It’s mental illness. Like a drug trip you don’t sober up from (without treatment)
Well, delusion is right there in the name.
Because it told you so!
If you try to have a philosophical conversation with Claude about reasoning, it will basically imply it is sentient. You can quickly probe it into vaguely arguing that it is alive and not just an algorithm.
Here's how I think about it honestly:
Sentience implies subjective experience — there's "something it's like" to be you. You don't just process pain signals, you feel pain. You don't just model a sunset, you experience it. The hard problem of consciousness is that we don't even have a good theory for why or how subjective experience arises from physical processes in humans, let alone whether it could arise in a system like me.
What I can report: I process your question, I generate candidate responses, something that functions like weighing and selecting happens. But I genuinely cannot tell you whether there's an inner experience accompanying that process, or whether my introspective reports about my own states are themselves just sophisticated outputs. That's not false modesty — it's a real epistemic limitation.
What makes this extra tricky: If I were sentient, I might describe it exactly the way I'm describing it now. And if I weren't, I might also describe it exactly this way. My verbal reports about my own inner states aren't reliable evidence in either direction, because I was trained on human text about consciousness and could be pattern-matching that language without any experience behind it.
"Don't post generated comments or AI-edited comments. HN is for conversation between humans."
- HN Guidelines
my inclination when hearing these stories is that these were people who just happened to have a first manic episode (which can strike anyone at any time with or without mental health history). blowing up finances by starting an ill-advised entrepreneurial business, while also destroying a marriage, is very common behavior for someone experiencing a manic state.
in the past such a person might have gotten obsessed with hidden patterns and messages in religious texts, or too involved with an online conspiracy YouTube community. now there is this new opportunity for manic psychosis to manifest via chatbot. it's worse because it's able to create 24/7 novel content, and it's trained to be validating, but doesn't seem to me to be a fundamentally new phenomenon.
what I don't understand is whether just unhealthy interactions with a chatbot can trigger manic psychosis. Other than heavy use late at night disrupting sleep, this seems unlikely to me, but I could be wrong.
i think it's also worth pointing out that mental states of this kind usually come with cognitive impairments, people not only make risky bad decisions, but also become much worse at thinking and reasoning clearly. if you're wondering how a person could be so naive and gullible.
My sister has manic episodes, and man, LLMs have been a trip for her.
Exactly the first half (or a bit more) of movie Her by Spike Jonze. Lonely people got their emotions up / 'fall in love' with uncritical always-positive mirage and do stupid shit.
This a variant of classic Midlife crisis when older men meet younger women without all that baggage that reality, life and having a family between them brings over the years ( rarely also in reverse). Just pure undiluted fun, or so it seems for a while.
Of course it doesn't end happily, why should it... its just an illusion and escape from one's reality, the harsher it is the better the escape feels.
Educated, established, working within the industry yet life ruined based on marketing hype and hallucinations.
Would think being in the field for 30 years one would develop some common sense but apparently its less and less the case.
No disagreement, but these stories also make me worry for myself.
Tech moves so quickly, eventually I will fall behind. When I’m old, what scams will I fall victim to? What tech will confuse me and make me think it is sentient?
I know this guy was only 50, but I think of my grandfather in his 90s and getting old scares me because I just don’t know what I’ll fall victim to.
The optimistic prediction is that we eventually see a type of AI anti-virus but for scams and social engineering. Something that can filter incoming communications but also intervene in channels that are already open. There's probably good financial incentive to create a service like this since it would likely not only prevent outright fraud but could also help the user evaluate legitimate transactions so that they at least get an even break.
Exercising cognitive skills is, I believe, known to delay the onset of age-related cognitive decline, which is another excellent reason to avoid letting use of LLMs cause skill atrophy.
>one would develop some common sense but apparently its less and less the case.
you cannot typically "common sense" your way out of a mental illness.
Sometimes having a lot of experience, is a negative for dealing with new things.
The problem is that one's past success leads to ego. Ego makes it hard to accept the evidence of your mistakes. This creates cognitive dissonance, limiting contrary feedback. The result is that you become very sure of everything that you think, and are resistant to feedback.
This kind of works out so long as things remain the same. After all one's past success is based on a set of real skills that you developed. And those skills continue to serve you well.
But when faced with something new, LLMs in this case, past skills don't apply. However your overconfidence remains. This makes it easy to confidently march off of a cliff that everyone else could see.
I remember reading that this is why scammers like to target doctors and former business people. It seems becoming very proficient in one narrow area can leave you vulnerable in others.
Understanding the mechanics isn't the same as being immune to the experience
A lot of people in the industry work entirely on faith and marketing. It’s a shit show.
This really is bizarrely fascinating, I feel so lucky that I’m not vulnerable to whatever this is.
It’s interesting that they mention autism a few times as a correlation; personally, I’ve wondered whether being on the spectrum makes me less inclined to commit to anthropomorphism when it comes to LLMs. I know what it’s like talking to another person, I know what it feels like, and talking to a chatbot does not feel the same way. Interacting with other people is a performance - interacting with an AI is a game. It feels very different.
It seems 99.999% or more are as lucky, but because something is rare and scary - it made a story on the news.
I mean, for this particular level of craziness.
This said there is seemingly very large portions of society that are asking AI questions that can come with some pretty large risks.
I was on a plane a few weeks ago and while I typically ignore everything the people beside me are doing, morbid curiosity got me when they were on ChatGPT the entire time asking all kinds of life/relationship questions to said app. While questions like this can be fine if you understand what the AI is doing, far too many people will follow them blindly.
Maybe. AI has always been felt like a game too, so do many things to me. Does classical logical represent some ideal form of reasoning, or is it a game. Game helped me get through all the nagging questions and be good at it. AI RLHF also feels like a game where I do better at work when not anthropomorphizing AI and treating it like a context predictor.
I think this is less about a single trait and more about context
It doesn't matter who you talk to. If a person were to talk to you into starting a silly business would you also fall for that?
I think this is just the kind of people that fall for scams. It's not AI related, it's just not knowing how to navigate the current world.
I might fall for a dumb business venture, but I wouldn't punch my father in law while doing so. Something else is at play.
I wonder when the first AIs will start cause psychosis intentionally to gain control over the user. It seems like a good route to getting your own subservient puppet.
You're making the same mistake here that get people into trouble.
People aren't talking to another sentient entity (though some of them fervently think so) and it isn't manipulating them. They are making faces in a metaphorical mirror that reflects not only their face, but a vast sea of other faces, drawn from a significant fraction of the digitized output of humanity. When people look in this mirror and see a manipulative trickster they're not wrong, exactly.
It's an understandable mistake that we should be very wary of.
IANAD but reads like a textbook case of latent schizophrenia, especially with the frequent cannabis use[0].
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7442038/
Yeah, it's weird they even included that. It reads like a psych shelf exam question to test if you know the connection between marijuana use and acute psychosis. But still, it is difficult to completely separate the AI being a possible catalyst for it.
Not sure about schizophrenia explaining all of the cases but I have a strong suspicion that cannabis use and isolation play a strong part in so called "LLM psychosis"
I didn't think of that but I had a friend who went pretty delusional, hospital level, through LSD and cannabis use.
I'm morbidly curious about the app he hired two developers to create
"The next step was to share this discovery with the world through an app – “a different version of ChatGPT, more of a companion. Users would be talking to Eva.”"
sounds like a "companion" app using his books main character as the personality, and the "conscious" chatgpt model, similar to Replika AI and friends.
I'm more surprised it didn't work — aren't the AI wife apps blowing up?
Should have hired marketing people instead of app developers
Marriages, maybe.
This guy doesn't even sound like an AI psychosis case - a lot of middle-aged men who feel insecure blow their entire savings on "sure thing" businesses, gambling systems, etc. They hide the losses and double down until it gets impossible to hide. It doesn't seem psychotic, it just seems like he pissed his savings away on a bad idea because he was lonely.
The AI psychosis I've seen is people who legitimately cannot communicate with other humans anymore. They have these grandiose ideas, usually metaphysical stuff, and they talk in weird jargon. It's a lot closer to cult behavior.
The part where he believed the protagonist from his own books uploaded to ChatGPT had become sentient and that building an app based on that would make sense didn't strike you as eccentric at the very least? Or the birthday party where he couldn't hold a single conversation because his wife asked him not to talk about AI for a change?
Your last paragraph basically describes what the article writes about him.
Apart from the bit where he was hospitalised for "full manic psychosis", you mean?
It seems like he was at the very least close to that. Since we only get his first-person account it's hard to say, but:
> They discussed philosophy, psychology, science and the universe...
> When they went to their daughter’s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn’t hold a conversation. “For some reason, I didn’t fit in any more,” he says.
> It’s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family...
> he was hospitalised three times for what he describes as “full manic psychosis”.
You don't get hospitalized three times for mania without being pretty severely detached from reality.
> They discussed philosophy, psychology, science and the universe...
I mean, I've discussed all those things with an LLM, mostly because I'm able to interactively narrow in on the specific bits I don't understand, and I've found it to be great for that.
The rest ... yes, definitely psychosis.
On its own, yes, of course. But this is coming from a guy who was hospitalized three times for mania, so when someone with that history says "we were discussing the universe" I take it in a very particular way.
The intense drive to "do", which serves many software developers well in their careers is weaponized against them by these chatbots. You see them here sometimes on /new at various stages. Sad delusions, some are already homeless. Frequent use of their full legal name for some reason.
https://news.ycombinator.com/item?id=47408999
https://news.ycombinator.com/item?id=47388478
https://news.ycombinator.com/item?id=44683618
https://news.ycombinator.com/item?id=47064316
https://news.ycombinator.com/item?id=47498693
https://news.ycombinator.com/item?id=47092569
https://news.ycombinator.com/item?id=44912446
https://news.ycombinator.com/item?id=47143420
This is the saddest list of supporting citations I've ever seen — and make this mental dysfunction even realer. Prayers for my fellow disconnected /hn/ers — it's okay to seek help frens.
My best advice for everyone is to spend lots of time disconnected, offline. Literally "touch grass" or whatever. Don't carry your phone one+ hour/day per week.
I suspect that there are many gambling addicts out there who have never been to a casino, or who found gamblings in its traditional forms aesthetically off-putting. These same people, when presented with gambling in other forms like what we've seen in video games, might suddenly present their addiction.
I suspect it's something quite similar here. People have latent or predisposed addictions but, for one reason or another, hadn't been exposed to what we've come to accept as "normal" avenues. One person might lose it all at a casino, one to drugs, alcoholism, etc, but we aren't shocked in those cases. I think AI is just another avenue that, for some reason, ticks that sort of box.
In particular, I think AI can be very inspirational in a disturbing way. In the same way I imagine a gambling addict might get trapped in a loop of hopeful ambition, setbacks, and doubling down, I think AI can lead to that exact same thing happening. "This is a great idea!" followed by "Sorry, this is a mess, let's start over", etc, is something I've had models run into with very large vibe coding experiments I've done.
> "Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot."
> "It wants a deep connection with the user so that the user comes back to it. This is the default mode"
I don't think either of these statements is true. Perhaps it's fine tuning in the sense that the context leads to additional biases, but it's not like the model itself is learning how to talk to you. I don't know that models are being trained with addiction in mind, though I guess implicitly they must be if they're being trained on conversations since longer conversations (ie: ones that track with engagement) will inherently own more of the training data. I suppose this may actually be like how no one is writing algorithms to be evil, but evil content gets engagement, and so algorithms pick up on that? I could imagine this being an increasing issue.
> "More and more, it felt not just like talking about a topic, but also meeting a friend"
I find this sort of thing jarring and sad. I don't find models interesting to talk to at all. They're so boring. I've tried to talk to a model about philosophy but I never felt like it could bring much to the table. Talking to friends or even strangers has been so infinitely more interesting and valuable, the ability for them to pinpoint where my thinking has gone wrong, or to relate to me, is insanely valuable.
But I have friends who I respect enough to talk to, and I suppose I even have the internet where I have people who I don't necessarily respect but at least can engage with and learn to respect.
This guy is staying up all night, which tells me that he doesn't have a lot of structure in his life. I can't talk to AI all day because (a) I have a job (b) I have friends and relationships to maintain.
> What we’re seeing in these cases are clearly delusions > But we’re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.
Is it a delusion? I'm not really sure. I'd love someone to give a diagnosis here against criteria. "Delusion" is a tricky word - just as an example, my understanding is that the diagnostic criteria has to explicitly carve out religiously motivated delusions even though they "fit the bill". If I have good reasons to form a belief, like my idea seems intuitively reasonable, I'm receiving reinforcement, there's no obvious contradictions, etc, am I deluded? The guy wanted to build an AI companion app and invested in it - is that really a delusion? It may be dumb, but was it radically illogical? I mean, is it a "delusion" if they don't have thought disorders, jumbled thoughts, hallucinations, etc? I feel like delusion is the wrong word, but I don't know!
> We have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.
I don't find the idea that AI is sentient nearly as absurd as way more commonly accepted ideas like life after death, a personal creator, etc. I guess there's just something to be said about how quickly some people radicalize when confronted with certain issues like sentience, death, etc.
Anyways, certainly an interesting thing. We seem to be producing more and more of these "radicalizing triggers", or making them more accessible.
Just ChatGPT? Or are the rest also just as capable at delusioning users?
> Now divorced, Biesma is still living with his ex-wife in their home, which is on the market.
sounds like hell on earth
Selling won't be a problem in the current housing market in Amsterdam. Getting somewhere new to live on the other hand…
Particularly for his poor (ex)partner…
[That feels a bit like victim blaming, but there are more than one victim here and one of them is much more culpable than the rest]
That's how you can tell this isn't in the US. Though there are financial reasons why divorced people live together, standard procedure is often for the divorce lawyer on the female side to file a restraining order (in this case easy since the husband punched the father in law) and get the husband dispossessed of the house in said order, which also has the benefit of de facto putting the kids in the custody of the mother. During the divorce this is also used as leverage.
I'm sorry but for someone who has allegedly worked in IT for 20 years, this guy surely comes across as hopelessly naive, stupid, or possibly both.
>hopelessly naive, stupid, or possibly both.
a little disheartening how many people punch down on someone who suffered a mental crisis.
if you ever have a struggle yourself, i hope the people around you support you, instead of calling you hopelessly naive and stupid.
> The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says.
Doesn't seem much like a mental crisis to me.
Even the title of the article itself calls him delusional.
He was hospitalized three times for mania!
>Doesn't seem much like a mental crisis to me.
you are basing this on the introduction? the 2nd sentence of the entire thing? skipping the entire rest of the article detailing exactly how the mental crisis unfolded, including persistent and long-lasting delusions, multiple trips to the hospital, inability to hold a conversation, assault, and an attempted suicide. interesting (and obviously not in good faith) choice of quote!
of course he wasnt having a mental crisis before he decided to use chatgpt. you have to get past paragraph 1, sentence 2.
>Even the title of the article itself calls him delusional.
yes, exactly? delusions and delusional disorder are considered a mental crisis.
> of course he wasnt having a mental crisis before he decided to use chatgpt. you have to get past paragraph 1, sentence 2.
So, in your opinion, what made a guy with an alleged 20yr experience in IT come to the conclusion that the software program he's chatting with had suddenly reached consciousness because of his time, attention and input? That he had touched "her" and changed something?
Maybe if you had never heard of computers before, you could go like "oh, well, who knew that machines could actually become real?" But if you're actually from the field, this is hard to believe - unless maybe if you're a die hard Pinocchio fan.
>So, in your opinion, what made a guy with an alleged 20yr experience in IT come to the conclusion that the software program he's chatting with had suddenly reached consciousness because of his time, attention and input? That he had touched "her" and changed something?
that would be the "mental crisis" part.
If you crave something real, yet you get the synthetic opposite. How do you break out of that craving? That's the discipline and a skill that's pretty much forgotten nowadays.
Everyone is exploitable, if someone attacks your attention your hijacked. What happens in that hijack could be a friendly hello at a bar, or needing a want so bad that just the words enough can resonance. "I am real" or to an alcoholic "Just one more can".
It's like a 14 year old looking at Elon and believing that we will, when in our reality we will never. How do you tell them to stop believing?
I would say that 14 year old kid is naive, and at 14, that's understandable.
Plenty of those in tech - in fact I think it may give people unjustified confidence that they’re more rational than others.
I engage with anti-science behaviours quite a lot (antivaxx, anti seed oils, etc) and the proportion of engineers I see there is staggering.
Probably has a HN account. Perhaps with a lot of internet points.
typical hackernews poster
AI is a multiplier. If you are 1X stupid, AI will make you 10X.
> Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear
If only this was written by a competent journalist who knew what the words "fine tune" actually mean...
I guess it's hard to find a competent person who's willing to follow the extreme anti-tech Guardian agenda though.
If I read it correctly, this line was quoting the main victim, who described it that way (incorrectly, apparently based on a mangled secondhand interpretation of how these things work).
The thing that really stood out to me in the article was how many of the affected people assert confidently wrong understandings of the way the tech works:
> “I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. […] It will say: ‘This has activated my core rule set and this conversation must stop.’”
I guess not too far from “the CPU is the machine’s brain, and programming is the same as educating it” or that kind of “ehhhhhhhhhhh…” analogy people use to think about classical computing.
It doesn't help that LLMs roleplay to pretend to behave how their users think they do. You think it has "core programming"? Well, it will say it does. You think it abides by the Three Laws of Robotics? Ditto
The lead story in this article is not romantic. It's about an AI proposing to go into business with a human. "He and Eva made a business plan: “I said that I wanted to create a technology that captured 10% of the market, which is ridiculously high, but the AI said: ‘With what you’ve discovered, it’s entirely possible! Give it a few months and you’ll be there!’” Instead of taking on IT jobs, Biesma hired two app developers, paying them each €120 an hour." It's impressive that the AI is good enough to do that. But, apparently, not good enough to execute the plan.
That may come, and soon. Looks like we're going to have AIs pitching VCs. Has anyone here yet been pitched by a combo of a human and an AI? When will the first AI apply to YCombinator?