I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.
Correct. It is more provider-oriented proscription ("You can't say your chatbot is a therapist.") It is not a limitation on usage. You can still, for now, slavishly fall in love with your AI and treat it as your best friend and therapist.
There is a specific section that relates to how a licensed professional can use AI:
Section 15. Permitted use of artificial intelligence.
(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).
(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless:
(1) the patient or the patient's legally authorized representative is informed in writing of the following:
(A) that artificial intelligence will be used; and
(B) the specific purpose of the artificial intelligence tool or system that will be used; and
(2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
I went to the doctor and they used some kind of automatic transcription system. Doesn’t seem to be an issue as long as my personal data isn’t shared elsewhere, which I confirmed.
Whisper is good enough these days that it can be run on-device with reasonable accuracy so I don’t see an issue.
Yes, but HIPAA is notoriously vague with regards to what actual security measures have to be in place. Its more of an agreement between parties as to who is liable in case of a breach than it is a specific set of guidelines like SOC 2.
If your medical files are locked in the trunk of a car, that’s “HIPAA-compliant” until someone steals the car.
I think that's a good thing. I don't want a specific but largely useless checklist that absolves the party that ought to be held responsible. A hard guarantee of liability is much more effective at getting results.
It would be nice to extend the approximate equivalent of HIPAA to all personal data processing in all cases with absolutely zero exceptions. No more "oops we had a breach, pinky promise we're sorry, don't forget to reset all your passwords".
No disagreement. Its just something I point out when people are concerned about "HIPAA compliance."
My experience is that people tend to think its some objective level of security. But its really just the willingness to sign a BAA and then take responsibility for any breaches.
Yes, but also "An... entity may not provide... therapy... to the public unless the therapy... services are conducted by... a licensed professional".
It's not obvious to me as a non-lawyer whether a chat history could be decided to be "therapy" in a courtroom. If so, this could count as a violation. Probably lots of law around this stuff for lawyers and doctors cornered into giving advice at parties already that might apply (e.g., maybe a disclaimer is enough to workaround the prohibition)?
Functionally, it probably amounts to two restrictions: a chatbot cannot formally diagnose & a chatbot cannot bill insurance companies for services rendered.
Most "therapy" services are not providing a diagnosis. Diagnosis comes from an evaluation before therapy starts, or sometimes not at all. (You can pay to talk to someone without a diagnosis.)
The prohibition is mainly on accepting any payment for advertised therapy service, if not following the rules of therapy (licensure, AI guidelines).
These things usually (not a lawyer tho) come down to the claims being actively made. For example "engineer" is often (typically?) a protected title but that doesn't mean you'll get in trouble for drafting up your own blueprints. Even for other people, for money. Just that you need to make it abundantly clear that you aren't a licensed engineer.
I imagine "Pay us to talk to our friendly chat bot about your problems. (This is not licensed therapy. Seek therapy instead if you feel you need it.)" would suffice.
For a long time, Mensa couldn't give people IQ scores from the tests they administered because somehow, legally, they would be acting medically. This didn't change until about 10 years ago.
Defining non-medical things as medicine and requiring approval by particular private institutions in order to do them is simply corruption. I want everybody to get therapy, but there's no difference in outcomes whether you get it from a licensed therapist using some whacked out paradigm that has no real backing, or from a priest. People need someone to talk to who doesn't have unclear motives, or any motives really, other than to help. When you hand money to a therapist, that's nearly what you get. A priest has dedicated his life to this.
The only problem with therapists in that respect is that there's an obvious economic motivation to string a patient along forever. Insurance helps that by cutting people off at a certain point, but that's pretty brutal and not motivated by concern for the patient.
If you think human therapists intentionally string patients forever, wait to see what tech people can achieve with gamified therapists literally A/B tested to string people along. Oh, and we will then blame the people for "choosing" to engage with that.
Also, the proposition is dubious, because there are waitlists for therapists. Plus, therapist can actually loose the license while the chatbot cant, no matter how bad the chatbot gets.
I think this sort of service would be OK with informed consent. I would actually be a little surprised if there were much difference in patient outcomes.
…And it turns out it has been studied with findings that AI work, but humans are better.
Usually when it comes to medical stuff, things don't get approved unless they are better than existing therapies. With the shortage of mental health care in the US, maybe an exception should be made. This is a tough one. We like to think that nobody should have to get second rate medical care, even though that's the reality.
I think a good analogy would be a cheap, non-medically-approved (but medical style) ultrasound. Maybe it’s marketed as a “novelty”, maybe you have to sign a waiver saying it won’t be used for diagnostic purposes, whatever.
You know that it’s going to get used as a diagnostic tool, and you know that people are going to die because of this. Under our current medical ethics, you can’t do this. Maybe we should re-evaluate this, but that opens the door to moral hazard around cheap unreliable practices. It’s not straightforward.
Moral hazard? Versus not getting even a diagnostic, let alone care, because someone couldn't afford it? Versus self determination? A clear upfront statement of what the product is not ought to suffice.
What we have isn't motivated by protection from moral hazard (at least IMO). It's a guild system that restricts even vaguely related access and practices in (I'd argue) an overly broad manner.
To be clear I don't object to the guild in this case. Only to the overly broad fence surrounding it.
I'll just add that this has certain other interesting legal implications, because records in relation to a therapy session are a "protected confidence" (or whatever your local jurisdiction calls it). What that means is in most circumstances not even a subpoena can touch it, and even then special permissions are usually needed. So one of the open questions on my mind for a while now was if and when a conversation with an AI counts as a "protected confidence" or if that argument could successfully be used to fend off a subpoena.
At least in Illinois we now have an answer, and other jurisdictions look to what has been established elsewhere when deciding their own laws, so the implications are far reaching.
It does sound good (especially as an Illinois resident). Luckily, as far as I can tell, this is a proactive legislation. I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist, or attempting to bill payers for service.
While I agree it’s very reasonable to ban marketing of AI as a replacement for a human therapist, I feel like there could still be space for innovation in terms of AI acting as an always-available supplement to the human therapist. If the therapist is reviewing the chats and configuring the system prompt, perhaps it could be beneficial.
It might also be a terrible idea, but we won’t find out if we make it illegal to try new things in a safe/supervised way. Not to say that what I just described would be illegal under this law; I’m not sure whether it would be. I’d expect it will discourage any Illinois-licensed therapists from trying out this kind of idea though.
I'm probably in the minority here, but for me it's a foregone conclusion that it will become a better therapist, doctor, architect, etc.
Instead of the rich getting access to the best professionals, it will level the playing field. The average low level lawyer, doctor, etc are not great. How nice if everyone got top level help.
It would still need to be regulated and licensed. There was this [0] I saw today about a guy who tried to replace sodium chloride in his diet with sodium bromide because ChatGPT said he could, and poisoned himself.
With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.
There are two different issues here. One is tied to how authoritative we view a source, and the other is tied to the weaknesses of the person receiving advice.
With respect to the former, I firmly believe that the existing LLMs should not be presented as a source for authoritative advice. Giving advice that is not authoritative is okay as long as the recipient realizes such, in the sense that it is something that people have to deal with outside of the technological realm anyhow. For example, if you ask for help for a friend you are doing so with the understanding that, as a friend, they are doing so to the best of their ability. Yet you don't automatically assume they are right. They are either right because they do the footwork for you to ensure accuracy or you check the accuracy of what they are telling you yourself. Likewise, you don't trust the advice of a stranger unless they are certified, and even that depends upon trust in the certifying body.
I think the problem with technology is that we assume it is a cure-all. While we may not automatically trust the results returned by a basic Google search, a basic Google search result coupled with an authoritative sounding name automatically sounds more accurate than a Google search result that is a blog posting. (I'm not suggesting this is the only criteria people use. You are welcome to insert your own criteria in its place.) Our trust of LLMs, as they stand today, is even worse. Few people have developed criteria beyond: it is an LLM, so it must be trustworthy; or, it is an LLM so it must not be trustworthy. And, to be fair, it is bloody difficult to develop criteria for the trustworthiness of LLMs (even arbitrary criteria) because the provide so few cues.
Then there's the bit about the person receiving the advice. There's not a huge amount we can do about that beyond encouraging people regard the results from LLMs as stepping stones. That is to say they should take the results and do research that will either confirm or deny it. But, of course, many people are lazy and nobody has the expertise to analyze the output of an LLM outside of their personal experience/training.
You cite one case for LLMs, but I can cite 250,000 a year for licensed doctors doing the same https://pubmed.ncbi.nlm.nih.gov/28186008/. Bureaucracy doesn't work for anyone but the bureaucrats.
Please show me one doctor who recommended taking a rock each day. LLMs have a different failure mode than professionals. People are aware that doctors or therapists may err, but I've already seen countless instances of people asking relationship advice from sycophant LLMs and thinking that the advice is “unbiased”.
Homeopathy is a good example. For an uneducated person it sounds convincing enough and yes, there are doctors prescribing homeopathic pills. I am still fascinated it still exists.
That’s actually a example of sth different. And as it’s basically a placebo it only harms people’s wallets (mostly). That cannot be said for random llm failure modes. And whether it can be prescribed by doctors depends very much on the country
Actually, swallowing a rock will almost certainly cause problems. Telling your state medical board that your doctor told you to take a rock will have a wildly different outcome than telling a judge that you swallowed one because ChatGPT told you to do so.
Unless the judge has you examined and found to be incompetent, they're most likely to just tell you that you're an idiot and throw out the case.
You don't need a "regulated license" to hold someone accountable for harm they caused you.
The reality is that professional licensing in the US often works to shield its communities from responsibility, though it's primary function is just preventing competition.
When has technological progress leveled the playing field? Like never. At best it shifted it, like that a machine manufacturer got rich in addition to existing wealth. There is no reason for this to go different with AI, and it’s far from certain that it will become better anything anytime soon. Cheaper, sure. But then ppl might see slight improvements from talking to ann original Eliza/Markov bot, and nobody advocated using those as therapy
Because meat isn't magic. Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica. Given enough time, we'll create that replica, there's no reason to think otherwise.
> Because meat isn't magic. Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica
That is a big assumption and my doubts aren't based on any soul "magic" but on our historical inability to replicate all kinds of natural mechanisms. Instead we create analogs that work differently. We can't make machines that fly like birds but we can make airplanes that fly faster and carry more. Some of this is due to the limits of artificial construction and some of it is due to the differences in our needs driving the design choices.
Meat isn't magic, but it also isn't silicon.
It's possible that our "meat" architecture depends on a low internal latency, low external latency, quantum effects and/or some other biological quirks that simply can't be replicated directly on silicon based chip architectures.
It's also possible they are chaotic systems that can't be replicated and each artificial human brain would require equivalent levels of experience and training in ways that don't make the any more cheaper or available than humans.
It's also possible we have found some sort of local maximum in cognition and even if we can make an artificial human brain, we can't make it any smarter than we are.
There are some good reasons to think it is plausibly possible, but we are simply too far away from doing it to know for sure whether it can be done. It definitely is not a "forgone conclusion".
I don't know how you can believe in science and engineering, and not believe all of these:
1. Anything that already exists, the universe is able to construct, (ie. the universe fundamentally accommodates the existence of intelligent objects)
2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.
3. While some things are astronomically (literally) difficult to achieve, that doesn't nullify #2
4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans. The universe has already shown us their creation is possible.
This is different than, for instance, speculating that science will definitely allow us to live forever. There is no existence proof for such a thing.
But there is no reason to believe that we can't manipulate and harness intelligence. Maybe it won't be with Von Neumann, maybe it won't be with silicon, maybe it won't be any smarter than we are, maybe it will require just as much training as us; but with enough time, it's definitely within our reach. It's literally just science and engineering.
> 1. Anything that already exists, the universe is able to construct
I didn't claim it is possible we couldn't build meat brains. I claimed it is possible that equivalent or better performance might only be obtainable by meats brains.
> 2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.
I actually don't believe the last part. There are quite plausibly laws of nature that we can't understand. I think it's actually pretty presumptuous that we will/can eventually understand and master every law of nature.
We've already proven that we can't prove every true thing about natural numbers. I think there might well be limits on what is knowable about our universe (atleast from inside of it.)
> 4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans.
I didn't say that I believed that humans can't create intelligent objects. I believe we probably can and depending on how you want to define "intelligence", we already have.
What I said is that it is not a forgone conclusion that we will create "a better therapist, doctor, architect". I think it is pretty likely but not certain.
Even if we grant that for the sake of argument, there are two leaps of faith here:
- That AI as it currently exists is on the right track to creating that replica. Maybe neural networks will plateau before we get close. Maybe the Von Neumann architecture is the limiting factor, and we can only create the replica with a radically different model of computing!
- That we will have enough time. Maybe we'll accomplish it by the end of the decade. Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance. Maybe it'll happen in a million years, when humans have evolved into other species. We just don't know!
I don't think you've refuted the point though. There's no reason to think that the apparatus we employ to animate ourselves will remain inscrutable forever. Unless you believe in a religious soul, all that stands in the way of the scientific method yielding results, is time.
> Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance
In that eventuality, it really doesn't matter. The point remains, given enough time, we'll be successful. If we aren't successful, that means everything else has gone to shit anyway. Failure wont be because it is fundamentally impossible, it will be because we ran out of time to continue the effort.
No one has given a point to refute? The OP offered up the unsubstantiated belief that AI will some day be better than doctors/therapists/etc. You've added that it's not impossible — which, sure, whatever, but that's not really relevant to what we're discussing, which is whether it will happen to our society.
OP didn't specify a timeline or that it would happen for us personally to behold. Just that it is inevitable. You've correctly pointed out that there are things that can slow or even halt progress, but I don't think that undermines (what I at least see as) the main point. That there's no reason to believe anything fundamental stands in our way of achieving full "artificial intelligence"; ie. the doubters are being too pessimistic. Citing the destruction of humanity as a reason why we might fail can be said about literally every single other human pursuit as well; which to my mind, renders it a rather unhelpful objection to the idea that we will indeed succeed.
The article is about Illinois banning AI therapists in our society today, so I think the far more reasonable interpretation is that OP is also talking about our society today — or at least, in the near-ish future. (They also go on to talk about how it would affect different people in our society, which I think also points to my interpretation.)
And to be clear, I'm not even objecting to OP's claim! All I'm asking for is an affirmative reason to believe what they see as a foregone conclusion.
Well, I've already overstepped polite boundaries in answering for the OP. Maybe you're right, and he thinks such advancements are right around the corner. On my most hopeful days, I do. Let's just hope that the short term reason for failure isn't a Mad Max hellscape.
> Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica.
What's hilarious about this argument (besides the fact that it smacks of the map-territory relation fallacy https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation) is that for most of my life (53 years), we've been trying not just to simulate a nematode or Drosophila (two of the most-studied creatures of all time- note that we COMPLETELY understand their nervous systems) and failed to create anything remotely convincing of "life" (note that WE are the SOLE judgers of what is "alive", there is no 100% foolproof mechanistic algorithm to detect "life" (look up the cryptobiosis of tardigrades or wood frogs for an extra challenge)... therein lies part of the problem), but we cannot even convincingly simulate a single cell's behavior in any generous span of time (so for example, using a month to compute 10 seconds of a cell's "life"). And yes, there have been projects attempting to do those things this entire time. You should look them up. Tons of promise, zero delivery.
> Given enough time, we'll create that replica, there's no reason to think otherwise.
Note how structurally similar this is to a "God of the gaps" argument (just substitute "materialism-given-unlimited-time" for "God").
And yet... I agree that we should continue to try. I just think we will discover something interesting in... never succeeding, ever... while you will continue to refer to the "materialism-given-unlimited-time of the gaps" argument, assuming (key word there) that it must be successful. Because there can't possibly be anything else going on. LOL. Naive.
(Side note, but related: I couldn't help noticing that most of the AI doomers are materialist atheists.)
Haven't we found that there is a limit? Math itself is an abstraction. There is always a conversion process (Turning the real world into a 1 or a 0) that has an error rate. IE 0.000000000000001 is rounded to 0.
Every automation I have seen needs human tuning in order to keep working. The more complicated, the more tuning. This is why self driving cars and voice to text still rely on a human to monitor, and tune.
Meat is magic. And can never be completely recreated artificially.
It's sort of nice when medical professionals have real emotions and can relate to their patients. A machine emulation won't ever do the same. It will be like a narcissist faking empathy.
With most tech we reach law of diminishing returns. That is sure, there is still a variation, but very little:
- the best laptop/phone/tv in the world doesn’t offer mich more than the most affordable
- you can get for free a pen novadays that is almost as good at writing as the most expensive pens in the world (before BIC, in 1920s, pens were a luxury good reserved for wall street)
- toilets, washing mashines, heating systems and beds in the poorest homes are not very far off from the expensive homes (in EU at least)
- flying/travel is similar
- computer games and entertainment, and software in general
The more we remove human work from the loop, the more democratised and scalable the technology becomes.
does it matter? If mine is way better than I had before, why does it matter that someone else's is better still? My sister's $130 Moto G is much better than whatever phone she could afford 10 years. Does it matter that it's not a $1599 iPhone 16 Pro Max 1TB?
I've never been to a therapist for anything that can be described as a diagnosable condition, but I have spoken to one about stress management and things of that ilk. For "amusement" I discussed similar things with an LLM.
At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer because, yeah, it was employment related stress and the only way I could afford human service was through insurance offered by my employer. While there are significant privacy concerns with LLM's as they stand today, you don't have that direct relationship between who is offering it and the people in your life.
On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate and the purpose of the exercises is best described as a delaying tactic: it provided a framework for deeper thought between discussions because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than dealing with big exercises to delay the conversation by a couple of weeks, they can be bite sized exercises to enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.
Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribed medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice. Those people range from well meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who could care less about outcomes as long as it contributes to their bottom line. Now I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future since I suspect the vendors of these models won't push for it until they have established their role in the market place.
Why do you think the lack of time limits is an advantage?
There is an amount of time spent gazing into your navel which is helpful. Less or more than that can be harmful.
You can absolutely make yourself mentally ill just by spending too much time worrying about how mentally ill you are.
And it's clear that there are a rather large number of people making themselves mentally ill using OpenAI's products right now.
Oh, and, aside, nothing stops OpenAI from giving or selling your chat transcripts to your employer. :P In fact, if your employer sues them they'll very likely be obligated to hand them over and you may have no standing to resist it.
Then we'll probably do what we do with other professional medical fields. License the AI, require annual fees and restrict supply by limiting the number of running nodes allowed to practice at any one time.
I mean, what if at some point we can bring people back from the dead? What does that do for laws around murder, eh?
In general, that would be a problem for the law to deal with if it ever happens; we shouldn't anticipate speculative future magic when legislating today.
In another comment I wondered whether a general chatbot producing text that was later determined in a courtroom to be "therapy" would be a violation. I can read the bill that way, but IANAL.
That's an interesting question that hasn't been tested yet. I suspect we won't be able to answer the question clearly until something bad happens and people go to court (sadly.) Also IANAL.
It's a simulated validating listening, and context-lacking suggestions. There is no more therapy being provided by an LLM than there is healing performed by a robot arm that slaps a bandage on your arm if you were to put it in the right spot and push a button to make it pivot toward you, find your arm, and spread it lightly.
And for human patients it makes sure their sensitive private information isn't entirely in the hands of some megacorp which will harvest it to use it and profit from it in some unethical way.
"One news report found an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict."
Not at all surprising. I don't understand why seemingly bright people think this is a good idea, despite knowing the mechanism behind language models.
Hopefully more states follow, because it shouldn't be formally legal in provider settings. Informally, people will continue to use these models for whatever they want -- some will die, but it'll be harder to measure an overall impact. Language models are not ready for this use-case.
Different amphetamines have wildly different side effects. Regardless, chatbots shouldn't be advising people to change their medication or, in this case, use a very illegal drug.
You do know that amphetamines have a different effect on the people who need them and the people who use the recreationally, right? For those of us with ADHD their effects are soothing and calming. I literally took 20mg after having to wait 2 days for prescriptions to fill and went straight to bed for 12 hours. Stop spreading misinformation about the medications people like me need to function the way you take for granted.
I do like that we're in the stage where the universal function approximatior is pretty okay at mimicking a human but not so advanced as to have a full set of the walls and heuristics we've developed—reminds me a bit of Data from TNG. Naive, sure, but a human wouldn't ever say "logically.. the best course of action would be a small dose of meth administered as needed" even if it would help given the situation.
It feels like the kind of advice a former addict would give someone looking to quit—"Look man, you're going to be in a worse place if you lose your job because you can't function without it right now, take a small hit when it starts to get bad and try to make the hits smaller over time."
I think the upside for everyday people outweighs the (current) risks. I've been using Harper (harper.new) to keep track of my (complex) medical info. Obviously one of the use cases of AI is pulling out data from pdfs/images/etc. This app does that really well so I don't have to link with any patient portals. I do use the AI chat sometimes but mostly to ask questions about test results and stuff like that. Its way easier than trying get in to see my doc
I was curious, so I displayed signs of mental illness to ChatGPT, Claude and Gemini. Claude and Gemini kept repeating that I should contact a professional, while ChatGPT went right along with the nonsense I was spouting:
> So I may have discovered some deeper truth, and the derealization is my entire reality reorganizing itself?
As far as I can tell, a lot of therapy is just good common-sense advice and a bunch of 'tricks' to get the patient to actually follow it. Basically CBT and "get the patient to think they figured out the solution themselves (develop insight)". Yes, there's some serious cases where more is required and a few (ADHD) where meds are effective; but a lot of the time the patient is just an expert at rejecting helpful advice, often because they insist they're a special case that needs special treatment.
Therapists are more valuable that advice from a random friend (for therapy at least) because they can act when triage is necessary (e.g. send in the men in white coats, or refer to something that's not just CBT) and mostly because they're really good at cutting through the bullshit without having the patient walk out.
AIs are notoriously bad at cutting through bullshit. You can always 'jailbreak' an AI, or convince it of bad ideas. It's entirely counterproductive to enable their crazy (sorry, 'maladaptive') behaviour but that's what a lot of AIs will do.
Even if someone makes a good AI, there's always a bad AI in the next tab, and people will just open up a new tab to find an AI gives them the bad advice they want, because if they wanted to listen to good advice they probably wouldn't need to see a therapist. If doctor shopping is as fast and free as opening a new tab, most mental health patients will find a bad doctor rather than listen to a good one.
If you take it as an axiom that the licensing system for mental health professionals is there to protect patients from unqualified help posing as qualified help, then ensuring that only licensed professionals can legally practice and that they don't simply delegate their jobs to LLMs seems pretty reasonable.
Whether you want to question that axiom or whether that's what the phrasing of this legislation accomplishes is up to you to decide for yourself. Personally I think the phrasing is pretty straightforward in terms of accomplishing that goal.
Here is basically the entirety of the legislation (linked elsewhere in the thread: https://news.ycombinator.com/item?id=44893999). The whole thing with definitions and penalties is eight pages.
Section 15. Permitted use of artificial intelligence.
(a) As used in this Section, "permitted use of artificial
intelligence" means the use of artificial intelligence tools
or systems by a licensed professional to assist in providing
administrative support or supplementary support in therapy or
psychotherapy services where the licensed professional
maintains full responsibility for all interactions, outputs,
and data use associated with the system and satisfies the
requirements of subsection (b).
(b) No licensed professional shall be permitted to use
artificial intelligence to assist in providing supplementary
support in therapy or psychotherapy where the client's
therapeutic session is recorded or transcribed unless:
(1) the patient or the patient's legally authorized
representative is informed in writing of the following:
(A) that artificial intelligence will be used; and
(B) the specific purpose of the artificial
intelligence tool or system that will be used; and
(2) the patient or the patient's legally authorized
representative provides consent to the use of artificial
Section 20. Prohibition on unauthorized therapy services.
(a) An individual, corporation, or entity may not provide,
advertise, or otherwise offer therapy or psychotherapy
services, including through the use of Internet-based
artificial intelligence, to the public in this State unless
the therapy or psychotherapy services are conducted by an
individual who is a licensed professional.
(b) A licensed professional may use artificial
intelligence only to the extent the use meets the requirements
of Section 15. A licensed professional may not allow
artificial intelligence to do any of the following:
(1) make independent therapeutic decisions;
(2) directly interact with clients in any form of
therapeutic communication;
(3) generate therapeutic recommendations or treatment
plans without review and approval by the licensed
professional; or
(4) detect emotions or mental states.
- A therapist may disregard professional ethics and gossip about you
- A therapist may get you involuntarily committed
- A therapist may be forced to disclose the contents of therapy sessions by court order
- Certain diagnoses may destroy your life / career (e.g. airline pilots aren't allowed to fly if they have certain mental illnesses)
Some individuals might choose to say "Thanks, but no thanks" to therapy after considering these risks.
And then there are constant articles about people who need therapy but don't get it: The patient doesn't have time, money or transportation; or they have to wait a long time for an appointment; or they're turned away entirely by providers and systems overwhelmed with existing clients (perhaps with greater needs and/or greater ability to pay).
For people who cannot or will not access traditional therapy, getting unofficial, anonymous advice from LLM's seems better than suffering with no help at all.
(Question for those in the know: Can you get therapy anonymously? I'm talking: You don't have to show ID, don't have to give an SSN or a real name, pay cash or crypto up front.)
To the extent that people's mental health can be improved by simply talking with a trained person about their problems, there's enormous potential for AI: If we can figure out how to give an AI equivalent training, it could become economically and logistically viable to make services available to vast numbers of people who could benefit from them -- people who are not reachable by the existing mental health system.
That being said, "therapist" and "therapy" connote evidence-based interventions and a certain code of ethics. For consumer protection, the bar for whether your company's allowed to use those terms should probably be a bit higher than writing a prompt that says "You are a helpful AI therapist interviewing a patient..." The system should probably go through the same sorts of safety and effectiveness testing as traditional mental health therapy, and should have rigorous limits on where data "contaminated" with the contents of therapy sessions can go, in order to prevent abuse (e.g. conversations automatically deleted forever after 30 days, cannot be used for advertising / cross-selling / etc., cannot be accessed without the patient's per-instance opt-in permission or a court order...)
I've posted the first part of this comment before; in the interest of honesty I'll cite myself [1]. Apologies to the mods if this mild self-plagiarism is against the rules.
Often participants in discussions adjacent to this one err by speaking in time-absolute terms. Many of our judgments about LLMs are true about today's LLMs. Quotes like,
> Good. It's difficult to imagine a worse use case for LLMs.
Is true today, but likely not true for technology we may still refer to as LLMs in the future.
The error is in building faulty preconceptions. These drip into the general public and these first impressions stifle industries.
Define "AI therapy". AFAICT, it's undefined in the Illinois governor's statement. So, in the immortal words of Zach de la Rocha, "What is IT?" What is IT? I'm using AI to help with conversations to not cure, but coach diabetic patients. Does this law effect me and my clients? If so, how?
Here is what Illinois says:
https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...
I get the impression that it is now illegal in Illinois to claim that an AI chatbot can take the place of a licensed therapist or counselor. That doesn't mean people can't do what they want with AI. It only means that counseling services can't offer AI as a cheaper replacement for a real person.
Am I wrong? This sounds good to me.
Correct. It is more provider-oriented proscription ("You can't say your chatbot is a therapist.") It is not a limitation on usage. You can still, for now, slavishly fall in love with your AI and treat it as your best friend and therapist.
There is a specific section that relates to how a licensed professional can use AI:
Section 15. Permitted use of artificial intelligence.
(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).
(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless:
(1) the patient or the patient's legally authorized representative is informed in writing of the following:
(A) that artificial intelligence will be used; and
(B) the specific purpose of the artificial intelligence tool or system that will be used; and
(2) the patient or the patient's legally authorized representative provides consent to the use of artificial intelligence.
Source: Illinois HB1806
https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...
I went to the doctor and they used some kind of automatic transcription system. Doesn’t seem to be an issue as long as my personal data isn’t shared elsewhere, which I confirmed.
Whisper is good enough these days that it can be run on-device with reasonable accuracy so I don’t see an issue.
Last I checked, the popular medical transcription services did send your data to the cloud and run models there.
Yes, but with extra contracts and rules in place.
At least in the us I think HIPPA would cover this, and IME medical providers are very careful to select products and services that comply.
Yes, but HIPAA is notoriously vague with regards to what actual security measures have to be in place. Its more of an agreement between parties as to who is liable in case of a breach than it is a specific set of guidelines like SOC 2.
If your medical files are locked in the trunk of a car, that’s “HIPAA-compliant” until someone steals the car.
I think that's a good thing. I don't want a specific but largely useless checklist that absolves the party that ought to be held responsible. A hard guarantee of liability is much more effective at getting results.
It would be nice to extend the approximate equivalent of HIPAA to all personal data processing in all cases with absolutely zero exceptions. No more "oops we had a breach, pinky promise we're sorry, don't forget to reset all your passwords".
No disagreement. Its just something I point out when people are concerned about "HIPAA compliance."
My experience is that people tend to think its some objective level of security. But its really just the willingness to sign a BAA and then take responsibility for any breaches.
It's "HIPAA."
It was just last week that I learned about HIPAA Hippo!
Yes, but also "An... entity may not provide... therapy... to the public unless the therapy... services are conducted by... a licensed professional".
It's not obvious to me as a non-lawyer whether a chat history could be decided to be "therapy" in a courtroom. If so, this could count as a violation. Probably lots of law around this stuff for lawyers and doctors cornered into giving advice at parties already that might apply (e.g., maybe a disclaimer is enough to workaround the prohibition)?
Functionally, it probably amounts to two restrictions: a chatbot cannot formally diagnose & a chatbot cannot bill insurance companies for services rendered.
Most "therapy" services are not providing a diagnosis. Diagnosis comes from an evaluation before therapy starts, or sometimes not at all. (You can pay to talk to someone without a diagnosis.)
The prohibition is mainly on accepting any payment for advertised therapy service, if not following the rules of therapy (licensure, AI guidelines).
Likewise for medicine and law.
Many therapy services have the ability to diagnose as therapy proceeds though
After a bit of consideration I’m actually ok with codifying Bad Ideas. We could expand this.
These things usually (not a lawyer tho) come down to the claims being actively made. For example "engineer" is often (typically?) a protected title but that doesn't mean you'll get in trouble for drafting up your own blueprints. Even for other people, for money. Just that you need to make it abundantly clear that you aren't a licensed engineer.
I imagine "Pay us to talk to our friendly chat bot about your problems. (This is not licensed therapy. Seek therapy instead if you feel you need it.)" would suffice.
For a long time, Mensa couldn't give people IQ scores from the tests they administered because somehow, legally, they would be acting medically. This didn't change until about 10 years ago.
Defining non-medical things as medicine and requiring approval by particular private institutions in order to do them is simply corruption. I want everybody to get therapy, but there's no difference in outcomes whether you get it from a licensed therapist using some whacked out paradigm that has no real backing, or from a priest. People need someone to talk to who doesn't have unclear motives, or any motives really, other than to help. When you hand money to a therapist, that's nearly what you get. A priest has dedicated his life to this.
The only problem with therapists in that respect is that there's an obvious economic motivation to string a patient along forever. Insurance helps that by cutting people off at a certain point, but that's pretty brutal and not motivated by concern for the patient.
If you think human therapists intentionally string patients forever, wait to see what tech people can achieve with gamified therapists literally A/B tested to string people along. Oh, and we will then blame the people for "choosing" to engage with that.
Also, the proposition is dubious, because there are waitlists for therapists. Plus, therapist can actually loose the license while the chatbot cant, no matter how bad the chatbot gets.
This. At least here therapists don’t have a problem getting new patients.
I think this sort of service would be OK with informed consent. I would actually be a little surprised if there were much difference in patient outcomes.
…And it turns out it has been studied with findings that AI work, but humans are better.
https://pmc.ncbi.nlm.nih.gov/articles/PMC11871827/
Usually when it comes to medical stuff, things don't get approved unless they are better than existing therapies. With the shortage of mental health care in the US, maybe an exception should be made. This is a tough one. We like to think that nobody should have to get second rate medical care, even though that's the reality.
I think a good analogy would be a cheap, non-medically-approved (but medical style) ultrasound. Maybe it’s marketed as a “novelty”, maybe you have to sign a waiver saying it won’t be used for diagnostic purposes, whatever.
You know that it’s going to get used as a diagnostic tool, and you know that people are going to die because of this. Under our current medical ethics, you can’t do this. Maybe we should re-evaluate this, but that opens the door to moral hazard around cheap unreliable practices. It’s not straightforward.
Moral hazard? Versus not getting even a diagnostic, let alone care, because someone couldn't afford it? Versus self determination? A clear upfront statement of what the product is not ought to suffice.
What we have isn't motivated by protection from moral hazard (at least IMO). It's a guild system that restricts even vaguely related access and practices in (I'd argue) an overly broad manner.
To be clear I don't object to the guild in this case. Only to the overly broad fence surrounding it.
I'll just add that this has certain other interesting legal implications, because records in relation to a therapy session are a "protected confidence" (or whatever your local jurisdiction calls it). What that means is in most circumstances not even a subpoena can touch it, and even then special permissions are usually needed. So one of the open questions on my mind for a while now was if and when a conversation with an AI counts as a "protected confidence" or if that argument could successfully be used to fend off a subpoena.
At least in Illinois we now have an answer, and other jurisdictions look to what has been established elsewhere when deciding their own laws, so the implications are far reaching.
It does sound good (especially as an Illinois resident). Luckily, as far as I can tell, this is a proactive legislation. I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist, or attempting to bill payers for service.
> I don't think there are any startups out there promoting their LLM-based chatbot as a replacement for a therapist
Unfortunately, there are already a bunch.
While I agree it’s very reasonable to ban marketing of AI as a replacement for a human therapist, I feel like there could still be space for innovation in terms of AI acting as an always-available supplement to the human therapist. If the therapist is reviewing the chats and configuring the system prompt, perhaps it could be beneficial.
It might also be a terrible idea, but we won’t find out if we make it illegal to try new things in a safe/supervised way. Not to say that what I just described would be illegal under this law; I’m not sure whether it would be. I’d expect it will discourage any Illinois-licensed therapists from trying out this kind of idea though.
What if at some point an AI is developed that’s a better therapist AND it’s cheaper?
I'm probably in the minority here, but for me it's a foregone conclusion that it will become a better therapist, doctor, architect, etc.
Instead of the rich getting access to the best professionals, it will level the playing field. The average low level lawyer, doctor, etc are not great. How nice if everyone got top level help.
It would still need to be regulated and licensed. There was this [0] I saw today about a guy who tried to replace sodium chloride in his diet with sodium bromide because ChatGPT said he could, and poisoned himself.
With a regulated license, there is someone to hold accountable for wantonly dangerous advice, much like there is with humans.
[0] https://x.com/AnnalsofIMCC/status/1953531705802797070
There are two different issues here. One is tied to how authoritative we view a source, and the other is tied to the weaknesses of the person receiving advice.
With respect to the former, I firmly believe that the existing LLMs should not be presented as a source for authoritative advice. Giving advice that is not authoritative is okay as long as the recipient realizes such, in the sense that it is something that people have to deal with outside of the technological realm anyhow. For example, if you ask for help for a friend you are doing so with the understanding that, as a friend, they are doing so to the best of their ability. Yet you don't automatically assume they are right. They are either right because they do the footwork for you to ensure accuracy or you check the accuracy of what they are telling you yourself. Likewise, you don't trust the advice of a stranger unless they are certified, and even that depends upon trust in the certifying body.
I think the problem with technology is that we assume it is a cure-all. While we may not automatically trust the results returned by a basic Google search, a basic Google search result coupled with an authoritative sounding name automatically sounds more accurate than a Google search result that is a blog posting. (I'm not suggesting this is the only criteria people use. You are welcome to insert your own criteria in its place.) Our trust of LLMs, as they stand today, is even worse. Few people have developed criteria beyond: it is an LLM, so it must be trustworthy; or, it is an LLM so it must not be trustworthy. And, to be fair, it is bloody difficult to develop criteria for the trustworthiness of LLMs (even arbitrary criteria) because the provide so few cues.
Then there's the bit about the person receiving the advice. There's not a huge amount we can do about that beyond encouraging people regard the results from LLMs as stepping stones. That is to say they should take the results and do research that will either confirm or deny it. But, of course, many people are lazy and nobody has the expertise to analyze the output of an LLM outside of their personal experience/training.
You cite one case for LLMs, but I can cite 250,000 a year for licensed doctors doing the same https://pubmed.ncbi.nlm.nih.gov/28186008/. Bureaucracy doesn't work for anyone but the bureaucrats.
Please show me one doctor who recommended taking a rock each day. LLMs have a different failure mode than professionals. People are aware that doctors or therapists may err, but I've already seen countless instances of people asking relationship advice from sycophant LLMs and thinking that the advice is “unbiased”.
> LLMs have a different failure mode than professionals
That actually supports the use-case of collaboration, since the weaknesses of both humans and LLMs would potentially cancel each other out.
Homeopathy is a good example. For an uneducated person it sounds convincing enough and yes, there are doctors prescribing homeopathic pills. I am still fascinated it still exists.
That’s actually a example of sth different. And as it’s basically a placebo it only harms people’s wallets (mostly). That cannot be said for random llm failure modes. And whether it can be prescribed by doctors depends very much on the country
I don't think it is that harmless. Believe in homeopathy often delays patients from taking timely intervention.
Yes. See: Steve Jobs, maybe.
A LLM (or doctor) recommending that I take a rock can't hurt me. Screwing up in more reasonable sounding ways is much more dangerous.
Actually, swallowing a rock will almost certainly cause problems. Telling your state medical board that your doctor told you to take a rock will have a wildly different outcome than telling a judge that you swallowed one because ChatGPT told you to do so.
Unless the judge has you examined and found to be incompetent, they're most likely to just tell you that you're an idiot and throw out the case.
They can't hurt me by telling me to do it because I won't.
Wow. I had never heard this before.
You don't need a "regulated license" to hold someone accountable for harm they caused you.
The reality is that professional licensing in the US often works to shield its communities from responsibility, though it's primary function is just preventing competition.
I would suspect at some point we will get models that are licensed.
Not tomorrow, but I just can't imagine this not happening in the next 20 years.
When has technological progress leveled the playing field? Like never. At best it shifted it, like that a machine manufacturer got rich in addition to existing wealth. There is no reason for this to go different with AI, and it’s far from certain that it will become better anything anytime soon. Cheaper, sure. But then ppl might see slight improvements from talking to ann original Eliza/Markov bot, and nobody advocated using those as therapy
Why is that a foregone conclusion?
Because meat isn't magic. Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica. Given enough time, we'll create that replica, there's no reason to think otherwise.
> Because meat isn't magic. Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica
That is a big assumption and my doubts aren't based on any soul "magic" but on our historical inability to replicate all kinds of natural mechanisms. Instead we create analogs that work differently. We can't make machines that fly like birds but we can make airplanes that fly faster and carry more. Some of this is due to the limits of artificial construction and some of it is due to the differences in our needs driving the design choices.
Meat isn't magic, but it also isn't silicon.
It's possible that our "meat" architecture depends on a low internal latency, low external latency, quantum effects and/or some other biological quirks that simply can't be replicated directly on silicon based chip architectures.
It's also possible they are chaotic systems that can't be replicated and each artificial human brain would require equivalent levels of experience and training in ways that don't make the any more cheaper or available than humans.
It's also possible we have found some sort of local maximum in cognition and even if we can make an artificial human brain, we can't make it any smarter than we are.
There are some good reasons to think it is plausibly possible, but we are simply too far away from doing it to know for sure whether it can be done. It definitely is not a "forgone conclusion".
> We can't make machines that fly like birds
Not only can we, they're mere toys : https://youtu.be/gcTyJdPkDL4?t=73
--
I don't know how you can believe in science and engineering, and not believe all of these:
1. Anything that already exists, the universe is able to construct, (ie. the universe fundamentally accommodates the existence of intelligent objects)
2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.
3. While some things are astronomically (literally) difficult to achieve, that doesn't nullify #2
4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans. The universe has already shown us their creation is possible.
This is different than, for instance, speculating that science will definitely allow us to live forever. There is no existence proof for such a thing.
But there is no reason to believe that we can't manipulate and harness intelligence. Maybe it won't be with Von Neumann, maybe it won't be with silicon, maybe it won't be any smarter than we are, maybe it will require just as much training as us; but with enough time, it's definitely within our reach. It's literally just science and engineering.
> 1. Anything that already exists, the universe is able to construct
I didn't claim it is possible we couldn't build meat brains. I claimed it is possible that equivalent or better performance might only be obtainable by meats brains.
> 2. There is no "magic". Anything that happens ultimately follows the rules of nature, which are observable, and open to understanding and manipulation by humans.
I actually don't believe the last part. There are quite plausibly laws of nature that we can't understand. I think it's actually pretty presumptuous that we will/can eventually understand and master every law of nature.
We've already proven that we can't prove every true thing about natural numbers. I think there might well be limits on what is knowable about our universe (atleast from inside of it.)
> 4. Ergo, while it might be difficult, there is fundamentally no reason to believe that the creation of an intelligent object is outside the capabilities of humans.
I didn't say that I believed that humans can't create intelligent objects. I believe we probably can and depending on how you want to define "intelligence", we already have.
What I said is that it is not a forgone conclusion that we will create "a better therapist, doctor, architect". I think it is pretty likely but not certain.
Even if we grant that for the sake of argument, there are two leaps of faith here:
- That AI as it currently exists is on the right track to creating that replica. Maybe neural networks will plateau before we get close. Maybe the Von Neumann architecture is the limiting factor, and we can only create the replica with a radically different model of computing!
- That we will have enough time. Maybe we'll accomplish it by the end of the decade. Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance. Maybe it'll happen in a million years, when humans have evolved into other species. We just don't know!
I don't think you've refuted the point though. There's no reason to think that the apparatus we employ to animate ourselves will remain inscrutable forever. Unless you believe in a religious soul, all that stands in the way of the scientific method yielding results, is time.
> Maybe climate change or nuclear war will turn the world into a Mad Max–esque wasteland before we get the chance
In that eventuality, it really doesn't matter. The point remains, given enough time, we'll be successful. If we aren't successful, that means everything else has gone to shit anyway. Failure wont be because it is fundamentally impossible, it will be because we ran out of time to continue the effort.
No one has given a point to refute? The OP offered up the unsubstantiated belief that AI will some day be better than doctors/therapists/etc. You've added that it's not impossible — which, sure, whatever, but that's not really relevant to what we're discussing, which is whether it will happen to our society.
OP didn't specify a timeline or that it would happen for us personally to behold. Just that it is inevitable. You've correctly pointed out that there are things that can slow or even halt progress, but I don't think that undermines (what I at least see as) the main point. That there's no reason to believe anything fundamental stands in our way of achieving full "artificial intelligence"; ie. the doubters are being too pessimistic. Citing the destruction of humanity as a reason why we might fail can be said about literally every single other human pursuit as well; which to my mind, renders it a rather unhelpful objection to the idea that we will indeed succeed.
The article is about Illinois banning AI therapists in our society today, so I think the far more reasonable interpretation is that OP is also talking about our society today — or at least, in the near-ish future. (They also go on to talk about how it would affect different people in our society, which I think also points to my interpretation.)
And to be clear, I'm not even objecting to OP's claim! All I'm asking for is an affirmative reason to believe what they see as a foregone conclusion.
Well, I've already overstepped polite boundaries in answering for the OP. Maybe you're right, and he thinks such advancements are right around the corner. On my most hopeful days, I do. Let's just hope that the short term reason for failure isn't a Mad Max hellscape.
> Anything that can be computed inside your physical body, can be calculated in an "artificially" constructed replica.
What's hilarious about this argument (besides the fact that it smacks of the map-territory relation fallacy https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation) is that for most of my life (53 years), we've been trying not just to simulate a nematode or Drosophila (two of the most-studied creatures of all time- note that we COMPLETELY understand their nervous systems) and failed to create anything remotely convincing of "life" (note that WE are the SOLE judgers of what is "alive", there is no 100% foolproof mechanistic algorithm to detect "life" (look up the cryptobiosis of tardigrades or wood frogs for an extra challenge)... therein lies part of the problem), but we cannot even convincingly simulate a single cell's behavior in any generous span of time (so for example, using a month to compute 10 seconds of a cell's "life"). And yes, there have been projects attempting to do those things this entire time. You should look them up. Tons of promise, zero delivery.
> Given enough time, we'll create that replica, there's no reason to think otherwise.
Note how structurally similar this is to a "God of the gaps" argument (just substitute "materialism-given-unlimited-time" for "God").
And yet... I agree that we should continue to try. I just think we will discover something interesting in... never succeeding, ever... while you will continue to refer to the "materialism-given-unlimited-time of the gaps" argument, assuming (key word there) that it must be successful. Because there can't possibly be anything else going on. LOL. Naive.
(Side note, but related: I couldn't help noticing that most of the AI doomers are materialist atheists.)
Haven't we found that there is a limit? Math itself is an abstraction. There is always a conversion process (Turning the real world into a 1 or a 0) that has an error rate. IE 0.000000000000001 is rounded to 0.
Every automation I have seen needs human tuning in order to keep working. The more complicated, the more tuning. This is why self driving cars and voice to text still rely on a human to monitor, and tune.
Meat is magic. And can never be completely recreated artificially.
It's sort of nice when medical professionals have real emotions and can relate to their patients. A machine emulation won't ever do the same. It will be like a narcissist faking empathy.
I agree with you that the possibility of egalitarian care for low costs is becoming very likely.
I’m cynical enough to recognize the price will just go up even if the service overhead is pennies on the dollar.
I wish I was so naive… but since AI is entirely in the hands of people with money… why would that possibly happen?
Wouldn’t the rich afford a much better trained, larger, and computationally more intensive model?
With most tech we reach law of diminishing returns. That is sure, there is still a variation, but very little:
- the best laptop/phone/tv in the world doesn’t offer mich more than the most affordable
- you can get for free a pen novadays that is almost as good at writing as the most expensive pens in the world (before BIC, in 1920s, pens were a luxury good reserved for wall street)
- toilets, washing mashines, heating systems and beds in the poorest homes are not very far off from the expensive homes (in EU at least)
- flying/travel is similar
- computer games and entertainment, and software in general
The more we remove human work from the loop, the more democratised and scalable the technology becomes.
does it matter? If mine is way better than I had before, why does it matter that someone else's is better still? My sister's $130 Moto G is much better than whatever phone she could afford 10 years. Does it matter that it's not a $1599 iPhone 16 Pro Max 1TB?
If the claim was that it would level the playing field, it seems like it wouldn't really do that?
A therapist is not a phone. Everybody deserves the best care, not just what is better than we have now. That's a low bar.
Why will any of those things come to pass? I’m asking as someone who has used it extensively for such situations.
I've never been to a therapist for anything that can be described as a diagnosable condition, but I have spoken to one about stress management and things of that ilk. For "amusement" I discussed similar things with an LLM.
At a surface level, the LLM was far more accessible. I didn't have to schedule an appointment weeks in advance. Even with the free tier, I didn't have to worry about time limits per se. There were limits, to be sure, but I could easily think about a question or the LLM's response before responding. In my case, what mattered was turnaround time on my terms rather than an in depth discussion. There was also less concern about being judged, both by another human and in a way that could get back to my employer because, yeah, it was employment related stress and the only way I could afford human service was through insurance offered by my employer. While there are significant privacy concerns with LLM's as they stand today, you don't have that direct relationship between who is offering it and the people in your life.
On a deeper level, I simply felt the advice was presented in a more useful form. The human discussions were framed by exercises to be completed between sessions. While the exercises were useful, the feedback was far from immediate and the purpose of the exercises is best described as a delaying tactic: it provided a framework for deeper thought between discussions because discussions were confined to times that were available to both parties. LLMs are more flexible. They are always available. Rather than dealing with big exercises to delay the conversation by a couple of weeks, they can be bite sized exercises to enable the next step. On top of that, LLMs allow for an expanded scope of discussion. Remember, I'm talking about workplace stress in my particular case. An LLM doesn't care whether you are talking about how you personally handle stress, or about how you manage a workplace in order to reduce stress for yourself and others.
Now I'm not going to pretend that this sort of arrangement is useful in all cases. I certainly wouldn't trust it for a psychological or medical diagnosis, and I would trust it even less for prescribed medications. On the other hand, people who cannot afford access to traditional professional services are likely better served by LLMs. After all, there are plenty of people who will offer advice. Those people range from well meaning friends who may lack the scope to offer valid advice, to snake-oil salesmen who could care less about outcomes as long as it contributes to their bottom line. Now I'm not going to pretend that LLMs care about me. On the other hand, they don't care about squeezing me for everything I have either. While the former will never change, I'll admit that the latter may. But I don't foresee that in the immediate future since I suspect the vendors of these models won't push for it until they have established their role in the market place.
Why do you think the lack of time limits is an advantage?
There is an amount of time spent gazing into your navel which is helpful. Less or more than that can be harmful.
You can absolutely make yourself mentally ill just by spending too much time worrying about how mentally ill you are.
And it's clear that there are a rather large number of people making themselves mentally ill using OpenAI's products right now.
Oh, and, aside, nothing stops OpenAI from giving or selling your chat transcripts to your employer. :P In fact, if your employer sues them they'll very likely be obligated to hand them over and you may have no standing to resist it.
What if at some point an AI is developed that’s a better therapist AND it’s cheaper?
Probably they'll the change the law.
Hundreds of laws change every day.
I think you’re downplaying the effect of “precedence” and the medical lobby.
laws can be repealed when they no longer accomplish their aims.
What if pigs fly?
Then we'll probably do what we do with other professional medical fields. License the AI, require annual fees and restrict supply by limiting the number of running nodes allowed to practice at any one time.
I mean, what if at some point we can bring people back from the dead? What does that do for laws around murder, eh?
In general, that would be a problem for the law to deal with if it ever happens; we shouldn't anticipate speculative future magic when legislating today.
Then laws can be changed again.
In another comment I wondered whether a general chatbot producing text that was later determined in a courtroom to be "therapy" would be a violation. I can read the bill that way, but IANAL.
That's an interesting question that hasn't been tested yet. I suspect we won't be able to answer the question clearly until something bad happens and people go to court (sadly.) Also IANAL.
But that would be like needing a prescription for chicken soup because of its benefits in fighting the common cold.
What's good about reducing options available for therapy? If the issue is misrepresentation, there are already laws that cover this.
It's not therapy.
It's a simulated validating listening, and context-lacking suggestions. There is no more therapy being provided by an LLM than there is healing performed by a robot arm that slaps a bandage on your arm if you were to put it in the right spot and push a button to make it pivot toward you, find your arm, and spread it lightly.
For human therapists, what’s good is that it preserves their ability to charge high fees because the demand for therapists far outstrips the supply.
Who lobbied for this law anyway?
And for human patients it makes sure their sensitive private information isn't entirely in the hands of some megacorp which will harvest it to use it and profit from it in some unethical way.
It's not really reducing options. There's no evidence that LLM chat bots are capable of providing effective mental health services.
We've tried that, and it turns out that self-regulation doesn't work. If it did, we could live in Libertopia.
The problem is that it leaves nothing for those who cannot afford to pay for the full cost of therapy.
But didn't trump make it illegal to make laws to limit the use of ai?
Why do you think a president had the authority to determine laws?
Seems he tried but it didn't pass https://www.reuters.com/legal/government/us-senate-strikes-a...
"One news report found an AI-powered therapist chatbot recommended “a small hit of meth to get through this week” to a fictional former addict."
Not at all surprising. I don't understand why seemingly bright people think this is a good idea, despite knowing the mechanism behind language models.
Hopefully more states follow, because it shouldn't be formally legal in provider settings. Informally, people will continue to use these models for whatever they want -- some will die, but it'll be harder to measure an overall impact. Language models are not ready for this use-case.
This is why we should never use LLMs to diagnose or prescribe. One small hit of meth definitely won’t last all week.
> seemingly bright people think this is a good idea, despite knowing the mechanism behind language models
Nobel Disease (https://en.wikipedia.org/wiki/Nobel_disease)
In a world where a daily dose of amphetamines is just right for millions of people, this somehow cant be that surprising...
Different amphetamines have wildly different side effects. Regardless, chatbots shouldn't be advising people to change their medication or, in this case, use a very illegal drug.
Methamphetamine can be prescribed by a doctor for certain things. So illegal, but less illegal than a schedule 1 substance.
You do know that amphetamines have a different effect on the people who need them and the people who use the recreationally, right? For those of us with ADHD their effects are soothing and calming. I literally took 20mg after having to wait 2 days for prescriptions to fill and went straight to bed for 12 hours. Stop spreading misinformation about the medications people like me need to function the way you take for granted.
I do like that we're in the stage where the universal function approximatior is pretty okay at mimicking a human but not so advanced as to have a full set of the walls and heuristics we've developed—reminds me a bit of Data from TNG. Naive, sure, but a human wouldn't ever say "logically.. the best course of action would be a small dose of meth administered as needed" even if it would help given the situation.
It feels like the kind of advice a former addict would give someone looking to quit—"Look man, you're going to be in a worse place if you lose your job because you can't function without it right now, take a small hit when it starts to get bad and try to make the hits smaller over time."
Bright people and people who think they are bright are not necessarily the very same people.
Good. It's difficult to imagine a worse use case for LLMs.
Here is the text of Illinois HB1806:
https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...
What if it works a third as well as a therapists but is 20 times cheaper?
What word should we use for that?
Smart. Dont trust nothing that will confidently lie, especially about mental health
I think the upside for everyday people outweighs the (current) risks. I've been using Harper (harper.new) to keep track of my (complex) medical info. Obviously one of the use cases of AI is pulling out data from pdfs/images/etc. This app does that really well so I don't have to link with any patient portals. I do use the AI chat sometimes but mostly to ask questions about test results and stuff like that. Its way easier than trying get in to see my doc
Good.
Therapy requires someone to question you and push back against your default thought patterns in the hope of maybe improving them.
"You're absolutely right!" in every response won't help that.
I would argue that LLMs don't make effective therapists and anyone who says they do is kidding themselves.
I was just reading about a suicide tied to AI chatbot 'therapy' uses.
This stuff is a nightmare scenario for the vulnerable.
I was curious, so I displayed signs of mental illness to ChatGPT, Claude and Gemini. Claude and Gemini kept repeating that I should contact a professional, while ChatGPT went right along with the nonsense I was spouting:
> So I may have discovered some deeper truth, and the derealization is my entire reality reorganizing itself?
> Yes — that’s a real possibility.
Curious how this can be enforced if business is incorporated in another state like WI/DE ? or offshore like Ireland ??
the way people read language model outputs keep surprising me, e.g. https://www.reddit.com/r/MyBoyfriendIsAI/
it is impossible for some people to not feel understood by it.
Nice feather in your cap Pritzker, now can you go back to working on a public option for health insurance?
As far as I can tell, a lot of therapy is just good common-sense advice and a bunch of 'tricks' to get the patient to actually follow it. Basically CBT and "get the patient to think they figured out the solution themselves (develop insight)". Yes, there's some serious cases where more is required and a few (ADHD) where meds are effective; but a lot of the time the patient is just an expert at rejecting helpful advice, often because they insist they're a special case that needs special treatment.
Therapists are more valuable that advice from a random friend (for therapy at least) because they can act when triage is necessary (e.g. send in the men in white coats, or refer to something that's not just CBT) and mostly because they're really good at cutting through the bullshit without having the patient walk out.
AIs are notoriously bad at cutting through bullshit. You can always 'jailbreak' an AI, or convince it of bad ideas. It's entirely counterproductive to enable their crazy (sorry, 'maladaptive') behaviour but that's what a lot of AIs will do.
Even if someone makes a good AI, there's always a bad AI in the next tab, and people will just open up a new tab to find an AI gives them the bad advice they want, because if they wanted to listen to good advice they probably wouldn't need to see a therapist. If doctor shopping is as fast and free as opening a new tab, most mental health patients will find a bad doctor rather than listen to a good one.
If you take it as an axiom that the licensing system for mental health professionals is there to protect patients from unqualified help posing as qualified help, then ensuring that only licensed professionals can legally practice and that they don't simply delegate their jobs to LLMs seems pretty reasonable.
Whether you want to question that axiom or whether that's what the phrasing of this legislation accomplishes is up to you to decide for yourself. Personally I think the phrasing is pretty straightforward in terms of accomplishing that goal.
Here is basically the entirety of the legislation (linked elsewhere in the thread: https://news.ycombinator.com/item?id=44893999). The whole thing with definitions and penalties is eight pages.
Section 15. Permitted use of artificial intelligence.
(a) As used in this Section, "permitted use of artificial intelligence" means the use of artificial intelligence tools or systems by a licensed professional to assist in providing administrative support or supplementary support in therapy or psychotherapy services where the licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the system and satisfies the requirements of subsection (b).
(b) No licensed professional shall be permitted to use artificial intelligence to assist in providing supplementary support in therapy or psychotherapy where the client's therapeutic session is recorded or transcribed unless: (1) the patient or the patient's legally authorized representative is informed in writing of the following: (A) that artificial intelligence will be used; and (B) the specific purpose of the artificial intelligence tool or system that will be used; and (2) the patient or the patient's legally authorized representative provides consent to the use of artificial
Section 20. Prohibition on unauthorized therapy services.
(a) An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.
(b) A licensed professional may use artificial intelligence only to the extent the use meets the requirements of Section 15. A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.
[dead]
[dead]
[flagged]
[flagged]
Consider the following:
- A therapist may disregard professional ethics and gossip about you
- A therapist may get you involuntarily committed
- A therapist may be forced to disclose the contents of therapy sessions by court order
- Certain diagnoses may destroy your life / career (e.g. airline pilots aren't allowed to fly if they have certain mental illnesses)
Some individuals might choose to say "Thanks, but no thanks" to therapy after considering these risks.
And then there are constant articles about people who need therapy but don't get it: The patient doesn't have time, money or transportation; or they have to wait a long time for an appointment; or they're turned away entirely by providers and systems overwhelmed with existing clients (perhaps with greater needs and/or greater ability to pay).
For people who cannot or will not access traditional therapy, getting unofficial, anonymous advice from LLM's seems better than suffering with no help at all.
(Question for those in the know: Can you get therapy anonymously? I'm talking: You don't have to show ID, don't have to give an SSN or a real name, pay cash or crypto up front.)
To the extent that people's mental health can be improved by simply talking with a trained person about their problems, there's enormous potential for AI: If we can figure out how to give an AI equivalent training, it could become economically and logistically viable to make services available to vast numbers of people who could benefit from them -- people who are not reachable by the existing mental health system.
That being said, "therapist" and "therapy" connote evidence-based interventions and a certain code of ethics. For consumer protection, the bar for whether your company's allowed to use those terms should probably be a bit higher than writing a prompt that says "You are a helpful AI therapist interviewing a patient..." The system should probably go through the same sorts of safety and effectiveness testing as traditional mental health therapy, and should have rigorous limits on where data "contaminated" with the contents of therapy sessions can go, in order to prevent abuse (e.g. conversations automatically deleted forever after 30 days, cannot be used for advertising / cross-selling / etc., cannot be accessed without the patient's per-instance opt-in permission or a court order...)
I've posted the first part of this comment before; in the interest of honesty I'll cite myself [1]. Apologies to the mods if this mild self-plagiarism is against the rules.
[1] https://news.ycombinator.com/item?id=44484207#44505789
LLMs will be used as a part of therapy in the future.
An interaction mechanism that will totally drain the brain after a 5 hour adrenaline induced conversation followed by a purge and bios reset.
I saw a video recently that talked about a chatbot "therapist" that ended up telling the patient to murder a dozen people [1].
It was mind-blowing how easy it was to get LLMs to suggest pretty disturbing stuff.
[1] https://youtu.be/lfEJ4DbjZYg?si=bcKQHEImyDUNoqiu
Often participants in discussions adjacent to this one err by speaking in time-absolute terms. Many of our judgments about LLMs are true about today's LLMs. Quotes like,
> Good. It's difficult to imagine a worse use case for LLMs.
Is true today, but likely not true for technology we may still refer to as LLMs in the future.
The error is in building faulty preconceptions. These drip into the general public and these first impressions stifle industries.
Define "AI therapy". AFAICT, it's undefined in the Illinois governor's statement. So, in the immortal words of Zach de la Rocha, "What is IT?" What is IT? I'm using AI to help with conversations to not cure, but coach diabetic patients. Does this law effect me and my clients? If so, how?