The quiet ban that could change how AI talks to you

Aug 13, 2025 - 20:48

As AI chatbots become ubiquitous, states are looking to put up guardrails around AI and mental health before it’s too late. With millions of people turning to AI for advice, chatbots have begun posing as free, instant therapists – a phenomenon that, right now, remains almost completely unregulated. 

In the vacuum of federal regulation on AI, states are stepping in to quickly erect guardrails. Earlier this month, Illinois Governor JB Pritzker signed a bill into law that limits the use of AI in therapy services. The bill, the Wellness and Oversight for Psychological Resources Act, blocks the use of AI to “provide mental health and therapeutic decision-making,” while still allowing licensed mental health professionals to employ AI for administrative tasks like note-taking.

The risks inherent in non-human algorithms doling out mental health guidance are myriad, from encouraging recovering addicts to have a “small hit of meth” to engaging young users so successfully that they withdraw from their peers. One recent study found that nearly a third of teens find conversations with AI as satisfying or more satisfying than real-life interactions with friends.

States pick up the slack, again

In Illinois, the new law is designed to “protect patients from unregulated and unqualified AI products, while also protecting the jobs of Illinois’ thousands of qualified behavioral health providers,” according to the Illinois Department of Financial & Professional Regulation (IDFPR), which coordinated with lawmakers on the legislation.

“The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” IDFPR Secretary Mario Treto Jr. said. Violations of the law can result in a $10,000 fine.

Illinois has a history of successfully regulating new technologies. The state’s Biometric Information Privacy Act (BIPA), which governs the use of facial recognition and other biometric systems for Illinois residents, has tripped up many tech companies accustomed to operating with regulatory impunity. That includes Meta, a company that’s now all-in on AI, including chatbots like the ones that recently published chats some users believed were private to an open public feed.

Earlier this year, Nevada enacted its own set of new regulations on the use of AI in mental health services, blocking AI chatbots from representing themselves as “capable of or qualified to provide mental or behavioral health care.” The law also prevents schools from using AI to act as a counselor, social worker or psychologist, or from performing other duties related to the mental health of students. Utah added its own restrictions around the mental health applications of AI chatbots this year as well, though its regulations don’t go as far as those in Illinois or Nevada.

The risks are serious

In February, the American Psychological Association met with U.S. regulators to discuss the dangers of AI chatbots pretending to be therapists. The group presented its concerns to an FTC panel, citing a case last year of a 14-year-old in Florida who died by suicide after becoming obsessed with a chatbot made by the company Character.AI.

“They are actually using algorithms that are antithetical to what a trained clinician would do,” APA Chief Executive Arthur C. Evans Jr. told The New York Times. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is.”

We’re still learning more about those risks. A recent study out of Stanford found that chatbots marketing themselves for therapy often stigmatized users dealing with serious mental health issues and issued responses that could be inappropriate or even dangerous.

“LLM-based systems are being used as companions, confidants, and therapists, and some people see real benefits,” co-author and Stanford Assistant Professor Nick Haber said. “But we find significant risks, and I think it’s important to lay out the more safety-critical aspects of therapy and to talk about some of these fundamental differences.”
