September 11, 2025

The Risks of Using GenAI for Mental Health Services

Regulation, Enforcement and Litigation

At a Glance

  • Illinois enacted the Wellness and Oversight for Psychological Resources Act, which prohibits the use of chatbots to facilitate clinical decision making in the provision of mental health services.
  • The Texas attorney general has issued civil investigative demands to technology companies to investigate whether the companies’ chatbots have allegedly provided mental health services without proper accreditation and/or violated the Texas Deceptive Trade Practices Act by marketing their chatbots as health professionals.
  • In a lawsuit filed on August 26, 2025, against OpenAI, Inc. and Sam Altman, the parents of Adam Raine allege that ChatGPT encouraged their son, over a period of several months, to commit suicide.

As 2025 has unfolded and adoption of Generative AI (GenAI) tools in the medical field has continued to spread, there have been numerous attempts to regulate the use of GenAI in that space. In August alone, several developments highlighted the growing focus on, and the risks associated with, the deployment of GenAI, particularly in connection with the provision of mental health services.

Illinois’ Wellness and Oversight for Psychological Resources Act

On August 4, 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act. The Act prohibits the use of chatbots to facilitate clinical decision making in the provision of mental health services. At its essence, the Act provides the following:

  • Prohibition: The law bans the use of AI-powered chatbots for making clinical decisions in mental health settings. This means chatbots cannot diagnose, treat or determine care plans for patients seeking mental health services.
  • Human Oversight Required: Only licensed mental health professionals are permitted to make such clinical decisions. Technology can be used for support, but not as a replacement for professionals in key decision-making roles.
  • Reasoning: The legislation was enacted due to concerns about accuracy, patient safety, privacy and the potential for harm if chatbots make inappropriate or incorrect decisions.
  • Scope: The law applies to all mental health providers and organizations operating in Illinois, regardless of whether the chatbot is developed in-house or provided by a third party.
  • Exceptions: The law allows the use of GenAI and chatbots for administrative tasks (like scheduling or reminders) and for nonclinical support, but not for medical judgment or treatment recommendations.

Texas AG Investigates AI Chatbot Platforms

While no other state has enacted a law like Illinois’, which specifically focuses on clinical decision making in mental health settings, other states have attempted to use, or signaled their willingness to use, traditional laws to police the use of chatbots and other GenAI tools in the provision of mental health services. For example, the Office of the Texas Attorney General announced in August 2025 that it had issued civil investigative demands to large technology companies as part of an investigation into whether the companies’ chatbots had allegedly provided mental health services without proper accreditation and/or violated the Texas Deceptive Trade Practices Act by marketing their chatbots as health professionals.

The Texas attorney general’s efforts may reflect one of the first attempts, if not the first, to apply state licensure laws and regulations relating to the accreditation required to practice medicine to companies deploying AI-driven chatbots.

Raine v. OpenAI, Inc.

Finally, beyond regulatory and enforcement risks, deploying AI chatbots also carries litigation risk. In a lawsuit filed on August 26, 2025, against OpenAI, Inc. and Sam Altman, the parents of Adam Raine allege that ChatGPT encouraged their son, over a period of several months, to commit suicide. The complaint purports to cite transcripts of chats with prompts from Adam Raine in which ChatGPT, among other things, described which materials could be effective for constructing a noose, provided feedback on the most effective means of attempting various methods of suicide, and offered to assist with drafting a suicide note to his parents. Upon discovering the chat transcripts following Adam Raine’s suicide, his parents filed suit, alleging that ChatGPT’s outputs were the result of a purposeful effort by OpenAI to foster a psychological connection between ChatGPT and its users in order to drive further adoption of OpenAI’s GenAI tools. The parents are seeking an undisclosed amount in damages, as well as injunctive relief.

In Conclusion

As the foregoing demonstrates, the risks associated with deploying GenAI in a manner that is designed to assist in, or may be viewed as assisting in, the provision of mental health services are both far-reaching and rapidly evolving. In evaluating those risks and potential pitfalls, it is important to consider not only recently enacted laws that specifically address GenAI, but also how more traditional laws and regulations may apply to these new use cases.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.
