Faegre Drinker Biddle & Reath LLP, a Delaware limited liability partnership | This website contains attorney advertising.
November 24, 2025

Potential Challenges to State AI Laws; Senate Hearing on AI; and More Developments

Artificial Intelligence Briefing

This month’s briefing covers the White House’s draft executive order that would task the Department of Justice with challenging state AI laws, as well as U.S. senators’ concerns about AI’s role in content moderation, media consolidation and child safety. Meanwhile, a surge of lawsuits alleging psychological and physical harm from generative AI chatbots is testing how courts will apply traditional liability theories. Read on for a deeper dive into these and other key updates.

Regulatory, Legislative & Litigation Developments

Trump Takes Aim at State Regulation of AI

The White House has reportedly put on hold a draft executive order that would task the Department of Justice with challenging state AI laws, “including on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment….” The draft also would enlist the Department of Commerce, Federal Communications Commission, Federal Trade Commission and other federal agencies in the effort to replace conflicting state laws with a “minimally burdensome national standard.” News of the potential executive order was hailed by big tech but condemned by consumer advocates and lawmakers from both parties. In addition, the White House is reportedly working with Republican lawmakers on a potential moratorium on state regulation that could be included in the National Defense Authorization Act. The Senate overwhelmingly rejected one such proposal that was included in the Big Beautiful Bill.

Senate Hearing Reveals Tensions Over AI Governance, Content Moderation and Free Speech

At an October 29, 2025, Senate Commerce Committee hearing titled “Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans, Part II,” senators voiced concerns about AI’s role in content moderation, media consolidation and child safety. Senator Marsha Blackburn (R-TN) criticized Google after one of its AI models generated false claims against her, demanding answers about AI-generated libel, while also condemning Meta’s lobbying against the Kids Online Safety Act. Ranking Member Maria Cantwell (D-WA) questioned how AI-driven content moderation could create opacity in information systems, urging greater competition and transparency. Both Google and Meta executives testified that they independently develop content policies and emphasized their commitment to resisting external pressure on content moderation decisions.

Surge in Lawsuits Alleging Psychological and Physical Harm From Gen AI Chatbots Raises Questions on Applying Traditional Liability Theories

Litigation against companies that own and operate leading generative AI chatbots has increased significantly. Plaintiffs in these cases assert a variety of claims based on the overarching allegation that users’ interactions with the chatbots resulted in a combination of psychological effects and physical harm to the users. Earlier in 2025, at least three cases were filed against several defendants in which the plaintiffs asserted that minors who had interacted with chatbots committed suicide as a result of those interactions. More recently, seven cases were filed against OpenAI in which the plaintiffs asserted that they suffered various psychological harms as a result of a growing compulsion to interact with chatbots. In virtually every case, the plaintiffs allege that the systems failed to detect harmful language and references to suicide even though the systems were designed to detect such language and trigger certain safeguards or protocols in response.

While a handful of states have enacted legislation attempting to regulate AI chatbots, the majority of the claims asserted in the cases filed to date rest on long-standing theories of liability, including product liability, negligence, negligence per se and intentional infliction of emotional distress. The cases are in the early stages of litigation, and it will be important to monitor how courts apply these traditional theories of liability to relatively new and rapidly evolving technologies.

NAIC Releases Revised AI Systems Evaluation Tool to Help Regulators Assess Consumer and Financial Risks

On November 5, 2025, the National Association of Insurance Commissioners’ (NAIC) Big Data and Artificial Intelligence Working Group released a revised draft of its Artificial Intelligence Systems Evaluation Tool. Building off the original draft and incorporating some of the comments received from industry, the revised tool is intended to help regulators identify and assess, on an ongoing basis, the financial and consumer risks arising specifically from a company’s use of AI systems. Notably, the revised draft: (i) clarifies that the tool is intended to supplement existing NAIC resources and that regulators should continue to consider existing NAIC resources as authoritative; (ii) provides additional instructions for the use of the tool and its exhibits; and (iii) deletes questions about consumer complaints that relate to AI systems. The working group discussed the revised draft during a November 19 call and will hold a special half-day meeting to finalize the tool at NAIC’s Fall National Meeting.

European Commission Announces Simplified Measures and Delayed Implementation for High-Risk AI Under EU AI Act

On November 18, 2025, the European Commission revealed “targeted simplification measures” in respect of the EU AI Act. Key changes include a delay to the implementation timeline of new rules for high-risk AI systems to align with the publication of harmonized standards and regulatory guidance, simplified rules for small and mid-cap companies, and further forthcoming guidelines on the scope of exemptions for research, including pre-clinical research and product development for medicinal products and medical devices.

English High Court Issues First Major Ruling on AI and Copyright

The English High Court has delivered its first major judgment on copyright infringement in respect of AI, in Getty Images’ claim against Stability AI. The claimants faced evidential difficulties in establishing that the allegedly infringing acts of training the AI models with copyright images took place in the United Kingdom. Getty’s claim of secondary copyright infringement failed on the basis that the AI model weights did not constitute infringing copies. The claimants succeeded on some limited historic trademark infringement claims. Further details are in our earlier update.

French Agency Finds Facebook Job Ad Algorithm Indirectly Discriminates by Gender; Recommends Corrective Measures

In October 2025, a French administrative agency, the Defender of Rights (Défenseur des Droits), ruled that Facebook’s algorithm for placing job ads indirectly discriminated based on gender. The ruling was based on studies in which three human rights organizations placed ads on Facebook that did not mention gender and selected targeting criteria as broad as possible. The studies found that the algorithm nonetheless showed stereotypically female jobs (such as early childhood assistant and secretary) to women 80-94% of the time and stereotypically male jobs (such as IT manager and airline pilot) to men 74-85% of the time. Although its ruling is not legally binding, the Defender of Rights recommended that Facebook implement measures to ensure that its ads are disseminated in a nondiscriminatory manner and requested that Facebook respond within three months describing the measures it has taken.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.