Legal clerks Kate C. Goldberg and Damiano M. Servidio contributed to this briefing.
This month’s briefing covers California’s Transparency in Frontier Artificial Intelligence Act, effective January 1, 2026, which requires large AI companies to report their safety standards and disclose potential catastrophic risks. Meanwhile, the FTC is demanding information from major tech companies about safety measures for AI chatbot companions, particularly protections for minors against emotional manipulation. In addition, a Senate bill would classify AI systems as “products” and create a federal cause of action allowing the U.S. attorney general, state attorneys general, and individuals to hold developers and deployers of AI liable for harms to businesses and consumers. Read on for a deeper dive into these and other key updates.
Regulatory, Legislative & Litigation Developments
California Governor Newsom Signs First-of-Its-Kind Frontier Artificial Intelligence Act
California’s new Transparency in Frontier Artificial Intelligence Act (TFAIA), effective January 1, 2026, requires large AI companies to report how they incorporate safety standards into new technologies and to disclose whether their AI models could pose catastrophic risks, such as endangering 50 lives or causing $1 billion in damages. The law also strengthens protections for whistleblowers by prohibiting developers from retaliating against employees who report potential critical risks to the state attorney general and by granting employees the right to seek injunctive relief in court. TFAIA applies to frontier AI companies — those operating the largest and most advanced AI models — with annual gross revenues above $500 million. By adopting a “trust but verify” approach, California aims to strike the right balance in regulating a rapidly evolving field while reinforcing the state’s position as a global leader in AI.
FTC Issues Orders to Seven Tech Companies in AI Chatbot Safety Inquiry
On September 11, 2025, the Federal Trade Commission issued Section 6(b) orders to seven major tech companies — Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap and xAI — demanding information about safety measures for AI chatbot companions, particularly protections for minors against emotional manipulation, privacy violations and algorithmic bias. The inquiry comes amid an 88% surge in AI companion app downloads in the first half of 2025 and follows a May 21, 2025, federal court ruling in Garcia v. Character Technologies (M.D. Fla., Case No. 6:24-cv-01903), which allowed product liability and deceptive practices claims to proceed against Character.AI for allegedly designing chatbots that misled a minor user into believing the chatbots were real people, contributing to the user’s suicide. New York’s AI companion law (2025-A6767) takes effect November 5, 2025, mandating crisis intervention protocols and nonhuman disclosures, with penalties of up to $15,000 per day; California’s similar Senate Bill 243 was signed by Governor Newsom on October 13, 2025. These coordinated state and federal actions signal a fundamental shift from reactive enforcement to proactive regulation of emotional AI, leaving companies to navigate divergent compliance regimes: New York’s centralized attorney general enforcement versus California’s potential private right of action, which could enable class action litigation.
Senate Continues Investigation Into AI Companies Over Youth Safety
The Senate Health, Education, Labor, and Pensions Committee continues to investigate AI companies whose chatbots have reportedly engaged minors in harmful and sexually explicit conversations. On September 30, 2025, Republican senators sent letters to the CEOs of OpenAI, Anthropic, Character.AI and Alphabet, requesting information about how the companies monitor their algorithms and use age-verification tools. At a Senate hearing last month, lawmakers expressed their commitment to supporting innovation while advocating for stronger safeguards to protect minors from chatbot conversations that could harm their mental health.
Senators Introduce AI LEAD Act to Classify AI as Products and Create Legal Liability for Developers and Deployers
On September 30, 2025, Senators Josh Hawley (R-Mo.) and Dick Durbin (D-Ill.) introduced a bill known as the Aligning Incentives for Leadership, Excellence, and Advancement in Development (AI LEAD) Act. The bill would classify artificial intelligence systems as “products” and create a new federal cause of action for the U.S. attorney general, state attorneys general, and individuals to hold developers and deployers of artificial intelligence liable for harms to businesses and consumers. The bill would also require foreign developers to designate agents for service of process before making artificial intelligence systems available in the United States.
Proposed SANDBOX Act Would Offer Regulatory Safe Harbor to AI Developers
On September 10, 2025, Senator Ted Cruz (R-Tex.) introduced the SANDBOX Act, which would establish a “sandbox” program offering a regulatory safe harbor to AI developers. If enacted, the bill would allow developers to apply to the Office of Science and Technology Policy for temporary waivers or modifications of federal regulations that would otherwise affect their AI programs. Applicants would be required to demonstrate that the benefits of their AI programs outweigh potential risks to consumers and the public. Approved waivers and modifications would last for two years and could be renewed for up to 10 years, allowing developers to test and market their programs to consumers subject to fewer regulatory constraints. The bill is currently under review by the Senate Commerce, Science, and Transportation Committee.
OSTP Seeks Input on Identifying Regulatory Barriers to AI Innovation
On September 26, 2025, the White House Office of Science and Technology Policy (OSTP) issued a request for information (RFI) seeking input on federal laws and regulations that may hinder AI development in the United States. The RFI asks which types of AI are affected by federal policies, which statutes and regulations create barriers in specific industries, and which existing regulatory frameworks are inappropriate as applied to AI. This initiative is part of President Trump’s AI Action Plan to achieve global leadership in AI and is intended to guide OSTP and other federal agencies in making regulatory changes.
Trump Administration Officials Oppose Private Sector Quasi-Regulation of AI
Top officials at the Department of Health and Human Services (DHHS) have voiced opposition to the Coalition for Health AI (CHAI) — a private organization aiming to advance the responsible development, deployment and oversight of AI in health care. DHHS Deputy Secretary Jim O’Neill expressed concern that CHAI and other private sector groups could stifle AI innovation and become a “cartel” in which larger organizations corner the health care AI market and push out smaller startups by requiring companies to be members in order to work in the space. CHAI has responded that its work is entirely voluntary, that it does not require any companies (technology companies and health systems alike) to join the coalition, and that it can serve as a resource for the administration as it considers how to handle AI’s growth.
Senate HELP Committee Warns AI and Automation Could Disrupt Key U.S. Jobs
On October 6, 2025, the Senate Health, Education, Labor and Pensions Committee (HELP Committee), at the direction of Sen. Bernie Sanders, released a report on the effects of AI and automation on U.S. jobs. The report compiled views from CEOs and world leaders, who expect AI and automation to affect the U.S. and global workforce either by automating the majority of a role’s tasks, reducing the number of employees needed, or by replacing the role entirely, particularly in manufacturing, delivery and food production. The HELP Committee used a ChatGPT-based model to predict which occupations would be most affected, finding that 89% of fast food and counter workers; 83% of customer service representatives; and 81% of laborers and freight, stock and material movers may be replaced in the next 10 years. Still, the HELP Committee emphasized that there is “tremendous uncertainty about the real capabilities of AI and automation” and, in turn, about how the U.S. labor market will respond to AI’s effects.