This month's briefing covers the Musk v. OpenAI trial, as well as litigation by Musk's xAI and the US DOJ challenging Colorado's AI law. Additionally, Florida's attorney general has opened a criminal investigation into OpenAI. Meanwhile, Alabama, Indiana, Utah, and Washington State have adopted laws this year impacting the use of AI in health insurance claims and prior authorizations. Read on for a deeper dive into these and more key updates.
Regulatory, Legislative & Litigation Developments
Musk v. OpenAI Trial Begins in Federal Court
On April 27, 2026, trial began before Judge Yvonne Gonzalez Rogers in federal court in Oakland, California, in the high-stakes lawsuit brought by Elon Musk against OpenAI, its CEO Sam Altman, its President Greg Brockman, and Microsoft. Musk's attorneys argued in opening statements that OpenAI's leaders "stole a charity," contending that the company's conversion to a for-profit structure betrayed the nonprofit mission on which Musk based his founding contributions. Musk is seeking a reversion of OpenAI to a nonprofit structure, removal of Altman and Brockman from OpenAI's leadership, and more than $130 billion in damages to be returned to OpenAI's nonprofit foundation. Musk took the stand on the first day of testimony and testified that he would not have contributed his resources to OpenAI had he known the founders intended to operate the company for profit. The outcome of the trial could have significant implications for the AI sector.
Florida Attorney General Launches Criminal Investigation into OpenAI
Florida Attorney General James Uthmeier announced that the Florida Office of Statewide Prosecution launched a criminal investigation into OpenAI, alleging that ChatGPT provided significant advice to the suspect in a 2025 mass shooting at Florida State University. The advice allegedly included guidance on weapons, timing, and target location. The Office of Statewide Prosecution has issued subpoenas to OpenAI seeking information about its policies and internal training materials related to user threats of harm to others and themselves, how it cooperates with and reports crimes to law enforcement, and publicly released media and statements related to the April 2025 shooting. The office recently expanded the investigation to encompass a separate homicide that occurred at the University of South Florida in 2026 after learning that the accused killer in that case also used ChatGPT. OpenAI has denied responsibility, stating that ChatGPT provided only factual responses available from public sources on the internet and did not encourage or promote illegal activity.
Musk's xAI and DOJ Sue Colorado, Alleging Constitutional Flaws in State AI Law
In our April 1, 2026, AI briefing, we reported that the Colorado AI Policy Workgroup (convened by Gov. Polis) published a revised policy framework addressing the use of Automated Decision-Making Technology (ADMT) in making consequential decisions about Colorado consumers. Since then, on April 9, 2026, Elon Musk's xAI sued Colorado, alleging that many of the key terms in the original Colorado AI Act are "unconstitutionally vague" and that the law overreaches by regulating development activities in other states. xAI also alleges a First Amendment violation, arguing that complying with the law would require it to alter Grok's training and would thereby limit its freedom of speech.
On April 24, the Department of Justice moved to intervene, asserting that the Colorado law "jeopardizes the United States' position as the global AI leader by requiring AI systems to incorporate discriminatory ideology that prioritizes preferred demographic characteristics and outcomes over accurate and merit-based outputs."
On April 27, Magistrate Judge Chung of the US District Court for the District of Colorado issued an order barring Colorado from enforcing the law against any alleged violations until after the court rules on xAI's motion for a preliminary injunction.
New State Laws Curb AI-Only Decisions in Health Insurance
States are increasing regulation of the use of AI by health insurance companies. So far this year, Alabama (SB 630), Indiana (HB 1271), Utah (SB 0319), and Washington (SB 5395) have adopted laws impacting the use of AI in connection with health insurance claims and prior authorization requirements. The laws vary in their approach, with some focused on preventing adverse decisions made solely by AI systems without human involvement, while others aim to reduce the risk of bias in insurance determinations. To address these risks, the laws add notice requirements informing consumers of the use of AI and require human review of decisions that would negatively impact consumers.
Washington OIC Adds AI Disclosure Questions to SERFF Rate and Form Filings
While Washington is not among the 12 states participating in the National Association of Insurance Commissioners' (NAIC) AI Systems Evaluation Tool pilot, the state's Office of the Insurance Commissioner is taking its own steps to understand how insurers are using AI in connection with rates and policy forms. Effective May 1, 2026, all new SERFF filings in Washington — covering property and casualty (P&C) and life, health, and disability (LHD) form, rate, and network submissions — must disclose whether AI tools (including generative AI, machine learning, or vendor-embedded AI) were used to create or support the filing. If AI tools were used, filers will be asked to identify the system, describe what it did, and characterize its impact on the filing's substance.
EIOPA Proposes Clarification of EU AI Act for Actuarial Models
In an April 13, 2026, letter to senior EU policymakers — including the European Commission, European Council, and European Parliament — the European Insurance and Occupational Pensions Authority (EIOPA) provided targeted recommendations on how the EU's new Artificial Intelligence Act should be applied within the insurance sector. The letter explains that while the AI Act introduces a cross-sector, risk-based framework governing certain AI uses (including "high-risk" applications such as underwriting and pricing in life and health insurance), its interaction with existing EU insurance legislation (e.g., Solvency II, the Insurance Distribution Directive (IDD), and the Digital Operational Resilience Act (DORA)) raises practical implementation challenges. EIOPA cautions that, absent clarification, the regime could unintentionally capture long-established, transparent actuarial models and create duplicative or disproportionate compliance burdens for insurers and supervisors. In particular, EIOPA proposes excluding generalized linear models (GLMs) and generalized additive models (GAMs) — widely used, well-understood, and highly interpretable techniques — from the scope of the AI Act's definition of AI systems, or at least from classification as "high-risk" systems.
Kansas Federal Court Extends AI Restrictions to All Discovery Materials
In Jeffries v. Harcros Chemicals Inc., No. 25-2352-KHV-ADM (D. Kan. Mar. 25, 2026), Magistrate Judge Angel D. Mitchell entered an amended protective order extending AI tool restrictions to all discovery materials in a putative class action alleging toxic emissions from an industrial facility. The order requires parties to provide five business days' advance notice before using any AI tool on discovery materials, including detailed disclosures about the tool's vendor, hosting, security measures, and training policies. Producing parties may object and trigger mandatory meet-and-confer obligations, with AI use prohibited pending resolution. The order also prohibits training AI tools on discovery materials (except action-specific tools destroyed at case conclusion) and requires deletion of all discovery materials from AI systems after litigation ends, reflecting judicial concern about data security and the potential incorporation of litigation materials into AI training datasets.
Sullivan & Cromwell Apologizes for AI-Generated Errors in Bankruptcy Court Filings
On April 18, 2026, Sullivan & Cromwell apologized to Chief Judge Martin Glenn of the US Bankruptcy Court for the Southern District of New York after submitting filings that contained approximately 40 inaccurate citations and other errors generated by artificial intelligence. The mistakes, described as AI "hallucinations," appeared to include fabricated case citations, misquoted legal sources, and references to nonexistent authorities. The errors were detected by opposing counsel Boies Schiller Flexner, prompting Sullivan & Cromwell to file a corrected version and directly apologize to both the court and Boies Schiller. The firm acknowledged that its internal AI policies and review safeguards were not followed, and stated it is evaluating enhancements to its training and review processes.