June 23, 2025

Federal Moratorium on State AI Regulation vs. the Latest Regulations From the States; and More

Artificial Intelligence Briefing

This month’s briefing covers the moratorium on state regulation of AI in the “One Big Beautiful Bill Act,” as well as the recently passed or proposed AI regulations from California, Colorado, New York, Rhode Island and Texas. Meanwhile, a federal judge has conditionally certified a nationwide collective action in Mobley v. Workday for the plaintiffs’ claims that Workday’s AI-powered hiring tools systematically discriminate against job applicants over 40. Read on for a deeper dive into these and more key updates.

Regulatory, Legislative & Litigation Developments

AI Moratorium Language in Budget Legislation Draws Criticism

Deep within the latest congressional budget legislation passed by the U.S. House of Representatives, known as the “One Big Beautiful Bill Act,” lies language that would institute a 10-year moratorium on state regulation of artificial intelligence and further complicate the existing AI-related regulatory landscape. Proponents argue that the moratorium will ensure that companies in the fast-developing AI industry are not slowed by a complicated and diverse patchwork of state laws all seeking to regulate the same topics. But numerous members of Congress in both the House and the U.S. Senate, as well as state lawmakers nationwide, have voiced concern over the proposed prohibition on state AI regulation, fearing that the moratorium would prevent states from passing important regulations to protect their residents, workforce and consumers from abusive uses of AI technology. After passing the House, the bill moved to the Senate, where it is currently undergoing further consideration and revision. A revised proposal conditions the provision of certain federal broadband funding on compliance with the moratorium. The Senate parliamentarian has determined that, with this revision, the moratorium complies with the Byrd Rule applicable to reconciliation legislation, bringing the moratorium closer to reality.

New York Passes AI Bill

While Congress debates a potential moratorium on state regulation of AI, the New York legislature has passed a bill that would regulate frontier models that are developed, deployed or operating in the state. The Responsible AI Safety and Education (RAISE) Act would impose transparency and safety obligations on large developers of frontier models and is aimed at preventing critical harm, defined as the death or serious injury of 100 or more people or at least $1 billion in damages to rights in money or property. The bill is now awaiting action by Governor Kathy Hochul (D).

Trump Administration Rescinds Biden-Era AI Diffusion Rule, Announces New Export Controls and Plans for Revised Framework

The U.S. Department of Commerce has rescinded the Biden administration’s AI Diffusion Rule, which was set to take effect on May 15, 2025, citing concerns it would stifle innovation, impose burdensome regulation and strain diplomatic relations. The Trump administration, now steering AI policy, has rejected what it calls the Biden administration’s “ill-conceived and counterproductive” approach and is instead pursuing a more inclusive, innovation-driven strategy that prioritizes collaboration with trusted allies while restricting access to adversaries. The Bureau of Industry and Security (BIS) will issue a formal notice of the rescission and develop a replacement rule. In the interim, BIS has halted enforcement of the rescinded rule. The agency also announced new export control measures aimed at preventing the misuse of advanced U.S. AI chips by China and strengthening protections for global supply chains.

New U.S. Center for AI Standards and Innovation

The latest move in the current administration’s push for global AI dominance is the rebranding and remaking of the U.S. AI Safety Institute at the National Institute of Standards and Technology as the Center for AI Standards and Innovation (CAISI). The plans were announced by Secretary of Commerce Howard Lutnick. CAISI’s goals include developing guidelines and best practices for the security of AI systems, assisting industry in developing voluntary standards, working with the private sector to evaluate AI capabilities that pose potential risks to national security, leading evaluations of domestic and foreign AI systems, assessing vulnerabilities, coordinating with other agencies, and representing “U.S. interests internationally to guard against burdensome and unnecessary regulation of American technologies by foreign governments.” CAISI will work with the Department of Defense, the Department of Energy, the Department of Homeland Security, the Office of Science and Technology Policy, and the Intelligence Community to develop methods and conduct evaluations. For industry, CAISI will serve as the central point of contact with the federal government for testing and collaborative research on commercial AI systems.

Federal Court Certifies Nationwide Collective Action in Workday’s AI Hiring Bias Case

On May 16, 2025, U.S. District Judge Rita F. Lin conditionally certified a nationwide collective action in Mobley v. Workday, Inc., for plaintiffs’ claims that Workday’s AI-powered hiring tools systematically discriminate against job applicants over 40. The ruling enables Derek Mobley and similarly situated applicants to collectively pursue Age Discrimination in Employment Act (ADEA) claims against the HR technology giant, potentially affecting millions of job seekers who applied through Workday’s platform since September 2020. The court found that Workday’s AI recommendation system constitutes a “unified policy” susceptible to common proof of disparate impact, rejecting the company’s arguments about logistical hurdles and individual applicant variations. This decision represents one of the first major judicial challenges to AI hiring tools and follows increasing regulatory scrutiny, including new bias-auditing requirements in New York, Colorado and Illinois, signaling a need for employers using AI-driven recruitment technologies to assess their compliance strategies and potential exposure.

NAIC RFI on AI Model Law & Regulatory Examination Tools

On May 15, 2025, the National Association of Insurance Commissioners’ Big Data and AI Working Group issued a Request for Information soliciting stakeholder input on whether the working group should develop a model law on the use of AI in insurance. The RFI seeks feedback on the possibility of AI model law development; whether existing laws are sufficient to protect consumers; whether a potential model should consider all lines of business, have varying requirements based on company size, and include third-party vendors; and if any specific state legislation should be considered in the working group’s discussions. The RFI also requests feedback on the development of the AI Regulatory Examination Tool (discussed at the Spring National Meeting), specifically to learn whether any industry standard templates should be considered in developing the tool and if any noninsurance templates could be leveraged for insurance industry use. Comments are due on June 30, 2025.

Comment Period Closes on CPPA Draft Automated Decision-Making Technology Rules

The California Privacy Protection Agency (CPPA) recently announced further revisions to its long-proposed automated decision-making technology (ADMT) regulations and to other portions of its current draft rulemaking package. The revisions narrow the scope of the ADMT rules to technology used to make “significant decisions” about consumers. The proposed rules establish requirements for businesses using ADMT for significant decisions, including consumer rights and pre-use notices. The public comment period on these revisions closed on June 2, 2025, and it is not yet clear whether the draft regulations will now be adopted by the CPPA or further revised before finalization.

Texas Legislature Passes AI Governance Act to Regulate Development, Deployment and Innovation

On June 2, 2025, the Texas legislature passed the Texas Responsible Artificial Intelligence Governance Act (HB 149), which now awaits signature by the governor. The act has a broad scope and applies to developers and deployers of any “artificial intelligence system,” defined as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” The act prohibits the development or deployment of AI systems with an intent to unlawfully discriminate against protected classes, establishes transparency oversight requirements for developers and deployers of AI systems, and creates an AI sandbox program for testing of innovative AI systems.

Rhode Island Attorney General Seeks Comment on Whether to Propose AI Rule to Prevent Bias and Misleading Practices

On May 23, 2025, the Rhode Island attorney general issued a Notice of Proposed Rulemaking seeking comment on whether to adopt a rule under the state’s Deceptive Trade Practices Act to prevent unfair or deceptive trade practices involving the use of artificial intelligence, algorithms, software and other emerging technologies. The attorney general expressed concern that such regulation may be necessary because: (1) historically, “many communities have been unfairly impacted when the bias of human-designed systems” resulted in “unwittingly allowing … discriminat[ion]” based on “protected categories” such as “race, national origin, gender, [and] sexual orientation”; and (2) “consumers and businesses may be misled by sellers of these products about the product’s effectiveness” because “[r]esearch reflects that people credit decisions made by computers as more trustworthy than those made by humans.” A hearing on the issue will be held on July 9, 2025, and the public comment period will remain open through July 23, 2025.

Colorado AI Law Update

Last year, Colorado became the first state to enact broad legislation requiring developers and users of AI systems to take steps to prevent algorithmic discrimination. (Our 2024 client alert described the law’s burdensome requirements, which are scheduled to take effect on February 1, 2026.) Legislative efforts to pare down the requirements failed in May 2025, which means that, absent a special session, the law will take effect in its original form.

Anthropic CEO Warns of Imminent White Collar Job Displacement From AI

In a May 28, 2025, Axios interview, Anthropic CEO Dario Amodei warned that AI could eliminate half of all entry-level white collar jobs and push unemployment to 10-20% within the next one to five years, particularly affecting technology, finance, law and consulting positions. Amodei, who served as VP of research at OpenAI before founding Anthropic, criticized AI companies and government for “sugar-coating” the potential mass job displacement, arguing that most workers remain unaware of the looming threat despite rapid advances in AI capabilities. He predicted that the shift from AI augmentation to automation will occur rapidly, potentially within a couple of years, as companies deploy AI agents to replace human workers at significantly lower cost. To address these concerns, Amodei proposed policy solutions, including a “token tax” on AI usage revenue that the government would redistribute, alongside calls for increased public awareness and congressional education, positioning himself as a “truth-teller” seeking to prepare society for an inevitable economic transformation that could concentrate wealth and undermine democracy.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.
