April 01, 2026

National Policy Framework for Artificial Intelligence, and Other Developments

Artificial Intelligence Briefing

This month’s briefing covers the Trump administration’s legislative and regulatory proposals for AI, including sector-specific national standards that would preempt state AI laws. Meanwhile, the Anthropic-Pentagon dispute continues in the courts. Diverging from US and EU legislators, who have immediate plans for AI reforms, the UK government has indicated that it will continue to gather evidence and monitor the evolving international regulatory landscape and technological developments before taking further steps on AI copyright reform. Read on for a deeper dive into these and other key updates.

Regulatory, Legislative & Litigation Developments

White House Recommends Congressional Action on AI, Rejects Creation of New Federal Rulemaking Body

On March 20, 2026, the Trump administration released a “National Policy Framework for Artificial Intelligence,” including seven categories of legislative proposals intended to advance the use of AI through policies that balance encouraging innovation with protecting against harm. The policy proposals focus on protecting against harm to children through parental controls and privacy protection; protecting taxpayers from increased electricity costs associated with data centers; protecting vulnerable populations and the elderly from scams; protecting creators from inappropriate use of their content; and protecting free speech. To enable innovation and adoption, the policy proposals call for streamlining federal permitting for AI infrastructure, providing resources to small businesses for deployment of AI tools, creating “regulatory sandboxes” for AI applications, and encouraging the use of AI training in workforce programs.

While the proposals call on Congress to act on a number of policies, the White House emphasizes that Congress should not create a new federal rulemaking body to regulate AI. Instead, the framework urges Congress to support existing regulatory bodies in developing sector-specific national standards that preempt state AI laws, avoiding a patchwork of state regulations that would hinder innovation.

New Federal AI Bill Targets AI Safety, Liability, and Political/Viewpoint Bias

On March 18, 2026, Senator Marsha Blackburn (R-TN) released the 291-page “Trump America AI Act,” aiming to create a comprehensive federal AI regulatory framework. Key provisions include imposing duties on platforms to protect minors from AI-driven harms, repealing Section 230 immunity for online platforms, and establishing federal AI product liability. The bill also requires large employers to report AI-driven workforce changes, mandates federal safety evaluations of advanced AI, and introduces independent audits for bias based on viewpoint and political affiliation. Other measures address deepfake protections, restrict AI training on copyrighted works, require content watermarking, and set procurement standards for unbiased government AI.

The draft bill would preempt any conflicting state law, except for state legislation providing greater protections for minors. Whether it moves as a package or gets picked apart for individual titles, the bill maps the terrain of what federal AI regulation could look like — and any organization that develops, deploys, or significantly relies on AI should be paying attention.

Federal AI Policy Deadlines Pass; States Await Clarity on Preemption

March 11, 2026, marked a significant deadline under the Trump administration’s executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” which directed the Commerce Department to identify state AI laws deemed inconsistent with federal AI policy and to detail potential BEAD (Broadband Equity, Access, and Deployment) funding restrictions for states with noncompliant laws. The executive order also called for the Federal Trade Commission (FTC) to clarify when Section 5 of the FTC Act would preempt state laws mandating changes to AI model outputs, specifically regarding what constitutes “unfair and deceptive trade practices.”

Neither agency released the expected deliverables, leaving states that have enacted AI regulations in a holding pattern and preparing for possible legal challenges to federal preemption efforts.

Anthropic-Pentagon Dispute Continues in the Courts

On March 26, 2026, Anthropic scored a preliminary victory in the US District Court for the Northern District of California in its challenge to its unprecedented designation as a “supply chain risk” to national security. Anthropic argued that the government’s supply chain risk label and President Trump’s directive to federal agencies to stop using Anthropic’s technology were efforts to retaliate against the company for criticizing the Pentagon’s contracting position in the press. (Anthropic sought to restrict the Pentagon’s ability to use its AI model, Claude, for autonomous lethal warfare and mass surveillance of Americans; the Pentagon rejected those restrictions.)

The district court agreed, granting Anthropic’s motion for a preliminary injunction and stating that the government’s broad punitive measures were likely unlawful. The order enjoined the defendant agencies from implementing or enforcing the supply chain risk designation and the directive ordering all federal agencies to cease use of Anthropic’s technology. The Department of War remains free to stop using Claude and select a different AI vendor in the future.

California Executive Order Strengthens AI Protections

On March 30, 2026, California Gov. Gavin Newsom issued an executive order instructing state agencies to develop new safeguards for AI procurement within 120 days, requiring vendors “to attest to and explain their policies and safeguards to protect public safety while preventing the misuse of their technologies.” In response to the Trump administration’s labeling of Anthropic as a supply chain risk, the order mandates a review of any new federal designations and allows continued procurement from companies whose designation California’s chief information security officer deems improper. Additional reforms target contractor responsibility and are intended “to ensure state entities do not contract with entities judicially determined to have unlawfully undermined privacy or civil liberties.”

Colorado Publishes Revised AI Policy Framework

On March 17, 2026, the Colorado AI Policy Workgroup (convened by Gov. Jared Polis) published a revised policy framework that addresses the use of Automated Decision-Making Technology (ADMT) when making consequential decisions about Colorado consumers. The new framework would substantially rewrite the Colorado AI Act, shifting the focus from bias reporting and mitigation to transparency and making the use of such systems less burdensome. The obligations would apply to developers and deployers of ADMT that process personal information and generate output that materially influences a consequential decision. If enacted, the new legislation would take effect on January 1, 2027.

Connecticut Attorney General Issues Guidance on Applying Existing Laws to AI

On February 25, 2026, the Connecticut attorney general issued a memorandum providing guidance on how existing state laws apply to artificial intelligence systems, emphasizing that businesses and individuals deploying AI must comply with Connecticut’s civil rights, privacy, data security, and consumer protection statutes. The memo clarifies that the state’s antidiscrimination laws prohibit algorithmic discrimination in employment, housing, lending, insurance, and public accommodations regardless of whether discrimination is facilitated by AI or human decision-making, and that the Connecticut Unfair Trade Practices Act (CUTPA) applies to AI-related misrepresentations, deceptive practices, and unconscionable conduct.

Under the Connecticut Data Privacy Act, AI developers and users must limit data collection to what is reasonably necessary; conduct data protection assessments for high-risk processing activities; obtain consent for processing sensitive data; and honor consumers’ rights to access, correct, delete, and opt out of automated profiling. The guidance also warns that AI-facilitated price-fixing, algorithmic collusion, and other anticompetitive conduct violate the Connecticut Antitrust Act, citing the office’s participation in the RealPage litigation involving AI-driven rental price coordination as an example of active enforcement in this area.

Florida House Fails to Pass AI Bill of Rights

The Florida House of Representatives declined to take up an AI Bill of Rights designed to protect consumers and championed by Florida Gov. Ron DeSantis. Among other things, the bill would have required companies to disclose when they use artificial intelligence chatbots in customer interactions and prohibited their use in licensed mental health counseling. The House speaker told reporters he believed the federal government should regulate artificial intelligence. The Florida bill is part of the broader national debate over whether artificial intelligence should be regulated by state or federal governments and how much regulation should be imposed.

Proposed Iowa Legislation Would Regulate Autonomous AI Service Providers in Health Care

On March 24, 2026, an Iowa House Appropriations subcommittee of Representatives Ray “Bubba” Sorensen (R-District 23), Ann Meyer (R-District 29), and Adam Zabner (D-District 90) advanced HSB 766, relating to the licensure of artificial intelligence-augmented and autonomous service providers, to the full Appropriations Committee by a 2-1 vote.

Notably, an Iowa Department of Health and Human Services representative stated that there is no federal Medicaid funding match for AI services. The bill has no Senate companion.

The bill is based on Cicero Institute model legislation, which includes provisions for an AI board of autonomous practice, board operations and powers, levels of autonomy, licensure, and a reciprocity framework. A representative of the Cicero Institute was the only proponent, while the Iowa Medical Society, independent physicians and clinics, and the Iowa Hospital Association opposed the proposal. Opponents said they support AI that supplements, rather than replaces, human care decisions.

National Association of Insurance Commissioners Hears Update on Pilot Program

On March 24, 2026, the NAIC’s Big Data and AI Working Group received an update on the AI Systems Evaluation Tool pilot. Commissioner Nathan Houdek (WI) reported the pilot began in early March, with 12 states reaching out to participating companies. Most states contacted one to 10 companies; a few contacted more than 10. The tool is being used in financial exams, market conduct analysis, and regulatory inquiries. The working group also heard a discussion of AI governance trends and best practices featuring Scott Kosnoff of Faegre Drinker and Anthony Habayeb of Monitaur.

UK Government Abandons Proposed AI Copyright Reform, Opts for Evidence-Gathering Approach

In a new report on copyright and AI, the UK government has formally abandoned its preferred policy of introducing into UK copyright law a broad text and data mining (TDM) exception, with an opt-out for rights holders, for AI training, following overwhelming opposition from rights holders. The report discusses a range of key issues impacting the use of copyright works in the development and deployment of AI systems, including copyright legislation, transparency, technical tools and standards, digital replicas, and the protection of computer-generated works.

Diverging from the approach of legislators in the US and EU, the UK government has indicated that, rather than pursuing immediate legislative reforms, it will continue to gather evidence and monitor the evolving international regulatory landscape and technological developments before taking further steps. This leaves open a fairly wide range of potential legislative developments down the line, as we discuss further in our recent client alert.

Supreme Court Overturns Contributory Copyright Infringement Ruling in Cox

On March 25, 2026, the US Supreme Court reversed the Fourth Circuit’s finding of copyright infringement against Cox, an internet service provider. Sony Music had alleged that Cox was liable for the conduct of its customers who uploaded and downloaded copyrighted music using Cox’s internet services. The Fourth Circuit had affirmed a finding of contributory copyright infringement, relying primarily on Cox’s knowledge of its customers’ infringing acts.

In a unanimous decision, however, the Supreme Court reversed, holding that knowledge alone — without an inducing act or evidence that the service is tailored to facilitate infringement — is insufficient to establish contributory infringement. Although the case does not directly address artificial intelligence platforms, it is a win for providers of general-purpose AI systems: copyright owners must now show more to prevail on a claim for contributory copyright infringement arising from content generated by a general-purpose AI system.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.