‘America’s AI Action Plan’ and More Developments for the Tech, Financial Services, Insurance and Health Care Sectors
Artificial Intelligence Briefing
This month’s briefing covers the Trump administration’s America’s AI Action Plan as well as the California Privacy Protection Agency’s recently finalized rulemaking package that includes regulations on the use of “automated decisionmaking technology.” Meanwhile, the Texas Responsible Artificial Intelligence Governance Act will establish an AI regulatory sandbox program to facilitate supervised testing of AI systems across sectors such as health care, finance, education and public services. Read on for a deeper dive into these and more key updates.
Regulatory, Legislative & Litigation Developments
White House Reveals America’s AI Action Plan
Released by the White House on July 23, 2025, America’s AI Action Plan details a comprehensive national AI strategy to win the global race for AI dominance. Built on three pillars — accelerating AI innovation, building robust American AI infrastructure, and leading in international AI diplomacy and security — the plan identifies more than 90 recommended policy actions for the Trump administration to pursue.
These include, for example, removing regulatory barriers to AI development and deployment, exporting American AI technology to allies, expediting and modernizing permits for data centers, developing initiatives to grow the workforce in occupations vital to data centers (e.g., electricians and HVAC technicians), and revising federal procurement guidelines to protect free speech and contract only with AI developers whose systems are “objective and free from top-down ideological bias.”
Technology companies largely praised the plan, while civil rights and consumer advocacy groups criticized it for ignoring well-documented risks associated with AI and for eliminating references to diversity, equity and inclusion and to climate change from the National Institute of Standards and Technology’s AI Risk Management Framework.
Federal Reserve Governor Provides View on AI
On July 17, 2025, Federal Reserve (Fed) Board Governor Lisa Cook presented the Fed’s perspective on artificial intelligence to the National Bureau of Economic Research. She explained how AI’s transformation of the economy will affect both sides of the Fed’s dual mandate of maximum employment and price stability, and highlighted the Fed’s research efforts to better understand relevant AI implications.
Cook also encouraged organizations to consider four guiding principles for responsible AI adoption: (1) establish strong governance and risk management; (2) educate and train staff; (3) empower teams to learn AI through hands-on engagement in controlled environments; and (4) experiment with AI while retaining the ability to halt projects that do not meet rigorous standards.
Senate Banking Subcommittee Holds AI Hearing Followed by Reintroduction of H.R. 4801
On July 30, 2025, the Senate Banking Committee’s Subcommittee on Securities, Insurance and Investment held a hearing titled “AI’s Role in Capital and Insurance Markets.” Representatives from Aon, NASDAQ and IBM Research provided their views on AI regulation; notably, witnesses and members generally agreed that regulation at some level is needed and that strong governance frameworks are critical to the responsible use of AI.
During the hearing, Sen. Mike Rounds (R-SD), the subcommittee chair, announced the reintroduction of the Unleashing AI Innovation in Financial Services Act (H.R. 4801), bipartisan, bicameral legislation that would promote the use of regulatory sandboxes at federal financial regulatory agencies for experimenting with financial products and services that substantially use AI.
EU Introduces Voluntary Code of Practice for General-Purpose AI Ahead of Stricter AI Act Enforcement
The European Union is building on 2024’s landmark AI Act with a new General-Purpose AI Code of Practice, aimed at helping providers of general-purpose AI models meet strict transparency, copyright and safety standards. Voluntary signatories will be able to demonstrate compliance with the AI Act by adhering to the code’s guidelines, while those who decline to sign may face tougher scrutiny when the full AI Act takes effect, with enforcement (and penalties) ramping up by 2026.
Some critics warn the rules may not do enough to curb the spread of harmful content; meanwhile, a number of European businesses are urging lawmakers to delay enforcement of the AI Act altogether, arguing that it puts Europe’s tech industry at a competitive disadvantage.
BMA Seeks Industry Feedback on Responsible AI Use in Bermuda’s Financial Services Sector
The Bermuda Monetary Authority (BMA) has released a discussion paper, “The Responsible Use of Artificial Intelligence in Bermuda’s Financial Services Sector,” outlining proposed principles and expectations for AI governance and oversight. The paper highlights the opportunities and risks associated with AI and underscores the importance of responsible governance and risk management. The BMA is seeking feedback from stakeholders on the discussion paper’s proposals, with comments due September 30, 2025.
California Privacy Protection Agency Approves ADMT Regulations
Following a rulemaking process that stretched on for more than a year, the California Privacy Protection Agency (CPPA) voted unanimously on July 24, 2025, to adopt a proposed California Consumer Privacy Act (CCPA) rulemaking package that includes regulations on the use of “automated decisionmaking technology,” or ADMT, among other topics. The CPPA’s ADMT regulations have been long awaited, and a prior draft version was scaled back considerably in spring 2025 following criticism from industry, California lawmakers and California Gov. Gavin Newsom.
In particular, the definition of “automated decisionmaking technology” was substantially limited in the most recent draft, now covering a narrower range of activities. Notably, the ADMT regulations include a delayed enforcement period. Businesses that are subject to the CCPA and that use ADMT for “significant decisions” must be in compliance with the ADMT regulations by January 1, 2027.
Massachusetts AG Announces $2.5 Million Settlement With Lender Over AI Loan Underwriting Practices Alleged to Cause Bias
On July 10, 2025, Massachusetts Attorney General Andrea Joy Campbell announced a settlement with Earnest Operations, LLC (Earnest) relating to its use of an artificial intelligence model to underwrite student loan applications. The attorney general alleged that the model employed rules that had a disparate impact on Black and Hispanic applicants and posed an unfair risk of discrimination based on immigration status, in violation of state and federal laws. Under the settlement agreement, Earnest agreed to pay $2.5 million; to revise its model; and to develop corporate governance processes, draft written policies and procedures, and conduct regular testing and monitoring to ensure that its model complies with applicable law.
Sandboxed Intelligence: Texas Opens the Gates for Regulated AI Testing
On June 22, 2025, Texas enacted HB 149 — the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) — which takes effect on January 1, 2026. The law prohibits the intentional use of AI to manipulate human behavior, engage in social scoring, infringe constitutional rights, collect biometric data without consent, or unlawfully discriminate against protected classes.
Notably for insurtech companies and carriers pursuing innovation, TRAIGA establishes an AI regulatory sandbox program to facilitate supervised testing of AI systems across sectors such as health care, finance, education and public services. Approved participants may test AI systems without first obtaining licenses or regulatory approvals and are granted legal protections through the temporary waiver of certain laws and regulations. The testing period may last up to 36 months, with extensions permitted for good cause. To participate, interested parties must apply through the Texas Department of Information Resources, providing a detailed description of the AI system; its intended use; a benefit assessment addressing potential impacts on consumers, privacy and public safety; a risk mitigation plan; and proof of compliance with applicable federal AI laws.
NAIC’s Big Data & AI Working Group Exposes AI Systems Evaluation Tool
On July 7, 2025, the National Association of Insurance Commissioners’ Big Data and Artificial Intelligence Working Group exposed a draft AI Systems Evaluation Tool for public comment. The tool is intended to assist regulators in identifying and assessing AI-related financial and consumer risks arising from a regulated entity’s use of AI systems. It is meant to supplement existing market conduct, financial and other exams, and it includes four optional exhibits for use in assessing: (1) how many models a regulated entity has and their intended use cases; (2) the entity’s governance framework; (3) high-risk models; and (4) the sources and types of data used in a particular model. Comments are due by September 5, 2025.
URAC to Launch AI Accreditation Programs for Health Care in Late 2025
URAC, a major nonprofit health care accreditor, is developing its first AI accreditation programs, with an anticipated release in Q3 2025. The current vision is to offer separate AI accreditation programs for clinical users of AI tools and for AI developers, with the ultimate goal of having an independent, unbiased third party ensure that clinical users of AI and AI developers follow best practices and standards for AI in health care. While the two accreditation programs will share some standards in common, each will also contain unique requirements; for example, the AI developers’ program is expected to address what information should come out of AI assurance labs or other bodies that vet algorithms before they are implemented in the real world.
Federal Court Rules Meta’s Use of Copyrighted Books for AI Training Is Fair Use
On June 25, 2025, U.S. District Judge Vince Chhabria of the Northern District of California granted summary judgment to Meta in Kadrey v. Meta Platforms, ruling that the company’s use of copyrighted books to train its Llama AI models constituted fair use under copyright law. The court found that Meta’s copying was “highly transformative” because it served the distinct purpose of training innovative AI tools rather than simply reproducing the original works for entertainment or education.
While acknowledging that AI training could potentially harm authors’ markets by enabling the creation of competing works, the court emphasized that the 13 plaintiff authors failed to present meaningful evidence of actual market dilution or harm from Meta’s specific use of their books. The ruling is limited to these particular plaintiffs and does not establish blanket protection for AI companies, with the judge noting that “in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission” where proper evidence of market harm is presented.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.