February 02, 2026

US and International Developments in Artificial Intelligence

Artificial Intelligence Briefing

This month’s briefing covers the European Data Protection Board and European Data Protection Supervisor’s joint opinion on the proposed Digital Omnibus on AI, as well as the UK House of Commons Treasury Committee’s warning against taking a passive approach to the rapid adoption of AI by financial services firms. Meanwhile, a California putative class action against a recruiting software company alleges it operated as a “consumer reporting agency” to score and rank candidates, illustrating how existing laws that predate AI use apply to AI tools and can lead to vendor and employer liability. Read on for a deeper dive into these and more key updates.

Regulatory, Legislative & Litigation Developments

EU Data Authorities Support AI Act Simplification but Call for Stronger Safeguards

The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have issued a Joint Opinion on the European Commission’s proposed Digital Omnibus on AI, which aims to simplify the implementation of certain harmonised rules under the AI Act. While supporting the simplification efforts, the Opinion warns that administrative streamlining must not come at the expense of fundamental rights protections. In particular, the authorities caution against the proposed relaxation of registration requirements for certain high-risk AI systems (cited as a threat to transparency and accountability) and the postponement of core provisions for high-risk AI systems. The Opinion also recommends clarifying the role of the AI Office and ensuring that the proposal does not undermine independent supervision of EU institutions’ use of AI systems.

UK MPs Challenge Regulators on AI Risks in Financial Services

The UK House of Commons’ cross-party Treasury Committee has warned the Treasury, Bank of England, and Financial Conduct Authority that they are exposing consumers and the financial system to “potentially serious harm” by taking a largely passive, “wait-and-see” approach to the rapid adoption of AI by financial services firms. The committee’s report highlights that while “AI offers important benefits,” current technology-agnostic rules may not adequately address novel risks such as the lack of transparency in AI-driven decisions and unregulated AI-generated financial advice. MPs called for more proactive action, including bespoke stress tests for AI-driven systems and clearer regulatory guidance on accountability and consumer protection, criticising the current framework as insufficiently prepared for a major AI-related incident. Regulators have acknowledged the concerns and aim to address them fully later in 2026.

South Korea’s New AI Laws

Strict AI laws took effect on January 22, 2026, in South Korea under the Act on the Development of Artificial Intelligence and Establishment of Trust. These laws impact both AI developers and companies that use AI in their operations. For example, “high impact” AI systems in areas such as nuclear safety, transportation, health care, and financial screening require risk management plans, human oversight, and advance notice to users. Similar to the EU’s AI Act, companies can be penalized for violating these requirements. With these regulations, South Korea looks to be positioning itself as a hub for safe AI growth, though companies are concerned the laws will stifle innovation.

Lawsuit Alleges AI Hiring Tool Violates FCRA

A newly filed putative class action against Eightfold AI (Erin Kistler, et al. v. Eightfold AI, Inc.) alleges that Eightfold operated as a “consumer reporting agency” under the Fair Credit Reporting Act (FCRA) and a similar California state law by collecting and analyzing vast amounts of applicant personal data — far beyond what applicants themselves submit — to score and rank candidates on a numerical scale used by employers in hiring decisions, without providing the FCRA-required pre-use disclosure, consent, access, or dispute rights. The case illustrates how existing laws that predate the use of AI apply to AI tools and can lead to vendor and employer liability. If these claims succeed, the case could impact how organizations approach several critical issues, including: (i) FCRA compliance for AI tools; (ii) vendor due-diligence protocols; (iii) contractual risk allocation with technology providers; and (iv) whether AI-generated applicant profiles fall under existing employment and consumer protection laws.

Utah’s AI Pharmacy Prescription Pilot

Utah has launched a pilot program with health-tech startup Doctronic to allow an AI system to handle prescription renewals for patients with chronic conditions. The pilot program, covering 190 commonly prescribed medications, is being operated through Utah’s Office of Artificial Intelligence Policy’s regulatory sandbox. According to Doctronic, the AI is designed to automatically escalate cases to a physician if there is uncertainty regarding the prescription. Additionally, doctors will review the first 250 prescriptions issued in each medication class to validate the AI system’s performance before the AI system handles prescription renewals autonomously. The use of AI systems to perform prescription renewals has raised questions about whether the AI system will be treated as the practice of pharmacy subject to state regulation or as a medical device that requires FDA approval.

Colorado Division of Insurance Expects to Advance Testing Regulation This Year

The Colorado Division of Insurance has resumed its work with Cathy O’Neil (O’Neil Risk Consulting & Algorithmic Auditing) to continue developing its draft algorithm and predictive model quantitative testing regulation. The Division hopes to finalize the testing regulation for life insurance by mid-2026. A version applicable to private passenger auto insurance is also in the works, but it’s unclear when we might see a draft.

NAIC Resumes Work on AI Matters

The National Association of Insurance Commissioners is resuming its work on AI matters, as three calls have been scheduled in February 2026 to discuss the draft AI Systems Evaluation Tool and proposed Risk-Based Regulatory Framework for Third-Party Data and Model Vendors. We understand that Wisconsin Insurance Commissioner Nathan Houdek will chair the NAIC’s Big Data and AI Working Group, taking the reins from Pennsylvania Commissioner Mike Humphreys. Houdek previously chaired the (now disbanded) Accelerated Underwriting Working Group, which developed guidance for insurance departments to use when reviewing life insurers’ use of accelerated underwriting programs.

Artificial Intelligence Focus of Recent Discussions at Davos

At the World Economic Forum’s annual meeting in Davos-Klosters, Switzerland, CEOs of several technology companies, including Verizon and IBM, said that artificial intelligence will likely result in significant job losses and outlined programs they plan to implement to help workers find new training and job placements. The CEO of Cognizant said, “We have to reinvent processes and amplify the potential of humans rather than eliminate work.” President Trump gave a speech in which he emphasized the need to open more energy plants and build infrastructure to provide the power needed for data centers. Meanwhile, leaders of companies with artificial intelligence models, such as OpenAI and Microsoft, highlighted the rapid growth of their technologies.