Faegre Drinker Biddle & Reath LLP, a Delaware limited liability partnership | This website contains attorney advertising.
March 04, 2026

Attorney-Client Privilege, US and UK Guidance for Financial Services, and Other AI Developments

Artificial Intelligence Briefing

This month’s briefing covers a US district court’s decision that using consumer AI tools to analyze legal exposure creates discoverable documents, as well as the dispute between Anthropic and the Pentagon. Meanwhile, the US Treasury released two resources to guide AI use in financial services, and the Bank of England published a summary of the outcome of three roundtables on AI developments for the financial sector. Read on for a deeper dive into these and more key updates.

Regulatory, Legislative & Litigation Developments

SDNY: AI Chat Logs Not Protected by Privilege

In United States v. Heppner, the US District Court for the Southern District of New York held that a defendant’s communications with a generative AI platform (Claude) are not protected by attorney-client privilege or the work product doctrine.

The Ruling

Judge Jed S. Rakoff rejected the defendant’s privilege claims on four primary grounds:

  1. No Attorney-Client Relationship: Privilege requires a relationship with a licensed professional; AI does not qualify.
  2. No Confidentiality: Public AI privacy policies (like Anthropic’s) expressly reserve the right to share data with third parties and government authorities.
  3. Voluntary Use: The defendant acted on his own initiative, and the AI platform explicitly disclaimed providing legal advice.
  4. Independent Preparation: The work-product doctrine protects an attorney’s mental processes, not documents prepared independently by a defendant and later shared with counsel.

Open Questions

The court notably limited its ruling to consumer-grade AI used without counsel’s direction. Judge Rakoff hinted that enterprise AI tools with strict contractual confidentiality might support a different analysis. Furthermore, the court left open whether AI use specifically directed by counsel might qualify for protection, stating: “Had counsel directed Heppner to use Claude, Claude might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege.”

Key Takeaway

Using consumer AI tools to analyze legal exposure creates discoverable documents. Organizations should audit AI policies, restrict the input of sensitive data into unsecured systems, and ensure any AI use in a legal context is strictly directed by counsel.

Anthropic-Pentagon Dispute Culminates in Federal Ban and Surge in Public Support

On February 27, 2026, President Trump ordered all federal agencies to stop using Anthropic’s AI technology, and Defense Secretary Pete Hegseth designated the company a “supply chain risk” to national security — a label historically reserved for US adversaries and never before publicly applied to an American company. The dispute arose after Anthropic insisted on two contractual restrictions to the Pentagon’s use of its AI model, Claude: no mass domestic surveillance of Americans and no fully autonomous weapons. Anthropic stated that “no amount of intimidation or punishment from the Department of War” would change its position and announced it would challenge the designation in court. In a show of public support, Claude rose to the No. 1 position on Apple’s list of top U.S. free apps over the weekend following the government’s actions.

Treasury Releases AI Lexicon and Risk Management Framework for Financial Sector

On February 19, 2026, the US Department of the Treasury released two resources to guide AI use in the financial sector: a shared Artificial Intelligence Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF). The AI Lexicon establishes common definitions for key AI concepts, capabilities, and risk categories to enable clearer communication across regulatory, technical, legal, and business functions. The FS AI RMF adapts the National Institute of Standards and Technology (NIST) AI Risk Management Framework to operational, regulatory, and consumer protection considerations specific to the financial services sector, providing scalable tools to help institutions evaluate AI use cases and manage risks across the AI lifecycle. Developed through public-private collaboration via the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council’s Artificial Intelligence Executive Oversight Group, these resources are intended to support the president’s AI Action Plan by translating national AI priorities into practical implementation tools for financial institutions, regulators, and technology providers.

AI Companies Seeking to Exert Political Influence 

AI companies concerned with whether and how Congress and the states may attempt to regulate artificial intelligence are contributing heavily to political action committees in an effort to protect their interests and worldview. OpenAI, which favors less regulation, contributed approximately $50 million to the super PAC Leading the Future. Anthropic, which favors more regulation, has contributed $20 million to Public First Action, a nonprofit that funds two super PACs. Meta has started two super PACs and recently announced plans to fund two more, all of which back candidates who promote policies that facilitate the development of artificial intelligence. These actions underscore the importance of developing legislation that strikes an appropriate balance between preventing harm and promoting innovation, and indicate that AI companies may play a significant role as this year’s midterm elections draw closer.

State Legislatures Advance Chatbot Regulation

State legislatures in Oregon, Utah, Virginia, and Washington have advanced bills targeting developers and deployers of AI chatbot services, focusing on data protection, transparency, and safety design requirements. Utah’s Companion Chatbot Safety Act recently passed the Utah House of Representatives and is headed to the Utah Senate; it would link certain chatbot practices to existing consumer privacy law. California and other states are drafting measures related to digital content and AI training data disclosures, which may affect market access and compliance across state lines. In the absence of comprehensive federal AI legislation, an expanding patchwork of state-level regulations is emerging, requiring ongoing monitoring and adaptation by AI developers and deployers. As more states pursue targeted legislation, companies offering AI chatbot services may encounter increased regulatory complexity and operational challenges.

Insurance Regulators to Pilot AI Systems Evaluation Tool

The National Association of Insurance Commissioners’ Big Data and AI Working Group has launched a pilot of its AI Systems Evaluation Tool. California, Colorado, Connecticut, Florida, Iowa, Louisiana, Maryland, Pennsylvania, Rhode Island, Vermont, Virginia, and Wisconsin will participate in the pilot, which is expected to run through September 2026. During the pilot, insurers may receive inquiries from regulators, particularly in the market conduct and financial examination context, about their use of AI and third-party data.

Joint Statement on AI Imagery and the Protection of Privacy Signed by 61 Regulators Worldwide

On February 23, 2026, sixty-one national privacy regulatory authorities, coordinated by the Global Privacy Assembly’s (GPA) International Enforcement Cooperation Working Group (IEWG), issued a joint statement on AI imagery and the protection of privacy. The authorities expressed concern about the use of AI to generate identifiable images and videos of individuals without their consent or, in some cases, their knowledge, including intimate or defamatory depictions used to bully or exploit vulnerable individuals. The group stated its intention to work collaboratively on enforcement and policy development. It also reminded organizations of the importance of the following fundamental principles, irrespective of jurisdiction: implementing robust safeguards, ensuring meaningful transparency, providing effective and accessible mechanisms for content removal, and addressing specific risks to children.

AI Developments in the Financial Sector: Summary of Bank of England’s AI Roundtables with Banks and Insurers 

On February 16, 2026, the Bank of England published a summary of the outcome of three roundtables it held in late 2025 with regulated firms to discuss the implications of AI developments for the financial sector. The key findings included general support for the UK’s principles-based, outcomes-based approach to regulation, along with encouragement for the Bank of England to push for greater international coordination to reduce cross-border compliance costs. Firms noted factors delaying AI deployment, including risk-averse management and the need to comply with data protection laws and data location requirements. Participants also discussed the challenges of procuring third-party AI that meets compliance requirements for regulated sectors, as well as difficulties in sourcing data of sufficient quality for successful AI deployment, especially in insurance.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.