Dozens of senators kicked off a series of closed-door sessions, known as the A.I. Insight Forum, to hear from key industry stakeholders and discuss AI regulation. Will they be able to reach consensus on the scope of future legislation? We discuss this, a major setback for the CFPB, a call from the G20 to harness the power of AI “responsibly for the good of all” and more in the latest briefing.
Regulatory and Legislative Developments
- A.I. Insight Forum Kicks Off. On September 13, more than 60 senators met in a closed-door session to discuss AI regulation. The session was the first of an educational series known as the A.I. Insight Forum, with senators hearing from big tech, unions, civil rights advocates and others. Senate Majority Leader Chuck Schumer (D-N.Y.) remains committed to passing AI legislation, and all of the tech leaders in attendance reportedly agreed that regulation is necessary. But achieving consensus on the scope and specifics of legislation will be difficult.
- Major Setback for CFPB. On September 8, a federal district court in Texas held that the Consumer Financial Protection Bureau's (CFPB) treatment of discrimination as an "unfair practice" under the Dodd-Frank Act exceeds the bureau's statutory authority. The case involved the CFPB's announcement last year that it would begin examining whether banks and other regulated entities were appropriately testing their data and algorithms for discrimination. In addition to being a major setback for the CFPB, the court decision also raises questions for the Federal Trade Commission, which has treated discrimination as an unfair practice under the Federal Trade Commission Act. The CFPB is reportedly considering an appeal.
- CFPB Issues Guidance on Credit Denials by Lenders Using AI. New guidance issued by the CFPB cautions creditors against relying on the CFPB’s sample forms to satisfy their adverse action notification requirements under the Equal Credit Opportunity Act (ECOA) when using artificial intelligence. Under ECOA and Regulation B, a lender must provide a credit applicant with a statement of specific reason(s) for an adverse action; these reasons must “relate to and accurately describe the factors actually considered or scored by a creditor.” Many lenders provide statements of reason(s) based on sample forms provided by the CFPB. However, according to the new guidance, “a creditor will not be in compliance with the law by disclosing reasons that are overly broad, vague, or otherwise fail to inform the applicant of the specific and principal reason(s) for an adverse action,” which may be more likely when lenders make credit decisions based on “complex algorithms [that] rely on data that are harvested from consumer surveillance or data not typically found in a consumer’s credit file or credit application.”
- Federal Trade Commission Continues Focus on Data and AI. On September 21, FTC Bureau of Consumer Protection Director Samuel Levine expressed concerns about the collection, sale and use of consumer data in remarks delivered at the Consumer Data Industry Association Law & Industry Conference. Levine said that the “business model that has led to the creation of detailed digital dossiers on almost every American” threatens consumer privacy, economic participation and constitutional liberties. He also noted that the FTC has initiated rulemaking regarding commercial surveillance and lax data practices and said that the agency is still reviewing the 11,000 comments it received at last year’s public hearing. At a Senate confirmation hearing on September 20, FTC Commissioner Rebecca Slaughter (who is up for reconfirmation) and Republican nominees Andrew Ferguson and Melissa Holyoak agreed that the law governing unfair and deceptive practices applies to AI but that new federal legislation would be needed to regulate AI more broadly.
- White House Secures Additional Commitments. On September 12, the White House announced that it had secured a second round of voluntary commitments from eight AI companies regarding the responsible development of artificial intelligence. The commitments — which focus on safety, security and trust — were touted as an “important bridge to government action.” The administration is continuing work on an Executive Order that it says will “protect Americans’ rights and safety.”
- Colorado Adopts AI Governance Regulation. It's official — the Colorado Division of Insurance has formally adopted its governance and risk management regulation for life insurers that use external consumer data (including in connection with their algorithms and predictive models). Life insurers doing business in Colorado are now on the clock and have until December 1, 2024, to get their risk management framework in place. (An interim progress report is due to the Division on June 1, 2024.) Don’t wait until the last minute on this one; it’s going to be a heavy lift.
- Stakeholders Comment on NAIC Model Bulletin. The National Association of Insurance Commissioners Innovation, Cybersecurity and Technology (H) Committee has released the extensive comments received on its Model Bulletin on the Use of Algorithms, Predictive Models, and AI Systems by Insurers. Industry comments focused primarily on broad definitions and third-party vendor provisions, while consumer representatives criticized the “meek” approach to unfair discrimination and the absence of transparency guidelines. Notably, five states (Colorado, Indiana, Missouri, New York and Virginia) and the U.S. Chamber of Commerce also submitted comment letters; the states’ comments may signal where they are headed on AI regulatory policy.
- Leaders of the G20 Issue Call to Harness AI “Responsibly for the Good of All.” At the G20 in India, leaders of the world’s largest economies issued a statement recognizing “the rapid progress of AI,” which “promises prosperity and expansion of the global digital economy.” While recognizing the various challenges posed by AI, the New Delhi Leaders’ Declaration committed to leveraging AI “for the public good by solving challenges in a responsible, inclusive and human-centric manner, while protecting people’s rights and safety.” The statement further called for international cooperation on AI governance, including reaffirming the G20 AI Principles (2019), pursuing a pro-innovation regulatory/governance approach that maximizes the benefits and takes into account the risks associated with the use of AI, and promoting responsible AI to achieve sustainable development goals.