Significant new guidance from the Department of Justice (DOJ) and Equal Employment Opportunity Commission (EEOC) advises employers that use of AI and algorithmic decision-making systems in employment-related decisions may violate the Americans with Disabilities Act. In other AI news, automated decision-making and algorithmic bias became focal points at three major industry conferences held in the past month, as industry leaders work to get ahead of the rising tide of regulations targeting AI.
Regulatory, Legislative & Litigation Developments
- Federal agencies address using AI technology in employment-related decisions. On May 12, the EEOC and DOJ issued separate guidance focusing on how employers using AI and algorithms may adversely impact individuals with disabilities and violate the Americans with Disabilities Act. The documents (which our Faegre Drinker labor and employment colleagues analyze in greater depth in this alert) also provide best practices that should be top of mind for employers using technology in personnel decisions or performance evaluations.
- Congressional hearing highlights the importance of confronting algorithmic bias. The House Financial Services Committee’s Task Force on Artificial Intelligence convened a hearing on May 13 entitled “Keeping Up with the Codes – Using AI for Effective RegTech.” Deputy Comptroller for Operational Risk Policy Kevin Greenfield testified regarding the Office of the Comptroller of the Currency’s regulatory and compliance expectations for banks using AI. He noted that “AI may perpetuate or even amplify bias,” and that the “potential for unintended or illegal outcomes increases the importance of enhanced understanding, monitoring and review of AI systems that are used for customer-focused activities such as credit underwriting.”
- Facial recognition company reaches settlement with ACLU. The facial recognition software company Clearview AI agreed to a permanent, nationwide ban on making its database of 20 billion facial photos available to most businesses and other private entities. The ban is part of a May 9 settlement between Clearview AI and the American Civil Liberties Union (ACLU), which, along with other groups, alleged that the company had violated the Illinois Biometric Information Privacy Act (BIPA). BIPA was enacted in 2008 to ensure that Illinois residents would not have their biometric identifiers (such as fingerprints, faceprints and retinal scans) captured and used without their knowledge and permission. Clearview AI will remain free to sell its database to most federal and state agencies.
- The 17th Annual Insurance Public Policy Summit touched on issues related to algorithmic decision-making by insurers, as part of a far-ranging discussion on how to promote innovation and protect consumers. Maryland Insurance Commissioner Kathleen Birrane, who chairs the NAIC’s Innovation, Cybersecurity and Technology (H) Committee, discussed plans for the H Committee’s Collaboration Forum and its focus on algorithmic bias. Among other things, Commissioner Birrane highlighted concerns about data sources that may be predictive of loss but are also highly correlated with protected class status. Indiana State Rep. Matt Lehman (R), who serves as majority floor leader in the Indiana House of Representatives and is the immediate past president of the National Council of Insurance Legislators (NCOIL), focused on the challenges that insurance scores pose for agents and consumers. Rep. Lehman discussed his proposed Insurance Underwriting Transparency Model Act, which NCOIL is considering.
- The Geneva Association Conference held a session on responsible use of data, AI and algorithms. The Geneva Association, the insurance industry’s leading international think tank, held its second annual New Technologies and Data Conference on May 5. During a session entitled “Using Data Responsibly for Innovation – How can insurers strike the right balance?” panelists discussed ways in which insurers can use data responsibly, including in artificial intelligence and algorithms. Chaouki Boutharouite (AXA) noted the importance of having an AI governance framework in place and the need for human control and oversight of what data is collected, how it is collected, and how models are trained, used and perform. He also said that identifying and eliminating biases remains a challenge that companies are focused on, with no one-size-fits-all solution; at this stage, Boutharouite believes the best way to fight bias is making sure the data is diverse, mature and collected in the right way.
- Artificial intelligence took center stage at the International Association of Privacy Professionals (IAPP) Global Privacy Summit. The global gathering of privacy professionals featured a keynote from Federal Trade Commission (FTC) Chair Lina Khan, in which she confirmed her belief that the FTC has the existing enforcement and rulemaking tools to address various privacy concerns, including harms that can result from “powerful cloud storage services and automated decision-making systems” that enable “stunningly detailed and comprehensive user profiles that can be used to target individuals with striking precision.” Chair Khan also highlighted recent enforcement actions by the FTC that require deletion of ill-gotten data and disgorgement of any algorithms trained with that data, and that prohibit individual executives from participating in certain industries following an enforcement action. Chair Khan’s comments, while short on specific priorities, confirmed that the FTC is likely to be an active regulator and rulemaker regarding data practices and the use of algorithms. Her comments come against the backdrop of the FTC’s notice to the Office of Management and Budget last fall that it is exploring rulemaking options to, in part, “ensure that algorithmic decision-making does not result in unlawful discrimination.” “What’s at stake,” Chair Khan said, “with these business practices . . . is one’s freedom, dignity, and equal participation in our economy and society.”
- The IAPP Global Privacy Summit also featured discussion of the European Commission’s 2021 Artificial Intelligence Act. The Act, which speakers expect to be finalized no later than mid-2023, would be a significant development in the regulation of AI. The pending regulation includes a prohibition on AI manipulation leading to decontextualized or unjustifiably detrimental social scoring; design obligations (e.g., human oversight and risk assessment) for credit scoring and AI used in critical infrastructure; transparency obligations for deepfake and emotion-recognition technology; and voluntary codes of conduct for residual impacts.
What We’re Watching
- Confirmation of FTC Commissioner: The Senate’s confirmation of Alvaro Bedoya to the Federal Trade Commission gives Democrats a 3-2 majority that may pave the way for rules governing the use of artificial intelligence and algorithmic decision-making.
Key Upcoming Events