Our latest briefing covers new local laws regulating AI and their effects on the employment and insurance industries, the launch of NIST’s Trustworthy & Responsible Artificial Intelligence Resource Center and its planned expansion, new FDA guidance on cybersecurity and on artificial intelligence/machine learning frameworks, and the Coalition for Health AI’s quality assurance standards for the use of AI in health care and related industries.
Regulatory and Legislative Developments
- NY Local Law 144. The New York City law regulating “automated employment decision tools” was enacted in 2021 with an effective date of January 1, 2023. Because the New York City Department of Consumer and Worker Protection (DCWP) was unable to develop regulations before the effective date, the enforcement date was pushed to April 15, 2023. Now, just nine days before the enforcement date, the DCWP has issued its Final Rule for enforcement. The Final Rule provides additional detail on (1) the definition of an automated employment decision tool, (2) the required bias audit and (3) the requirements for giving notice of use of the tool.
- Colorado is “All About the Outcomes.” On April 6, 2023, the Colorado Division of Insurance held its first stakeholder session to implement SB 21-169 with respect to private passenger auto insurance underwriting and pricing (the stakeholder process for life insurers started over a year ago). Commissioner Michael Conway and Big Data and AI Policy Director Jason Lapham provided background on the legislation and the stakeholder process, while the insurance trade associations and consumer advocates staked out their opening positions. While the consumer advocates raised concerns about traditional underwriting factors that they consider problematic, Commissioner Conway made clear that the Division will focus on outputs first and will drill down into specific factors only if warranted. The Division anticipates holding these sessions every six to eight weeks, with future sessions likely featuring presentations by stakeholders. A governance survey will be issued to select companies during the first half of May. With respect to the life insurer process, the Division continues reviewing feedback on its draft governance regulation. Commissioner Conway said that the yet-to-be-released testing regulation will be even less flexible than the governance regulation, signaling that the Division may want all insurers to follow the same testing methodology.
- NIST Unveils Trustworthy and Responsible AI Resource Center. The National Institute of Standards and Technology (NIST) launched a Trustworthy and Responsible AI Resource Center (AIRC). The AIRC provides access to foundational content and, in the future, will include technical and policy documents, a standards and metrics hub to assist with AI testing, and software tools and resources. The AIRC will also enable distribution of stakeholder content, case studies and educational materials later in its deployment.
- UK Whitepaper on AI Innovation. The UK government recently published its latest proposals to regulate AI in a whitepaper titled “A Pro-innovation Approach to AI Regulation.” Unlike the draft EU AI Act, the UK government proposes to set out the core characteristics of AI – “adaptivity” and “autonomy” – to future-proof the regulatory framework, rather than provide a concrete definition. It also proposes a principles-based regulatory regime based on five core principles: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. There will be central coordination of existing sectoral regulators (e.g., in health care or product safety) rather than a new central regulator. The proposals are open to further feedback until June 21, 2023.
- FDA Cybersecurity Guidance. On March 30, 2023, the FDA issued guidance titled “Cybersecurity in Medical Devices: Refuse to Accept Policy for Cyber Devices and Related Systems Under Section 524B of the FD&C Act.” Effective March 29, 2023, the Federal Food, Drug, and Cosmetic Act (FD&C Act) was amended to include Section 524B, “Ensuring Cybersecurity of Devices.” The guidance outlines the additional cybersecurity information that must be included in marketing submissions for “cyber devices,” including the manufacturer’s plan to monitor, identify and address post-market cybersecurity vulnerabilities. The statute applies to device submissions made after March 29, 2023. The FDA will generally refrain from issuing refuse to accept (RTA) decisions for submissions that do not comply with Section 524B until October 1, 2023 (the Transition Period). Rather than refusing to accept non-compliant submissions, the FDA has committed to working with manufacturers to address deficiencies during the Transition Period.
- FDA Predetermined Change Control Plan Draft Guidance. The FDA issued draft guidance titled “Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions” (Draft Guidance). The Draft Guidance is the FDA’s most recent attempt to develop a regulatory framework for AI/ML-enabled devices and builds on its SaMD Discussion Paper and SaMD Action Plan. The Draft Guidance sets forth the information that would be included in a Predetermined Change Control Plan (PCCP), which would be submitted with a manufacturer’s marketing submission and would limit the manufacturer’s obligation to submit additional marketing submissions for future iterations of the machine learning-enabled device software functions (ML-DSF). The PCCP would allow a manufacturer to outline for the FDA how the ML-DSFs will evolve and change over time, describe a plan for testing the performance of the “changed” device, and explain how the manufacturer intends to alert users to those changes. No additional marketing submissions would be required for modifications described in the PCCP. The FDA is hosting a webinar on the Draft Guidance on April 13, 2023, and is accepting comments on the Draft Guidance through July 3, 2023.
- Blueprint for Trustworthy AI in Healthcare. On April 4, 2023, the Coalition for Health AI (CHAI) released its Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare, which seeks to establish assurance standards on trustworthy AI in health care. The Blueprint, which builds on the NIST AI risk management framework, seeks to drive the responsible adoption of AI technology in health care by educating end-users on how to evaluate AI technologies and harmonizing standards for leveraging AI. The Blueprint also sets forth recommendations to “increase trustworthiness within the healthcare community, ensure high-quality care, and meet healthcare needs.”