February 01, 2023

Artificial Intelligence Briefing: NIST Releases AI Risk Management Framework and Playbook

Our latest briefing dives into the public launch of NIST’s long-awaited AI Risk Management Framework, the EEOC’s new plan to tackle AI-based discrimination in recruitment and hiring, and the New York Department of Financial Services’ effort to better understand the potential benefits and risks of AI and machine learning in the life insurance industry.

Regulatory and Legislative Developments

  • NIST Releases AI Risk Management Framework: On January 26, the National Institute of Standards and Technology (NIST) released its long-awaited AI Risk Management Framework (AI RMF) during a public launch event. The AI RMF is intended to help organizations operationalize responsible AI by improving their ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI systems. NIST also released a companion resource, the Playbook, to help organizations navigate the AI RMF; it includes actionable suggestions for achieving the framework’s goals. NIST is accepting comments on the draft Playbook through February 27, with a revised Playbook to be released in spring 2023. Other resources released with the AI RMF include the Explainer Video, Roadmap, Crosswalk, and Perspectives.
  • Representatives Seek Transparency into Blueprint for an AI Bill of Rights: Rep. Frank Lucas (R-OK, Chairman of the Committee on Science, Space, and Technology) and Rep. James Comer (R-KY, Chairman of the Committee on Oversight and Accountability) issued a letter to the White House Office of Science and Technology Policy (OSTP) requesting insight into the OSTP’s development of the Blueprint for an AI Bill of Rights (Blueprint). Their letter echoes concerns that certain stakeholders voiced about the Blueprint in the fall of 2022. It requests answers to 17 questions regarding, among other things, the identity of the stakeholders involved in developing the Blueprint, whether the OSTP coordinated with the National Institute of Standards and Technology in developing it, and the OSTP’s position on Blueprint provisions that the Representatives believe conflict with NIST’s draft AI RMF. The letter requests a response from the OSTP by January 31, 2023.
  • EEOC’s Draft Enforcement Plan Zeroes in on AI-Based Discrimination: On January 10, the Equal Employment Opportunity Commission published its draft strategic enforcement plan for 2023-27. The top subject matter priority is "Eliminating Barriers in Recruitment and Hiring," which includes "the use of automatic systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups." The EEOC held a public meeting on January 31 — Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier. A recording will be available.
  • NTIA Requests Comments on Privacy, Equity and Civil Rights: The National Telecommunications and Information Administration (NTIA) has issued a request for comment seeking public input on issues at the intersection of privacy, equity and civil rights. The input will be reflected in a forthcoming report on commercial data practices and how they can lead to disparate impacts and outcomes for marginalized or disadvantaged communities. Among other things, the request asks “[a]re there any contexts in which commercial data collection and processing occur that warrant particularly rigorous scrutiny for their potential to cause disproportionate harm or enable discrimination.” Input is specifically requested with respect to insurance, employment, credit, healthcare, education, housing, and utilities.
  • NYDFS Looking into Life Insurers’ Use of AI/ML: The New York Department of Financial Services (NYDFS) has hired consulting firm FairPlay-Sustain Solutions to assist the NYDFS in understanding the potential benefits and harms stemming from the use of artificial intelligence and machine learning models. This work builds on the NYDFS’ 2019 guidance to life insurers regarding their use of external consumer data in accelerated underwriting (Insurance Circular Letter No. 1). It will focus on “improper discrimination stemming from data sources and data interactions, accelerated modeling approaches, and best practices in risk monitoring and mitigation.” Throughout the year, FairPlay-Sustain Solutions will be engaging with life insurers, consumer advocates, data providers, law firms and other jurisdictions; the information collected through those discussions will remain confidential.
  • Justice Department and Meta Implement Settlement Agreement: On January 9, the Justice Department announced that it had reached a key milestone in implementing its settlement agreement with Meta Platforms, Inc. (formerly Facebook). The settlement resolved allegations that Meta's advertising algorithms had discriminated in violation of the Fair Housing Act by taking FHA-protected characteristics into account. Pursuant to the settlement agreement, Meta has developed a new system to address algorithmic discrimination in its ad delivery system and will remain subject to ongoing review and court oversight.
  • Justice Department Files Statement of Interest: On January 9, the Department of Justice filed a statement of interest in a case involving the use of an algorithm-based scoring system for screening potential tenants. The plaintiffs allege that the system violates the Fair Housing Act by discriminating against Black and Hispanic applicants. In announcing the filing, Assistant Attorney General Kristen Clarke of the Justice Department's Civil Rights Division said, “[h]ousing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities.”
  • Health Technology Act of 2023: The House introduced the Health Technology Act of 2023, which would expand the definition of “practitioner licensed by law to administer such drug” to include any “artificial intelligence and machine learning technology that are (A) authorized pursuant to a statute of the State involved to prescribe the drug involved; and (B) approved, cleared, or authorized under section 510(k), 513, 505, or 564.”

