March 18, 2022

Artificial Intelligence Briefing: CFPB, NIST Take Aim at AI Issues

Our latest briefing unpacks important developments at the Consumer Financial Protection Bureau (CFPB) and the National Institute of Standards and Technology (NIST) and other AI-related news at the state and industry level.

Regulatory and Legislative Developments

  • The CFPB released an announcement on March 16 taking aim at unfair discrimination in consumer finance. The CFPB said it will “closely examine financial institutions’ decision-making in advertising, pricing and other areas to ensure that companies are appropriately testing for and eliminating illegal discrimination.” To that end, the Bureau also said that “CFPB examiners will require supervised companies to show their processes for assessing risks and discriminatory outcomes, including documentation of customer demographics and the impact of products and fees on different demographic groups.” This new point of emphasis should be top of mind for financial institutions — and another signal that automated decision-making processes are likely to face increased scrutiny from the CFPB.
  • NIST has released a draft of its AI Risk Management Framework. The draft framework identifies risks related to AI systems and offers a process for managing those risks. In addition, NIST has published a paper, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (NIST Special Publication 1270), which explores how AI bias can result from human biases as well as systemic, institutional biases. NIST is hosting a workshop March 29-31 to further its development of the risk management framework.
  • The New York Department of Financial Services (NYDFS) is seeking information on insurer use of personal credit information. The NYDFS has requested that insurers writing private passenger auto, commercial auto and homeowners insurance provide information regarding their use of credit and insurance scores. Among other things, NYDFS wants to know if companies have performed “any independent or other analysis to ensure that the use of credit and insurance scores for initial tier placement is not a proxy for any other prohibited variables, such as income.” In other news, New York Superintendent Adrienne A. Harris will serve as one of the co-chairs for the NAIC’s Big Data and Artificial Intelligence Working Group.

Industry Activity

  • Digital health stakeholders convened at the inaugural ViVe conference — billed as a “new health information technology event focused on the business of health care systems” — and discussed key themes about the use of data and data science in health care. The conference, held March 6-9 in Miami, brought together health care, digital health, and technology executives and leaders, along with health startups and government officials, to discuss innovations and advancements in digital health. Programming grappled with topics such as using technology to increase access to care; leveraging data, algorithms and AI to improve patient outcomes and reduce clinician burnout; data security; interoperability; policy and regulatory developments; and “techquity.” Among the many issues discussed, one key takeaway loomed large over all sessions: data, algorithm-based solutions, and artificial intelligence are destined to become a fixture in the future of health care. Over the next decade and beyond, industry leaders, care providers and regulators will be tasked with finding ways to encourage technological innovation and deploy health technology solutions while ensuring patient safety, data security and privacy.
  • Insurance industry regulators and standard-setters weighed in on artificial intelligence at the Geneva Association’s Programme on Regulation and Supervision (PROGRES) Seminar. The seminar included a panel on AI featuring Petra Hielkema (Chair of EIOPA) and Maryland Insurance Commissioner Kathleen Birrane (Chair of the NAIC Innovation, Cybersecurity, and Technology (H) Committee). Hielkema outlined the EU Artificial Intelligence Act, which would establish harmonized rules on AI throughout the EU, including assigning applications of AI to risk categories. Although we won’t see finalized language until the end of 2023 at the earliest, this bears watching, as it will likely influence thinking in the U.S. Commissioner Birrane laid out the charges of the H Committee, focusing on the work plans that the H Committee will roll out in April for each of its working groups. She reported that the Big Data and Artificial Intelligence Working Group will continue in full force, fleshing out the AI principles and developing a more precise framework around those guidelines, including a look at third-party vendors. The H Committee also will establish a single group to identify implicit bias in algorithms; that work is currently interspersed among several groups, and she intends to consolidate it into one collaborative group.
