February 10, 2022

Artificial Intelligence Briefing: Tracking AI Regulation and Legislation

As more organizations use artificial intelligence and algorithms to drive decision-making processes, policymakers are beginning to address concerns about these tools — including their lack of transparency and potential for generating unintended bias and discrimination. In our inaugural artificial intelligence briefing, we provide a rundown of recent AI regulatory and legislative developments from across the U.S. that should be top of mind for any organization using AI or algorithms.

Regulatory and Legislative Developments

  • On Capitol Hill, Sens. Ron Wyden (D-OR) and Cory Booker (D-NJ), along with Rep. Yvette Clarke (D-NY), introduced the Algorithmic Accountability Act of 2022, an update to a 2019 version of the bill that failed to gain traction. Among other things, the updated bill would cover algorithms relating to education and vocational training; employment; utilities and transportation; family planning and adoption; financial services; health care; housing; legal services; and other matters as determined by the Federal Trade Commission (FTC) through rulemaking. The bill would require certain companies using algorithms to conduct impact assessments for bias and other matters and to submit reports to the FTC.
  • Also in Congress, on February 4, the House of Representatives passed the America Creating Opportunities for Manufacturing, Pre-Eminence in Technology, and Economic Strength Act of 2022 (America COMPETES Act), which now heads to the Senate for consideration. The bill includes an amendment offered by Rep. Ayanna Pressley (D-MA) that directs the National Institute of Standards and Technology (NIST) to create a new office dedicated to studying bias in the use of artificial intelligence. In addition, it requires NIST to publish guidance to reduce the disparate impacts that artificial intelligence may have on historically marginalized communities. NIST is already working on an AI Risk Management Framework, which will be the focus of a two-part workshop in March.
  • The Rhode Island General Assembly is considering a bill that would restrict insurers’ use of external consumer data, algorithms and predictive models. The bill, which mirrors the Colorado legislation enacted last year, would direct the Director of Business Regulation, in consultation with the Health Insurance Commissioner, to engage in a stakeholder process and adopt implementing regulations. On February 9, the House Corporations Committee recommended that the bill be held for further study.
  • The New York City Council recently passed a bill regulating the use of AI in employment decisions. When it takes effect on January 1, 2023, the city law will require employers to complete bias audits for any AI process used to screen candidates for employment or promotion, and those audits must be conducted no more than a year before the AI process is used. The law also requires employers to post information about any such AI process on their websites and to give applicants and employees 10 days’ notice, including information about the process, before the AI process may be used. Applicants and employees have the right to opt out of the AI process and to require the employer to use an alternative process to evaluate them.
  • The Illinois General Assembly last year passed amendments to its Artificial Intelligence Video Interview Act, which was originally passed in 2019. The amendments, which took effect January 1, 2022, impose data collection and reporting obligations on “[a]n employer that relies solely upon an artificial intelligence analysis of a video interview to determine whether an applicant will be selected for an in-person interview.” The law requires employers to collect and report annually to the Department of Commerce and Economic Opportunity the race and ethnicity of applicants who are and are not afforded in-person interviews after the use of AI analysis — as well as the race and ethnicity of all hired applicants. The law also requires the Department of Commerce and Economic Opportunity to “analyze the data reported and report to the Governor and General Assembly by July 1 of each year whether the data discloses a racial bias in the use of artificial intelligence.”
  • At the National Association of Insurance Commissioners, the Accelerated Underwriting (A) Working Group exposed an updated draft of its educational paper for comment through February 11. The working group has been developing the paper since last year; it is intended to help regulators better understand how life insurers use accelerated underwriting and makes recommendations for evaluating it. Among other things, the revised draft includes an updated definition of “accelerated underwriting” that specifically references big data and artificial intelligence. The working group will meet at 4 p.m. ET on February 17 to discuss the draft and any comments received.

What We’re Reading

  • AI and life insurers: Azish Filabi and Sophia Duffy of the American College of Financial Services authored a recent paper regarding AI use by life insurers. Among other things, the paper proposes a self-regulatory organization that can work with the National Association of Insurance Commissioners to develop standards and oversee certification and audit processes.
  • Transatlantic AI regulation: A recent article from the Brookings Institution summarizes AI regulatory developments in the EU and the U.S. and outlines “steps that these leading democracies can take to build alignment on curtailing AI harms.”

Key Upcoming Events

  • February 17: Colorado Stakeholder Session on SB 169, which restricts insurers’ use of external data, algorithms and predictive models.
