January 28, 2022

Regulation of Artificial Intelligence Heats Up

2021 ended with several significant developments in the regulation of artificial intelligence (AI), including:

  • District of Columbia Attorney General Karl A. Racine introduced legislation in the D.C. Council aimed at discriminatory or biased algorithms. The bill specifically targets algorithms that limit “important life opportunities” involving insurance, credit, education, employment, housing and public accommodations. Among other things, the bill would require that companies inform consumers about what personal information is collected and how that information is used to make decisions, require annual algorithmic audits and reports, impose penalties for non-compliance and create a private cause of action.
  • The Federal Trade Commission announced that it is considering rules to ensure that algorithmic decision-making does not result in unlawful discrimination. The announcement was the latest sign of the FTC’s keen interest in AI. Earlier in 2021, the FTC provided guidance on how to use AI truthfully, fairly and equitably and hired AI experts to advise the agency on emerging issues. In addition, FTC Commissioner Slaughter published an article in the Yale Journal of Law & Technology titled “Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission.” We expect more AI news from the FTC in 2022.
  • The National Telecommunications and Information Administration announced plans to convene three listening sessions and publish a report on the ways that commercial data flows of personal information can lead to disparate impact and outcomes for marginalized or disadvantaged communities. (The public notice specifically called out personal information being used by the insurance industry.) Assistant Attorney General Kristen Clarke delivered the keynote at the first listening session, noting that the DOJ’s Civil Rights Division is “particularly concerned about how the use of algorithms may perpetuate past discriminatory practices by incorporating, and then replicating or ‘baking in,’ historical patterns of inequality.”
  • Looking ahead, the Colorado Insurance Division will soon kick off the stakeholder sessions required by Senate Bill 169, which restricts insurers’ use of external consumer data, algorithms and predictive models. We expect the sessions to begin with life insurance, followed by health insurance (especially wellness programs). Any rules developed through this process will not take effect until January 1, 2023, at the earliest.

We continue to monitor these and other AI-related developments.
