Our latest briefing explores the FTC’s push to address “commercial surveillance” and data security, proposed rulemaking from HHS prohibiting the use of discriminatory clinical algorithms, a move by the CFPB to hold digital marketers accountable for unfair or deceptive practices, and a new partnership between the NLRB and FTC.
Regulatory and Legislative Developments
- The Federal Trade Commission is considering rules that would address “commercial surveillance” and data security. In a move that could have far-reaching impacts on organizations that collect, analyze or use consumer data, the FTC issued an advance notice of proposed rulemaking on August 11 seeking public comment on 95 data-related questions, 19 of which deal with automated decision-making and algorithmic discrimination. According to Chair Lina Khan’s statement regarding the ANPR, the public’s input will “inform whether rulemaking is worthwhile and the form that potential proposed rules should take.” Interested parties have 60 days to comment, including by participating in a virtual public forum scheduled for September 8.
- HHS Proposes Rule Prohibiting Use of Discriminatory Clinical Algorithms. On August 4, the Department of Health and Human Services (HHS) issued a notice of proposed rulemaking on Section 1557 of the Affordable Care Act, which prohibits discrimination on the basis of race, color, national origin, sex, age or disability in various health programs and activities. Proposed section 92.210 would prohibit covered entities from using discriminatory clinical algorithms. HHS notes, however, that covered entities would not be liable for clinical algorithms they did not develop, but rather for their own decisions made in reliance on such algorithms. HHS highlighted recent research concerning the prevalence of clinical algorithms that may result in discrimination. The Department “strongly cautioned” covered entities that over-relying on such algorithms in healthcare decision-making, such as by replacing or substituting clinical judgment with an algorithm, risks violating Section 1557 if the decision “rests upon or results in” discrimination. HHS also noted that complaints alleging discrimination resulting from the use of a clinical algorithm would trigger a fact-specific analysis that considers, among other things, the decisions and actions the covered entity took in reliance on the algorithm and the measures it took to ensure those decisions and actions were not discriminatory. Comments on this proposed rule must be submitted on or before October 3, 2022.
- Consumer Financial Protection Bureau puts digital marketers on notice. On August 10, the CFPB issued an interpretive rule aimed at digital marketers that “commingle the targeting and delivery of advertisements to consumers, such as by using algorithmic models or other analytics, with the provision of advertising ‘time or space.’” Under the rule, digital marketers that provide such services to financial services companies covered by the Consumer Financial Protection Act will themselves be subject to the Act, including its prohibition on unfair, deceptive or abusive acts or practices.
- The CFPB took action against financial technology company Hello Digit LLC for deceiving consumers about its automated savings application. The CFPB imposed a $2.7 million fine and required the company to reimburse consumers for financial losses they incurred due to Hello Digit’s faulty algorithm. Although the app was advertised as a tool to help people save money, the CFPB asserts that consumers instead paid unnecessary overdraft fees, which Hello Digit failed to reimburse as promised. Rather than limiting transfers to amounts available in consumers’ checking accounts, the algorithm-powered application routinely overdrew accounts and triggered overdraft fees, drawing complaints to the company on a daily basis. The company also reportedly kept a significant amount of the interest earned from holding consumer funds, contrary to its advertising.
- National Labor Relations Board and Federal Trade Commission execute Memorandum of Understanding to promote fair competition and advance workers’ rights. On July 19, 2022, the NLRB and FTC formalized a partnership between the agencies that, among other things, will seek to protect worker rights from algorithmic decision-making. This is the most high-profile instance of the NLRB identifying algorithmic decision-making as something that could impact employee rights protected by the National Labor Relations Act. Employers with organized workforces (or workforces that could be the target of union organizing) should be aware of this development and the NLRB’s growing cooperation with the FTC.
- Insurance regulators maintain their focus on algorithmic bias. At last week’s meeting of the National Association of Insurance Commissioners, the Collaboration Forum on Algorithmic Bias featured presentations on AI risk management and governance (Scott Kosnoff, Faegre Drinker), bias detection methods and tools (Eric Krafcheck, Milliman), and approaches that insurers can take to manage and mitigate the risk of unintended bias and unlawful discrimination when developing and using AI and machine learning (Dale Hall, Society of Actuaries; Tulsee Doshi, Lemonade; and Daniel Schwarcz, University of Minnesota). The Collaboration Forum held regulator-only sessions in Kansas City last month and will hold additional public sessions at the NAIC’s Insurance Summit in September.
What We’re Reading
- POLITICO: "Artificial intelligence was supposed to transform health care. It hasn’t."
- Health Affairs: "Predicting Race and Ethnicity to Ensure Equitable Algorithms for Health Care Decision Making"
Key Upcoming Events
- FTC’s Commercial Surveillance and Data Security Public Forum (September 8, 2022)