The European Parliament approves a revised draft of the EU AI Act, Senate Majority Leader Chuck Schumer lays out plans for groundbreaking AI legislation in the U.S., and the FTC issues a warning that companies that ignore consumers’ rights to control their data do so “at their peril.” We explore these developments and other regulatory updates in the latest briefing.
Regulatory and Legislative Developments
- EU Parliament Approves the AI Act. On June 14, 2023, the European Parliament voted to adopt a compromise position on the draft text of the proposed EU AI Act. The revised draft now includes extensive obligations on developers of foundation models — AI trained on large data sets to accomplish a wide range of downstream tasks. The revised draft also significantly expands the categories of prohibited AI, including prohibitions on indiscriminate scraping of biometric data from social media to create facial recognition databases, and adds further categories of high-risk systems, including systems that influence voters in political campaigns. Maximum penalties are also increased significantly, to up to 7% of global revenue. The draft text will now be subject to a final set of trilogue negotiations during the latter half of 2023.
- Senate Majority Leader Lays Out Approach for AI Legislation. On June 21, Senate Majority Leader Chuck Schumer (D-NY) laid out his proposed approach for enacting groundbreaking AI legislation. It starts with a basic framework that encourages safe innovation and calls for security, accountability, protection of our democratic foundations and (especially) explainability. It also includes a new legislative process for developing policies to implement the framework, starting this fall with a series of AI insight forums.
- President Biden Addresses AI Opportunities and Risks. On June 20, President Biden met with tech leaders in a closed-door session to discuss what he described as AI’s “enormous promise and its risks.” Top administration officials are reportedly meeting multiple times each week to develop executive orders and other actions that the federal government can take with respect to artificial intelligence. Next month, Vice President Harris will meet with civil rights leaders, consumer protection groups and others as part of the administration’s ongoing efforts.
- FTC Issues Warning. In a June 13 blog post, the Federal Trade Commission warned that it “will hold companies accountable for how they obtain, retain, and use the consumer data that powers their algorithms.” The post also says that “companies that ignore consumers’ rights to control their data do so at their peril.”
- State AGs Weigh In. On June 12, attorneys general from 23 states and territories submitted a comment to the National Telecommunications and Information Administration (NTIA) calling for independent standards for transparency, testing, assessments, and audits in AI governance. They also called for certain measures to be adopted into federal law, including human review of AI-driven decisions in certain high-risk use cases, alignment of AI governance standards with state privacy rights, and concurrent enforcement powers for the federal government and state attorneys general. The letter was signed by the attorneys general of Arizona, Arkansas, California, Colorado, Connecticut, Delaware, the District of Columbia, Illinois, Maine, Minnesota, Nevada, New Jersey, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, South Carolina, South Dakota, Tennessee, Vermont, Virginia, and the U.S. Virgin Islands. The NTIA is within the U.S. Department of Commerce and advises the President on telecommunications and information policy.
- Global Insurance Regulators Provide Update. Last week’s IAIS Global Seminar in Seattle featured a panel focused on AI, where Rhode Island Superintendent Beth Dwyer (who chairs the NAIC's Big Data and AI Working Group) said, "We feel very strongly that it is our duty to avoid as many unintended consequences to consumers as possible, while still allowing the benefits." Immediately following the Global Seminar on June 16, the Steering Committee of the EU-U.S. Insurance Dialogue Project provided an update on its work and future priorities, including with respect to AI. (The Dialogue Project began in 2012 as a means of enhancing cooperation between EU and U.S. insurance regulators.) The summary report released in connection with the Dialogue Project provides a helpful overview of efforts on both sides of the Atlantic.
- Employers, Beware! NYC AI Audit Law Will Soon Be Effective. On July 5, 2023, New York City will begin enforcing Local Law 144, which regulates the use of AI-driven tools in certain employment decisions. Local Law 144 applies to all employers and employment agencies that hire employees who reside in New York City, regardless of the employer’s location. Under the law, employers will be liable if they fail to comply with the law’s requirements, which include commissioning and publishing the results of an annual independent bias audit; providing notice of the use of the AI-driven hiring tools to candidates and employees; and providing an alternative selection process or reasonable accommodation to those who request it.