The SEC cracks down on potential conflicts of interest for broker-dealers and investment advisors using predictive data analytics, the “No Robot Bosses Act of 2023” is introduced in the Senate to regulate employers’ use of AI, and the Chinese government takes a big step to tackle the use of generative AI — we’re diving into these developments and more in the latest briefing.
Regulatory and Legislative Developments
- SEC Cracks Down on AI, Robo-Advisors and Potential Conflicts of Interest. On July 26, the Securities and Exchange Commission proposed a regulation to combat potential conflicts of interest arising from the use of predictive data analytics by broker-dealers and investment advisors. The proposed rule seeks “to eliminate, or neutralize the effect of, certain conflicts of interest associated with broker-dealers’ or investment advisers’ interactions with investors through these firms’ use of technologies that optimize for, predict, guide, forecast, or direct investment-related behaviors or outcomes.” The proposed regulation would require broker-dealers and investment advisors that use predictive analytics and similar investor-facing technologies to identify and address the resulting conflicts of interest, preventing firms from placing their own interests ahead of investors’ interests. Read more about the proposed regulation on Faegre Drinker’s Broker-Dealer Regulation & Litigation Insights Blog.
- White House Convenes Roundtable on Data Broker Practices. On August 15, the White House convened a roundtable to discuss ways that the data broker industry monetizes personal information. Among other things, participants asserted that the “data broker economy enables discriminatory practices in credit underwriting, insurance, housing, employment, and advertising, continuing patterns of exclusion that disproportionately harm underserved and vulnerable groups.” Earlier in the day, the CFPB announced that it will be developing rules under the Fair Credit Reporting Act to “prevent misuse and abuse” by data brokers.
- Senate Introduces Legislation to Regulate Employers’ Use of Artificial Intelligence. On July 20, Senators Robert Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act.” According to its sponsors, the bill aims to “safeguard workers’ rights, autonomy, and dignity in the workplace from discriminatory decisions and dangerous working conditions being set by algorithms.” The bill would: prohibit employers from relying exclusively on an automated decision system in making an employment-related decision; require pre-deployment and periodic testing and validation of automated decision systems for issues such as discrimination and bias before such systems are used in employment-related decisions; require employers to train individuals or entities on the proper operation of automated decision systems; mandate independent, human oversight of automated decision system outputs before those outputs are used to aid an employment-related decision; require timely disclosures from employers on the use of automated decision systems, the data inputs to and outputs from these systems, and employee rights related to the decisions aided by these systems; and establish a Technology and Worker Protection Division at the Department of Labor to regulate the use of automated decision systems in the workplace.
- HFSC Ranking Member Waters Requests GAO Study on AI and PropTech. House Financial Services Committee Ranking Member Maxine Waters (D-CA) sent a letter to the Government Accountability Office (GAO) expressing concern about online platforms, tenant screening companies, rent-setting companies and similar companies that use AI and property technology (PropTech). The letter requests that the GAO assess the impact of AI and PropTech on consumers’ ability to access affordable housing and report back to Congress with its findings and recommendations. The requested study would include evaluation of 1) how PropTech companies are using AI and how the technologies are being assessed for compliance with antitrust, fair housing and fair lending laws; 2) the extent to which the Federal Housing Finance Agency (FHFA) and the Department of Housing and Urban Development (HUD) allow for the use of AI in their policies and practices; and 3) the benefits and implications of the technologies on the availability and affordability of housing and mortgage lending.
- States Collaborate to Propose AI Legislation. State legislators are collaborating on new legislation that would govern the development and use of AI in both the public and private sectors. On August 10, the National Conference of State Legislatures released a report on “Approaches to Regulating Artificial Intelligence,” which provides information for state legislators to help them determine the role such legislation could play in regulating the use and development of AI. According to the report, at least 25 states, Puerto Rico and D.C. have seen proposed legislation relating to AI in 2023, with resolutions passing in at least 14 states and Puerto Rico. State legislators are concerned that the federal government is not acting fast enough and are looking to each other to share ideas and lessons learned. Of greatest concern are risks related to data privacy, security, transparency, labor market impacts and bias/discrimination.
- NAIC Receives Comments on Model Bulletin. At the NAIC’s Summer National Meeting in Seattle, the Innovation, Cybersecurity, and Technology (H) Committee heard comments on a draft Model Bulletin addressing Use of Algorithms, Predictive Models and AI Systems by Insurers. The committee heard from consumer representatives, who said the bulletin is not sufficiently prescriptive, does not build upon the NAIC’s AI principles, does not tackle proxy discrimination and is not strong enough on testing. It also heard from several industry trade groups, who were generally supportive but expressed concerns about the draft’s definitions and its provisions concerning third-party vendors. Written comments are due on September 5.
- Colorado Division of Insurance Forges Ahead. The Colorado Division of Insurance held its third stakeholder session relating to unfair discrimination in private passenger auto insurance on August 24. The session included a presentation by the American Academy of Actuaries on potential methods for identifying and mitigating bias. The Division will hold a virtual rulemaking hearing on its Proposed AI Governance Regulation for life insurers on August 31.
- China Tackles Generative AI. On August 15, China’s Interim Administrative Measures for Generative Artificial Intelligence Services went into effect. Key provisions require generative AI service providers to conduct security reviews and register their algorithms with the government under certain circumstances. According to a draft national standard, AI-generated content should also be watermarked. Additionally, the new measures empower regulators to address generative AI services outside of China that violate the new rules.
What We’re Reading
- AI Regulations Around the World. A recent review in the journal Nature discusses how China, the EU and the U.S. are currently taking different approaches to AI regulation. To date, China has developed the most restrictive and most comprehensive framework. The country has a specialized agency, the Cyberspace Administration of China (CAC), which oversees the use of AI, the internet and other digital technologies. As reported above, the CAC recently published new rules for generative AI; CNN Business explores China’s major step in regulating generative AI services, and Forbes highlights the differences between China’s approach to AI regulation and those of the U.S. and EU.
In the EU, multiple agencies are involved in regulating various aspects of the digital economy, including AI. The current major focus is on the proposed Artificial Intelligence Act, which passed the EU Parliament but must still be agreed upon by the European Commission and the Council of the EU. The Act emphasizes a precautionary approach, risk assessment and risk-based, graduated controls. Several stakeholder organizations have shared their support for, as well as criticisms of, the Act.
- AI Helps People with Facial Paralysis to Communicate. There are plenty of current articles, blog posts and internet musings that shed a less than positive light on the interface between AI and humanity. However, we read these two studies and smiled. Both studies involve patients who have lost the ability to speak. In the first study, researchers at the University of California, San Francisco placed a rectangular array of hundreds of electrodes on the surface of the cortex of the study participant’s brain. AI algorithms were then trained to recognize patterns in brain activity and connect that activity to speech. The team was able to produce 78 words per minute (natural conversation is approximately 160 words per minute) with a median error rate of 25.5%. They also built an AI-generated avatar of the participant and trained that avatar to speak in the participant’s voice (learned from a recording in her wedding video). In the second study, a research team at Stanford used arrays of electrodes introduced just a few millimeters below the brain surface in areas known to be associated with speech. The team trained an algorithm on phonemes and obtained a 50-word vocabulary with a 9.1% error rate. When the vocabulary was increased to 125,000 words, the error rate rose to 23.8%. Both studies provide promising evidence for a future where lost speech can be restored to a conversational level.