May 12, 2023

The AI Act Progresses Ahead With Approval of Key European Parliament Committees

On 11 May 2023, the European Parliament’s Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees voted by a large majority to adopt a compromise position on the draft text of the proposed AI Act. The AI Act is a landmark legislative proposal set to be one of the first and most significant sets of rules on artificial intelligence. The compromise text approved by the committees makes several key changes to the European Commission’s initial draft of the AI Act, outlined below.

Definition of AI

The definition of AI has been updated to align with that of the Organisation for Economic Co-operation and Development (OECD), with the aim of developing a common understanding of what AI is. Under the compromise text, an AI system is defined as a ‘machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.’

Prohibited AI practices

The compromise text substantially amends the list of prohibited AI practices to include bans on intrusive and discriminatory uses of AI systems. For example, the bans cover ‘real-time’ remote biometric identification systems in public spaces; biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation); predictive policing systems (based on profiling, location or past criminal behaviour); emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (in violation of human rights and the right to privacy).

High-risk AI systems

The compromise text introduces an additional layer to the classification of high-risk systems: an AI system that falls under the Annex III categories would only be deemed high-risk if it posed a ‘significant risk of harm to the health, safety or fundamental rights’. The list of high-risk areas has also been expanded to cover harm to people’s health, safety, fundamental rights or the environment, as well as AI systems used to influence voters in political campaigns and recommender systems used by social media platforms.

Foundation models

The initial draft of the AI Act did not cover AI systems without a specific purpose. Key amendments have therefore been prompted by the recent rise of generative AI systems such as ChatGPT.

Under the compromise text, ‘general purpose AI systems’ are distinguished from ‘foundation models’, the latter meaning AI models developed from algorithms and trained on large amounts of data to accomplish a wide range of downstream tasks, including tasks for which they were not specifically developed or designed.

Foundation models would be subject to stricter requirements under the compromise text proposals. For instance, providers of foundation models would have to assess and mitigate risks, comply with design, information and environmental requirements, and register in the EU database. Generative foundation models, like ChatGPT, would also have to comply with additional transparency requirements.

Next steps

This compromise text will need to be endorsed by the whole European Parliament, which is expected to occur in mid-June 2023. The AI Act will then be able to progress to trilogue negotiations between the European Parliament, Commission and Council to agree upon a final text.
