On December 8th, 2023, the European Parliament and the Council of the European Union reached a provisional agreement on the Artificial Intelligence Act (the “AI Act”).
In the past few months, the AI Act has been negotiated as part of the "trilogue" between the Council of the European Union, the European Parliament and the European Commission. Following the difficulties that arose in the past month, as discussed in our previous client update, the parties have now reached a provisional agreement that allows the legislative process to move forward.
The main elements of the provisional agreement are summarized below:
- Prohibited uses. The following uses of AI are prohibited: biometric categorization systems that use sensitive information (such as political beliefs, sexual orientation, or race), scraping of facial images to create facial recognition databases, social scoring, emotion recognition in the workplace and educational institutions, AI systems that manipulate human behavior to circumvent a person's free will, and AI systems used to exploit people's vulnerabilities.
- Requirements for "high-risk" AI systems. Certain obligations would be imposed on AI systems classified as "high-risk" (due to their significant potential to harm health, safety, fundamental rights, the environment, democracy and the rule of law), such as a requirement to conduct a mandatory fundamental rights impact assessment and a requirement to provide explanations for decisions based on high-risk AI systems.
- Requirements for GPAI systems. General-purpose AI (GPAI) systems (AI systems that can be used for various purposes, such as ChatGPT) and foundation models (large models capable of competently performing a wide range of distinct tasks) will have to adhere to transparency requirements before they are placed on the market. High-impact GPAI models with systemic risk would be subject to more stringent obligations, including conducting model evaluations, assessing and mitigating systemic risks, and reporting to the Commission.
- Remote biometric identification systems. Law enforcement authorities would be authorized to use remote biometric identification systems subject to certain safeguards, including judicial supervision.
- R&D Exemption. The AI Act would not apply to AI systems that are used for the sole purpose of research and innovation.
- National Security Exemption. The AI Act would not apply to systems that are used exclusively for military or defense purposes.
- Sanctions. Non-compliance with the AI Act would lead to fines of up to EUR 35 million or 7% of the company's global annual turnover, depending on the type of violation and the size of the company.
The parties will now continue working on amending the proposed text of the AI Act in accordance with the provisional agreement. The final text of the AI Act would need to be submitted to the representatives of the member states for endorsement and confirmed by the European Parliament and the Council of the European Union. The AI Act would apply two years after its entry into force, with certain exceptions.
Gornitzky's AI group offers a broad range of legal services tailored to address the evolving legal and regulatory challenges in the field of AI. For more information about our AI practice, visit our AI page.
Please feel free to contact us with any questions that you have on this matter.