European Union AI Regulation

The AI Act

Summary of the 272-page draft by ChatGPT-4


  • Subject and Scope: The AI Act aims to ensure a high level of protection of health, safety, and fundamental rights, while excluding national security from its scope.


  • AI System Definition: Adjusted to align closely with international standards (notably the OECD definition), with the aim of excluding simple, traditional software systems. Guidelines on applying the definition will be developed.


  • Prohibited AI Practices: Prohibits, among other practices, real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), untargeted scraping of facial images, certain uses of emotion recognition, and predictive policing of individuals.


  • High-risk AI Systems: Adds safeguards for retrospective ("post") remote biometric identification by law enforcement and details the classification of high-risk AI systems, including limitations on their scope.


  • Law Enforcement Exceptions: Allows derogations from conformity assessment and, under specific conditions, real-world testing without prior authorization for law enforcement.


  • Fundamental Rights Impact Assessment: Certain deployers, notably public bodies and private entities providing public services, must assess the impact on fundamental rights before deploying high-risk AI systems, a process facilitated by the AI Office.


  • Testing Outside Regulatory Sandboxes: Permits real-world testing of high-risk AI systems under stringent conditions, including in the areas of law enforcement, migration, asylum, and border control management.


  • General Purpose AI Models: Introduces obligations for these models, including transparency documentation and, for models posing systemic risk, risk assessments, cybersecurity protection, and incident reporting. Compliance can be demonstrated through industry codes of practice.


  • Governance and Enforcement: Establishes a centralized oversight system for general purpose AI models, including the creation of the AI Office and an enhanced role for the AI Board.


  • Derogation from Conformity Assessment: Maintains exemptions from conformity assessment for certain AI systems under specific conditions.


  • Existing AI Systems: Provides transitional periods for high-risk AI systems and general purpose AI models already in use, including a deadline for public authorities to bring existing systems into compliance.


  • Penalties: Adjusts penalties for non-compliance, with fines for prohibited practices of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher (a worked sketch follows this list).


  • Implementation and Transitional Periods: Sets deadlines for different provisions, with a 24-month general application period and variations for specific elements (e.g., prohibitions apply after 6 months and general purpose AI obligations after 12 months).
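
A minimal sketch in Python of the penalty ceiling described in the Penalties bullet above; the function name is hypothetical, and the "whichever is higher" rule for undertakings is an assumption based on the final text:

    def max_fine_prohibited_practice(worldwide_annual_turnover_eur: float) -> float:
        """Ceiling on fines for prohibited practices: EUR 35 million or 7% of
        total worldwide annual turnover, whichever is higher (assumed rule)."""
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # Example: at EUR 1 billion turnover, 7% (EUR 70 million) exceeds the
    # EUR 35 million floor, so the ceiling is EUR 70 million.
    print(max_fine_prohibited_practice(1_000_000_000))  # 70000000.0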


Prohibited AI practices

  • Real-Time Biometric Identification in Public Spaces: Strictly limits the use of real-time remote biometric identification by law enforcement, with narrowly defined exceptions for cases of substantial public interest, such as searching for victims of serious crimes or preventing specific terrorist threats.


  • Social Scoring by Public or Private Actors: Prohibits AI systems that evaluate or classify individuals based on social behavior or personal characteristics where the resulting score leads to detrimental treatment in unrelated contexts, or treatment that is unjustified or disproportionate.


  • Untargeted Scraping of Facial Images: Bans untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases, in order to prevent mass surveillance.


  • Emotion Recognition: Prohibits AI-based emotion recognition in workplaces and educational institutions, except for medical or safety reasons, due to concerns about scientific validity and privacy.


  • Biometric Categorization Based on Sensitive Traits: Forbids biometric categorization systems that infer sensitive attributes such as race, sexual orientation, or political opinions, in order to prevent discrimination.


  • Predictive Policing: Prohibits AI systems that assess the risk of an individual committing a criminal offense based solely on profiling or personality traits; an exception applies where the system supports a human assessment grounded in objective facts directly linked to criminal activity.


  • Exploitative or Manipulative Practices: Bans AI systems that exploit the vulnerabilities of specific groups (for example, due to age or disability) or that manipulate behavior in ways that cause or are likely to cause significant harm, protecting individuals' autonomy.


High-risk AI systems

  • Critical Infrastructure: AI systems used as safety components in the management and operation of critical infrastructure, whose failure could put people's life and health at risk.


  • Education and Vocational Training: Systems that determine access to educational and vocational training institutions, assess students' exams, or evaluate participants in training programs.


  • Employment, Workers Management, and Access to Self-Employment: AI systems used in recruitment, performance evaluation, and career-advancement decisions, which can significantly affect individuals' employment opportunities and working conditions.


  • Essential Private and Public Services: AI applications that determine access to essential services such as social security, social benefits, or health services, where incorrect or biased decisions could seriously harm individuals' rights and well-being.


  • Law Enforcement: AI systems used in law enforcement that could affect individuals' fundamental rights and freedoms, including systems for profiling, risk assessments, or evidence evaluation.


  • Migration, Asylum, and Border Control Management: Systems involved in the management and control of migration, asylum requests, and border control operations, where the use of AI could have substantial impacts on individuals' rights.


  • Administration of Justice and Democratic Processes: Including AI used in the judiciary and democratic processes, where the use of AI could influence the fairness of proceedings or the outcome of elections and referendums.


  • Post-Remote Biometric Identification and Emotion Recognition Systems: Classifies retrospective ("post") remote biometric identification by law enforcement, along with certain biometric identification and emotion recognition systems not otherwise prohibited, as high-risk, subject to additional limitations and safeguards.


  • Governance for General Purpose AI Models: Introduces obligations for general purpose AI (GPAI) models used in high-risk systems, requiring documentation, risk assessments, and cybersecurity measures. Compliance can be demonstrated through industry-developed codes of practice.


  • Market Surveillance: Requires market surveillance authorities to ensure compliance with the AI Act, including for high-risk AI systems used in law enforcement, with designated authorities responsible for oversight.
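
For illustration only, a minimal Python sketch (all names hypothetical, not drawn from the Act) of how the high-risk categories above might be encoded for a first-pass internal compliance screen:

    # Hypothetical first-pass screen over the high-risk categories summarized
    # above; real classification requires legal analysis, since the Act
    # attaches exceptions and safeguards to each category.
    HIGH_RISK_CATEGORIES = {
        "critical_infrastructure",
        "education_and_vocational_training",
        "employment_and_worker_management",
        "essential_private_and_public_services",
        "law_enforcement",
        "migration_asylum_border_control",
        "justice_and_democratic_processes",
        "post_remote_biometric_identification",
    }

    def flag_for_high_risk_review(category: str) -> bool:
        return category in HIGH_RISK_CATEGORIES

    print(flag_for_high_risk_review("law_enforcement"))  # True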