What is the EU AI Act?
The European Union's (EU) Artificial Intelligence (AI) Act, which came into force on August 1, 2024, is a groundbreaking regulation establishing a framework to ensure AI technologies are safe, transparent, and respect fundamental rights. Covering all types of AI except systems used exclusively for military and national-security purposes, scientific research, or purely personal, non-professional use, the regulation categorizes AI systems by the risk they pose, with significant implications for businesses and governments.
Key Dates and EU AI Act Implementation Timeline
The AI Act will be implemented in stages over the 6 to 36 months following its entry into force, with various provisions taking effect at different times:
- August 2024: AI Act enters into force.
- February 2025: Provisions on prohibited AI systems and general scope take effect, banning the use of AI systems with unacceptable risks.
- May 2025: Codes of practice for general-purpose AI models due to be finalized.
- August 2025: Rules for general-purpose AI models (GPAIMs) and governance provisions apply.
- August 2026: Obligations for high-risk AI systems apply.
- August 2027: Full compliance required across all risk categories, including GPAIMs placed on the market before August 2025.
- August 2030: High-risk AI systems already in use by public authorities must be brought into compliance.
Who does the EU AI Act apply to?
The EU AI Act applies to any organization that develops, deploys, or uses AI systems within the European Union. This includes companies established in the EU as well as companies outside the EU whose AI systems are placed on, or used in, the EU market. The Act covers a wide range of AI applications, categorized by risk level, with specific obligations for high-risk systems, general-purpose AI models, and other categories.
What are the unacceptable risks under the EU AI Act? What AI practices does Article 5 prohibit?
Under the EU AI Act, unacceptable risks refer to AI systems that pose a clear threat to fundamental rights, safety, and values. These systems are strictly prohibited under Article 5 of the Act. The prohibited AI practices include:
- AI systems that manipulate human behavior to circumvent users' free will, such as voice-activated toys that encourage dangerous behavior in children.
- Social scoring systems used by governments or private entities to evaluate or rank individuals based on their behavior, socio-economic status, or personal characteristics, which can lead to unfair discrimination.
- Real-time remote biometric identification systems used in public spaces for law enforcement purposes, such as facial recognition, except in narrowly defined and regulated circumstances.
- AI systems used for predictive policing that forecast criminal behavior based on profiling, location, or past crimes, which can lead to biased or discriminatory outcomes.
These practices are banned outright to prevent harm and protect fundamental rights within the EU.
Practical Implications for Governments
Governments must establish national authorities by August 2025 to oversee AI compliance and market surveillance. These bodies will ensure that AI systems, especially those classified as high-risk, meet the required safety and transparency standards. They will also be responsible for enforcing penalties for non-compliance and coordinating with the EU-level AI Office.
Impact on Companies
For companies operating in the EU or planning to enter the market, understanding and adhering to the AI Act’s risk classifications is crucial (a first-pass triage sketch follows this list):
- Minimal Risk AI: Includes systems like recommendation engines and spam filters, which face no mandatory obligations but can benefit from voluntary codes of conduct.
- Specific Transparency Risk AI: Systems such as chatbots must clearly disclose that users are interacting with AI, and AI-generated content must be labeled as such. Biometric categorization and emotion recognition systems must inform users of their presence and function, ensuring transparency and user consent.
- High-Risk AI: These systems, used in sensitive areas like recruitment and healthcare, must comply with stringent requirements, including risk mitigation, high data quality, activity logging, detailed documentation, and human oversight. Regulatory sandboxes will support innovation while ensuring compliance.
- Unacceptable Risk AI: AI systems that pose a clear threat to fundamental rights are banned. This includes manipulative AI, social scoring systems, and certain biometric applications, with limited exceptions for law enforcement under strict conditions.
- General-Purpose AI Models (GPAIMs): Providers must disclose when content is AI-generated, put safeguards in place against generating illegal content, and subject models that pose systemic risks to thorough evaluations. Providers of GPAIMs placed on the market before August 2025 have until August 2027 to comply.
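To make the classification exercise concrete, below is a minimal Python sketch of how a compliance team might run a first-pass triage of an internal AI inventory against these four tiers. The keyword rules, class names, and example systems are all hypothetical illustrations, not an official classification method; actual categorization requires legal assessment against Article 5 and Annex III of the Act.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright under Article 5
    HIGH = "high"                  # stringent obligations apply
    TRANSPARENCY = "transparency"  # disclosure duties (chatbots, AI content)
    MINIMAL = "minimal"            # no mandatory obligations


@dataclass
class AISystem:
    name: str
    use_case: str  # free-text tag from a hypothetical internal inventory


# Hypothetical keyword rules for a first-pass triage only; real
# classification must follow the Act's text, with legal review.
TRIAGE_RULES = [
    ("social scoring", RiskTier.UNACCEPTABLE),
    ("recruitment", RiskTier.HIGH),
    ("medical", RiskTier.HIGH),
    ("chatbot", RiskTier.TRANSPARENCY),
    ("spam filter", RiskTier.MINIMAL),
]


def triage(system: AISystem) -> RiskTier:
    """Return the first matching tier; default to minimal pending review."""
    for keyword, tier in TRIAGE_RULES:
        if keyword in system.use_case.lower():
            return tier
    return RiskTier.MINIMAL


inventory = [
    AISystem("cv-screener", "Recruitment candidate ranking"),
    AISystem("support-bot", "Customer service chatbot"),
    AISystem("mail-filter", "Spam filter for inbound email"),
]
for s in inventory:
    print(f"{s.name}: {s.use_case} -> {triage(s).value} risk")
```

A triage like this is only a starting point for compliance planning: it flags which systems need immediate legal attention (unacceptable and high-risk tiers) before the corresponding deadlines in the timeline above.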
Strategic Considerations for Companies
- Compliance Planning: Establish internal teams to ensure compliance, especially for high-risk AI systems, and document all necessary procedures.
- Product Development Management: Careful planning is essential, as significant modifications to AI systems may require re-evaluation under the AI Act.
- Transparency and Reporting: Implement clear disclosure practices and safeguards so that AI-generated content is appropriately labeled and compliant (a minimal example is sketched after this list).
- Engagement with Authorities: Maintain active communication with national authorities to stay updated on compliance requirements.
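As a concrete illustration of the transparency and reporting point above, here is a minimal Python sketch that labels an AI-generated reply and writes a structured audit-log entry. The disclosure wording, record fields, and function are assumptions made for illustration; the Act mandates disclosure and record-keeping outcomes, not any particular implementation.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai-audit")

# Illustrative wording only; the Act requires disclosure, not this exact text.
AI_DISCLOSURE = "This response was generated by an AI system."


def respond_with_disclosure(user_query: str, model_output: str) -> str:
    """Label an AI-generated reply and record an audit-trail entry.

    A sketch under assumed requirements: the record fields and this
    function itself are hypothetical, not prescribed by the AI Act.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "ai_response",
        "query": user_query,
        "output_chars": len(model_output),
        "disclosure_shown": True,
    }
    # In practice, persist records to a durable audit store for reporting.
    audit_log.info(json.dumps(record))
    return f"{model_output}\n\n{AI_DISCLOSURE}"


print(respond_with_disclosure("When do you open?", "We open at 9am on weekdays."))
```

Keeping the disclosure and the audit record in one code path, as sketched here, makes it easier to demonstrate to national authorities that every AI-generated response was labeled.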
Conclusion
The EU AI Act marks a significant regulatory shift, impacting all businesses dealing with AI in Europe. By understanding and adapting to the risk classifications, companies can ensure compliance, avoid penalties, and leverage the regulation to enhance trust and competitiveness in the AI landscape.
About Nebuly
Nebuly is an LLM user experience platform. We help organizations understand and analyze the output of their LLMs, enabling them to offer compliant, safe, and user-friendly AI-powered user experiences. If you'd like to learn more about LLM user experience and analytics, please book a meeting with one of our experts here.