
    European AI Regulation

    EU AI Act

    The world's first comprehensive AI law. Navigate risk-based requirements and prepare your AI systems for the European market.

    Try AI Trust Assessment

    What Is the EU AI Act?

    The EU AI Act (Regulation 2024/1689) is the European Union's landmark regulation establishing a comprehensive legal framework for artificial intelligence. Adopted in 2024, it creates harmonized rules for the development, deployment, and use of AI systems within the EU.

    The regulation takes a risk-based approach, categorizing AI systems based on their potential for harm. Higher-risk systems face stricter requirements, while lower-risk applications have minimal obligations. This graduated approach aims to foster innovation while protecting fundamental rights.

    Importantly, the Act has extraterritorial reach—it applies to any organization whose AI systems are used within the EU, regardless of where the organization is based. This makes it effectively a global regulation for AI companies serving European markets.

    Implementation Timeline

    Feb 2025: Prohibited AI practices become enforceable
    Aug 2025: GPAI (General Purpose AI) requirements apply
    Aug 2026: Most provisions, including high-risk AI requirements
    Aug 2027: Full applicability, including embedded AI systems
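The phased schedule above can be read as a simple date lookup. A minimal sketch (milestone labels paraphrased from the timeline; the exact day, the 2nd of each month, comes from the Act's entry-into-force schedule):

```python
from datetime import date

# EU AI Act milestones, paraphrased from the timeline above.
MILESTONES = [
    (date(2025, 2, 2), "Prohibited AI practices become enforceable"),
    (date(2025, 8, 2), "GPAI (General Purpose AI) requirements apply"),
    (date(2026, 8, 2), "Most provisions, including high-risk AI requirements"),
    (date(2027, 8, 2), "Full applicability, including embedded AI systems"),
]

def obligations_in_force(on: date) -> list[str]:
    """Return the milestones already applicable on a given date."""
    return [label for deadline, label in MILESTONES if on >= deadline]

# By mid-2026, the prohibition and GPAI milestones already apply.
print(obligations_in_force(date(2026, 6, 1)))
```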

    Risk-Based Classification

    The EU AI Act categorizes AI systems into four risk levels, each with different requirements.

    Unacceptable Risk

    AI systems that pose a clear threat to safety, livelihoods, or rights. These are banned outright.

    Examples: Social scoring, manipulative AI, real-time biometric identification in public spaces

    High Risk

    AI systems in critical areas that must meet strict requirements before deployment.

    Examples: Employment decisions, credit scoring, medical devices, critical infrastructure

    Limited Risk

    AI systems with transparency obligations. Users must know they're interacting with AI.

    Examples: Chatbots, deepfakes, emotion recognition systems

    Minimal Risk

    Most AI systems fall into this category, with no specific requirements, though voluntary codes of conduct are encouraged.

    Examples: Spam filters, AI-enabled games, recommendation systems
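As a rough illustration of the four-tier scheme, the example use cases above can be mapped to tiers with a simple lookup. This is a hypothetical, heavily simplified sketch for illustration only; real classification requires legal analysis of the Act's Article 5 prohibitions and Annex III categories.

```python
# Hypothetical mapping of example use cases to EU AI Act risk tiers,
# simplified from the examples above. Not legal advice.
RISK_TIERS = {
    "unacceptable": {"social scoring", "manipulative AI",
                     "real-time public biometric identification"},
    "high": {"employment decisions", "credit scoring",
             "medical devices", "critical infrastructure"},
    "limited": {"chatbots", "deepfakes", "emotion recognition"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known example, else 'minimal'."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"  # default tier: most AI systems land here

print(classify("credit scoring"))  # high
print(classify("spam filters"))    # minimal
```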

    High-Risk AI Requirements

    If your AI system is classified as high-risk, you must meet these requirements.

    Risk Management System

    Ongoing identification, analysis, and mitigation of risks

    Data Governance

    Training data quality, relevance, and bias testing

    Technical Documentation

    Comprehensive records of system design and operation

    Record Keeping

    Automatic logging of events during system operation

    Transparency

    Clear information to deployers about system capabilities and limitations

    Human Oversight

    Mechanisms enabling human monitoring and intervention

    Accuracy & Robustness

    Appropriate levels of accuracy, security, and reliability

    Conformity Assessment

    Self or third-party assessment before market placement

    Global Impact

    Why the EU AI Act Matters

    World's first comprehensive AI regulation with global reach (affects any AI placed on the EU market or whose output is used in the EU)

    Extraterritorial scope means non-EU companies must comply if serving EU markets

    Heavy penalties: up to €35 million or 7% of global turnover for violations

    Creates de facto global standard as companies adopt EU requirements worldwide

    Specifically targets generative AI and foundation models with additional requirements

    Phased implementation through 2027 gives time to prepare

    Significant Penalties

    Non-compliance can result in fines up to €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices. Other violations, including breaches of high-risk AI obligations, can reach €15 million or 3% of turnover.
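The "whichever is higher" rule is a simple maximum. A sketch of the arithmetic (illustrative only, simplified from the Act's penalty tiers and ignoring adjustments such as reduced caps for SMEs):

```python
def max_fine(global_turnover_eur: float, prohibited: bool) -> float:
    """Upper bound of the fine: fixed cap or percentage of global
    annual turnover, whichever is higher (simplified)."""
    if prohibited:
        # Prohibited AI practices: €35m or 7% of turnover
        return max(35_000_000, 0.07 * global_turnover_eur)
    # Other violations, e.g. breaches of high-risk obligations:
    # €15m or 3% of turnover
    return max(15_000_000, 0.03 * global_turnover_eur)

# For a company with €1bn turnover, 7% (€70m) exceeds the €35m floor.
print(max_fine(1_000_000_000, prohibited=True))  # 70000000.0
```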

    How ZIVIS Helps

    Risk Classification

    Determine which risk category your AI systems fall into and understand the specific requirements that apply to each.

    Compliance Gap Analysis

    Comprehensive assessment of your current AI practices against EU AI Act requirements, with prioritized remediation roadmap.

    Technical Documentation

    Support developing the technical documentation and quality management systems required for high-risk AI conformity assessment.

    GPAI Compliance

    Specialized guidance for General Purpose AI (including LLMs and foundation models) on transparency obligations and systemic risk assessments.

    Ready for EU AI Act Compliance?

    Start preparing now. Our assessment identifies your risk classification and compliance gaps.

    Learn About Our Framework