EU AI Act meets security: what the risk classification means for your AI system
“The legal team read the EU AI Act. The engineering team hasn’t. Compliance is due in five months.”
TL;DR
The EU AI Act has teeth: 35 million euros or 7% of global revenue. Prohibited practices enforceable since February 2025. GPAI obligations since August 2025. Full high-risk compliance due August 2026. Security obligations include documented adversarial testing, incident reporting, cybersecurity protections, and consistent behavior under adversarial input. Most AI teams are reading legal summaries. This post translates it into engineering. For the defense architecture that satisfies the robustness requirements, see Defense-in-depth for LLM applications.

What is the risk classification?
Four tiers, each with escalating obligations.
Prohibited (enforceable February 2, 2025). AI practices that are banned entirely: social scoring systems that disadvantage people based on behavioral patterns, real-time biometric surveillance in public spaces (with narrow exceptions for law enforcement), AI designed to manipulate vulnerable populations, and emotion recognition in workplaces and educational institutions. If your system does any of these, the obligation is simple: stop.
High-risk (compliance due August 2, 2026). AI systems used in critical domains where failure has significant consequences: critical infrastructure management, education and vocational training (systems that determine access), employment (recruiting, hiring, evaluation), essential services (credit scoring, insurance, emergency services), law enforcement, migration and border management, and justice administration. Most enterprise AI deployments that touch decisions about people will land here.
Limited risk. Systems that must be transparent: chatbots must disclose they’re AI, deepfake content must be labeled, emotion recognition systems must inform users. The obligation is disclosure, not restriction.
Minimal risk. Everything else. No specific obligations. Most AI applications (content recommendation, spam filtering, image editing) fall here.
The classification that matters for security teams is high-risk and GPAI with systemic risk. These are where the technical obligations live.
What are the security obligations for high-risk systems?
The EU AI Act doesn’t use the word “security” in isolation. It embeds security requirements throughout the technical obligations.
Documented adversarial testing. High-risk systems must undergo testing against adversarial inputs. The regulation requires consistent response behavior despite input variations. Read that carefully: “consistent response despite input variations” is a legal mandate for adversarial robustness. A system that jailbreaks under specific prompts fails this requirement.
Risk assessment. Identify and mitigate risks specific to the AI system’s intended purpose and foreseeable misuse. This maps to threat modeling: what are the attacks, what’s the impact, what controls are in place? For AI systems, this means assessing prompt injection, data extraction, privilege escalation, and the other OWASP LLM Top 10 categories.
Logging and auditability. The system must maintain logs of its operation, including automated decisions, with sufficient detail for post-incident investigation. For AI systems, this means logging prompts, responses, tool calls, and data access.
Cybersecurity protections. High-risk systems must have an “adequate level” of cybersecurity protection. The regulation doesn’t specify what “adequate” means, but it’s interpreted as defense-in-depth appropriate to the risk level. For LLM systems, this includes the input validation, semantic guards, privilege separation, and output filtering described in the defense-in-depth architecture.
Human oversight. High-risk systems must be designed for effective human oversight. For AI agents, this means HITL gates for high-impact decisions, the ability for humans to override or halt the system, and clear indicators of when the system is operating autonomously.
What are the GPAI obligations?
General-Purpose AI (GPAI) models have specific obligations that took effect August 2, 2025. GPAI models with systemic risk (determined by compute threshold or Commission designation) face additional requirements.
All GPAI providers must:
- Maintain technical documentation describing the model
- Provide instructions for use to downstream deployers
- Publish a sufficiently detailed summary of training data content
- Comply with the Copyright Directive
- Adopt the GPAI Code of Practice (voluntary framework submitted by independent experts)
GPAI with systemic risk must additionally:
- Conduct documented model evaluations including adversarial testing
- Assess and mitigate systemic risks
- Track and report serious incidents to the AI Office and national authorities without undue delay
- Ensure adequate cybersecurity protections
Pre-existing GPAI models (placed on market before August 2, 2025) have until August 2, 2027 to fully comply. New models must comply from day one.
How does this map to NIST AI RMF?
Organizations already implementing NIST AI RMF are building toward EU AI Act compliance, but the mapping isn’t one-to-one.
| EU AI Act Requirement | NIST AI RMF Function | Gap |
|---|---|---|
| Risk assessment | MAP: Identify and assess risks | NIST is broader; EU AI Act requires AI-specific threats |
| Documentation | GOVERN: Create policies and documentation | NIST includes organizational; EU Act focuses on technical |
| Adversarial testing | MEASURE: Quantify risk through testing | NIST recommends; EU Act mandates |
| Incident response | MANAGE: Implement controls and monitoring | EU Act adds reporting obligations |
| Human oversight | GOVERN + MANAGE | EU Act has specific design requirements |
NIST is voluntary. The EU AI Act is mandatory with penalties up to 35 million euros or 7% of global annual revenue, whichever is higher. For context: 7% of a $10 billion revenue company is $700 million. The penalties are designed to be material for large technology companies.
NIST’s Control Overlays for Securing AI Systems (COSAIS, in development) will provide more specific control mappings based on NIST SP 800-53, potentially bridging the gap between the frameworks. ISO 42001 (AI Management System) provides the management system standard that complements both.
What does incident reporting look like?
GPAI providers with systemic risk must report serious incidents to the EU AI Office and relevant national authorities “without undue delay.” The regulation doesn’t define a specific hour count, but “without undue delay” in EU regulatory context typically means as soon as the organization becomes aware, with follow-up detail within days.
What’s reportable: Serious incidents involving the AI system that resulted in or could result in harm. This includes: security breaches where the AI system was exploited, safety failures where the system produced harmful outputs, and systemic risks that materialized in production.
For AI security teams, this means having an incident response plan that specifically covers AI-related incidents, not just traditional security incidents. Prompt injection that leads to data exfiltration, jailbreaking that produces harmful content at scale, and model failures that affect critical decisions are all potentially reportable.
Key takeaways
- Three enforcement waves: prohibited practices (Feb 2025), GPAI obligations (Aug 2025), high-risk compliance (Aug 2026)
- Penalties: 35 million euros or 7% of global revenue. Designed to be material.
- Security obligations include adversarial testing, risk assessment, cybersecurity protections, logging, and human oversight
- “Consistent response despite input variations” is a legal mandate for adversarial robustness
- GPAI with systemic risk: model evaluations, incident reporting, cybersecurity protections
- NIST AI RMF maps partially to EU AI Act but is voluntary. EU AI Act is mandatory.
- AI security teams need AI-specific incident response plans. Prompt injection exploits are potentially reportable.
FAQ
What are the enforcement deadlines?
February 2025: prohibited practices. August 2025: GPAI obligations. August 2026: high-risk system compliance. August 2027: pre-existing GPAI models. Penalties up to 35 million euros or 7% of global revenue.
What security obligations does the Act impose?
Documented adversarial testing, risk assessment, cybersecurity protections, logging of automated decisions, consistent behavior under adversarial input, and human oversight design. For GPAI with systemic risk: model evaluations and incident reporting.
How does risk classification work?
Four tiers: Prohibited (banned), High-risk (critical domains, full compliance), Limited risk (transparency required), Minimal risk (no obligations). Most enterprise AI affecting decisions about people lands in high-risk.
How does EU AI Act map to NIST?
Risk assessment maps to NIST MAP. Documentation to GOVERN. Testing to MEASURE. Incident response to MANAGE. The key difference: NIST is voluntary, EU AI Act is mandatory with penalties.
Want to work together?
I take on projects, advisory roles, and fractional CTO engagements in AI/ML. I also help businesses go AI-native with agentic workflows and agent orchestration.
Get in touch