EU AI Act August 2026 Deadline: What Mid-Market Companies Must Do Right Now
The EU AI Act's high-risk AI system requirements become mandatory on August 2, 2026 — less than 100 days from now. For mid-market companies deploying AI agents, chatbots, or automated decision systems, non-compliance with high-risk obligations carries fines of up to €15 million or 3% of global annual turnover, whichever is higher. Violations of prohibited AI practices can reach €35 million or 7%.
Here's what you actually need to do, stripped of legal jargon.
What Counts as "High-Risk"
Not every AI system triggers compliance obligations. High-risk systems include:
- HR and recruitment tools — resume screening, candidate ranking, interview analysis
- Credit scoring and financial assessment — loan approvals, risk pricing, insurance underwriting
- Law enforcement support — any AI used in policing, border control, or judicial processes
- Critical infrastructure management — AI controlling energy, water, transport, or healthcare systems
- Education access — exam scoring, admissions decisions, student assessment
If your company uses AI for any of these, you have a compliance deadline.
The 90-Day Checklist
Days 1-30: Inventory and Classify
- Audit every AI system you operate or procure. Document what each system does, what data it processes, and who it affects.
- Classify each system by risk level. Most marketing chatbots and content generators fall below the high-risk threshold. Most HR and financial tools fall above it.
- Identify third-party AI providers. You're responsible for compliance even when using vendor-provided AI — check that your vendors are preparing too.
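A lightweight way to start the inventory is one structured record per system, checked against the high-risk use cases listed above. A sketch, assuming hypothetical system names and a simplified category set — the Act's Annex III, not this list, is the authoritative classification source:

```python
from dataclasses import dataclass, field

# Simplified high-risk use-case categories from the list above;
# real classification requires legal review against Annex III.
HIGH_RISK_CATEGORIES = {
    "hr_recruitment", "credit_scoring", "law_enforcement",
    "critical_infrastructure", "education_access",
}

@dataclass
class AISystem:
    name: str
    purpose: str
    vendor: str                       # "internal" or the third-party provider
    data_processed: list = field(default_factory=list)
    use_case: str = "other"

    @property
    def high_risk(self) -> bool:
        return self.use_case in HIGH_RISK_CATEGORIES

# Hypothetical inventory entries for illustration.
inventory = [
    AISystem("resume-ranker", "ranks job applicants", "AcmeHR",
             ["CVs", "assessment scores"], use_case="hr_recruitment"),
    AISystem("faq-bot", "answers product questions", "internal",
             ["chat logs"], use_case="marketing"),
]

for system in inventory:
    tier = "HIGH-RISK" if system.high_risk else "lower tier"
    print(f"{system.name} ({system.vendor}): {tier}")
```

Even a table this simple answers the first compliance question — how many high-risk systems do we actually run, and which ones come from vendors we need to chase.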
Days 31-60: Build Documentation
- Technical documentation for each high-risk system: training data, model architecture, performance metrics, known limitations.
- Risk management plan describing how you monitor, mitigate, and respond to AI-related risks.
- Human oversight procedures — who can override the AI, when, and how. The Act requires meaningful human control, not just a "human in the loop" label.
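Meaningful human control means the system can actually block or reroute a decision to a person, not merely log it. A minimal sketch of such an override gate — the routing rule, threshold, and names here are illustrative assumptions, not anything the Act prescribes:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject: str
    outcome: str                      # e.g. "approve" / "reject"
    confidence: float
    reviewed_by: Optional[str] = None  # set only when a human signs off

def requires_human_review(d: Decision, threshold: float = 0.85) -> bool:
    # Example rule: route adverse or low-confidence outcomes to a person.
    return d.outcome == "reject" or d.confidence < threshold

def finalize(d: Decision, reviewer: Optional[str] = None) -> Decision:
    """Block a flagged decision from taking effect until a human signs off."""
    if requires_human_review(d):
        if reviewer is None:
            raise RuntimeError("human sign-off required before this decision takes effect")
        d.reviewed_by = reviewer
    return d

auto = finalize(Decision("loan-123", "approve", 0.96))               # flows through
manual = finalize(Decision("loan-456", "reject", 0.91), reviewer="j.doe")
print(auto.reviewed_by, manual.reviewed_by)   # None j.doe
```

The design point is that the override sits in the decision path itself: a flagged decision cannot reach the affected person without a named reviewer, which is the kind of procedure the documentation should describe.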
Days 61-90: Implement Governance
- Quality management system — processes for testing, validating, and continuously monitoring high-risk AI systems.
- Transparency measures — users must be informed when they're interacting with AI or subject to AI-driven decisions.
- Data governance — ensure training data meets quality, relevance, and bias-mitigation standards.
- Registration — high-risk AI systems must be registered in the EU database before deployment.
What's Already in Effect
Prohibited AI practices have been banned since February 2, 2025. If your company uses AI for:
- Social scoring (ranking individuals based on behavior or personal characteristics)
- Real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions)
- Manipulative or exploitative AI targeting vulnerable groups
Stop immediately. The fines for prohibited practices are the highest tier.
What AIwire Thinks
Most mid-market companies we've spoken with are underprepared. The 90-day window is tight but manageable if you start now. Priority one: audit and classify. Many companies will discover they have fewer high-risk systems than they fear — but you can't know that without looking. Budget €15-50K for compliance depending on how many high-risk systems you operate. That's far less than the fine for non-compliance.