The EU AI Act has been law since 2024. For most of that time it has existed as a future problem — something to prepare for, to assign a compliance team to, to worry about eventually. August 2, 2026, is when “eventually” arrives.
On that date, most of the Act’s remaining provisions become enforceable. High-risk AI systems must have passed conformity assessments. AI-generated content must be labelled under Article 50’s transparency obligations. Every EU member state must have an operational AI regulatory sandbox. And companies that are not in compliance face fines of up to €35 million, or 7% of their total global annual revenue — whichever is higher.
That second number is the one that matters. Seven percent of global revenue is not a fine. It is an existential event for most companies.
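The “whichever is higher” clause means the flat €35 million is only a floor, not a cap. A quick illustrative calculation (the function name and the revenue figure are ours, not the Act’s) makes the scale concrete:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Penalty ceiling for the most serious infringements under the EU AI Act:
    EUR 35 million or 7% of total worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 10 billion in worldwide annual turnover:
print(f"{max_fine_eur(10_000_000_000):,.0f}")  # 700,000,000 -- twenty times the flat EUR 35M
```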
What “High-Risk AI” Actually Means Under the Act
The EU AI Act does not regulate all AI equally. It uses a risk-tiered framework, and the obligations that kick in on August 2 apply primarily to what the Act classifies as high-risk systems. That category is broader than most companies assume.
High-risk AI includes systems used in hiring and recruitment — any AI that filters CVs, scores candidates, or makes employment decisions. It includes AI used in education to evaluate students. It includes AI used in credit scoring, access to essential services, law enforcement, and border management. It includes AI embedded in safety-critical infrastructure.
If your company uses an AI tool to screen job applications, score leads by creditworthiness, evaluate employee performance, or make decisions that affect people’s access to services — there is a reasonable chance that system falls into the high-risk category and requires a conformity assessment, detailed documentation, and EU registration before August 2.
The Transparency Rules Are Coming for Everyone
Article 50 of the Act — which requires AI-generated content to be labelled — does not apply only to high-risk systems. It applies broadly. Text, images, audio, and video generated by AI systems and presented to users must be disclosed as AI-generated in a clear and accessible manner.
This has significant implications for media, marketing, e-commerce, and any business that uses generative AI to produce content at scale. The rule does not say content must be deceptive or misleading before disclosure kicks in; it says AI-generated content must be labelled, full stop. That is a fundamentally different standard from the one most Western companies currently operate under.
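The Act does not prescribe a specific labelling format, so the mechanics are left to the deployer. As a rough sketch of the shape of the obligation, here is a hypothetical publishing step that attaches both a machine-readable flag and a visible notice to generated content; every field and function name below is our own invention, not anything the Act or any vendor specifies:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PublishedContent:
    """Hypothetical wrapper pairing content with an Article 50-style disclosure."""
    body: str
    ai_generated: bool
    generator: str | None = None     # tool or model that produced the content, if any
    generated_at: str | None = None

def publish(body: str, generator: str | None = None) -> PublishedContent:
    """Attach a machine-readable AI-generation disclosure before publication."""
    is_ai = generator is not None
    return PublishedContent(
        body=body,
        ai_generated=is_ai,
        generator=generator,
        generated_at=datetime.now(timezone.utc).isoformat() if is_ai else None,
    )

item = publish("Product description drafted by a language model.", generator="some-llm-v1")
if item.ai_generated:
    # The visible, human-readable half of the disclosure
    print(f"[AI-generated content, produced with {item.generator}]")
```

Pairing a visible notice with structured metadata covers both halves of “clear and accessible” disclosure: readable by people, detectable by machines.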
Frontier models are already moving into enterprise deployments, and the race to build and ship them is accelerating. The EU’s position is that all of this deployment needs to happen inside a regulatory framework. August 2 is when that framework gets teeth.
Why Most Companies Are Not Ready
Legal analysis published by Kennedy’s and Baker Botts in March 2026 indicates that a significant share of organizations deploying AI in Europe have not completed the documentation, risk assessments, and conformity evaluations required for high-risk applications. The compliance infrastructure that the Act demands — audit logs, human oversight mechanisms, data governance documentation — takes months to build properly, and the deadline is less than four months away.
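None of this infrastructure is exotic; it simply has to exist, and be populated, well before the deadline. As one illustration of what an audit log means at the record level, a minimal sketch (the schema is ours; the Act requires automatic event logging for high-risk systems but does not mandate these field names):

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(system_id: str, input_summary: str, output_summary: str,
                 human_reviewer: str | None = None) -> dict:
    """Build one append-only log entry for a decision made by a high-risk AI system.
    Illustrative schema only: the Act requires automatic event logging but does not
    prescribe field names."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_summary": input_summary,    # what the system was asked to decide
        "output_summary": output_summary,  # what it decided or recommended
        "human_reviewer": human_reviewer,  # who exercised oversight, if anyone
        "overridden": False,
    }

# An immutable log store belongs here in practice; a JSONL file stands in.
with open("decisions.jsonl", "a") as log:
    log.write(json.dumps(audit_record(
        "cv-screener-v2", "candidate 4411 CV", "advanced to interview",
        human_reviewer="recruiter-017")) + "\n")
```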
Part of the problem is classification uncertainty. The boundary between high-risk and lower-risk AI is not always obvious, and companies have been waiting for enforcement guidance before committing to compliance postures. That guidance is arriving just as the deadline approaches, leaving little runway to implement what it requires.
The Global Implications
The EU AI Act does not only apply to European companies. It applies to any company deploying AI systems that affect people in the EU — including American, Asian, and Nigerian companies whose products are used by European users. The territorial reach of the Act mirrors the approach the EU took with GDPR, which reshaped global data practices because non-compliance meant exclusion from the European market.
The AI Act is likely to have the same effect. Companies that want to operate in Europe will need to meet European standards. Those standards — transparency, human oversight, conformity assessment, risk documentation — will inevitably influence how AI is governed globally, because multinational companies cannot maintain two entirely separate compliance frameworks forever.
August 2 is not the end of the AI Act’s implementation. Some provisions phase in over six years. But it is the moment the Act stops being a policy document and starts being an enforcement mechanism. The fine schedule is real. The compliance requirements are real. The companies treating this as someone else’s problem are running out of time to change that position.