The EU AI Act entered into force in August 2024. By August 2026, most of its requirements will apply. If your firm uses AI tools in any meaningful way, that timeline is now.
This is not a piece about whether you should be worried. You should take it seriously. But I also see a lot of unnecessary panic out there. So here is a plain-language explanation of what the Act actually requires, what it does not, and why some of this is good news.
What the Act does
The AI Act is risk-based regulation. It sorts AI systems into four buckets:
- Unacceptable risk: banned outright. Social scoring, real-time biometric surveillance in public spaces. The dystopian stuff.
- High risk: serious compliance requirements. Medical devices, legal decisions, credit scoring, employment screening.
- Limited risk: transparency obligations. Your chatbot needs to say it is a chatbot. Fair enough.
- Minimal risk: largely unregulated. Spam filters, AI in games, the autocomplete in your email.
Most professional service firms are looking at high risk or limited risk, depending on what their AI is doing.
What counts as high risk?
The formal definition is deliberately broad. In practice, high-risk uses in professional services include:
- Using AI to assist in legal judgements or dispute resolution
- Using AI for credit risk assessment
- Using AI in recruitment or HR decisions
- Using AI in healthcare diagnosis or treatment recommendations
Now here is where it gets nuanced. If your firm uses AI to assist these processes rather than make final decisions, the classification becomes less clear. The Act focuses on intended purpose, not just output. That distinction matters.
There are also four exceptions that keep an AI system out of the high-risk category, even when it operates in a high-risk domain:
- It performs a specific procedural task, like structuring a questionnaire
- It improves quality, like making text more readable
- It detects anomalies, flagging unusual patterns for a human to review
- It handles preparatory work, like guiding someone through an intake process
Most AI tools that knowledge workers use every day fall into these categories. That matters, because it means the Act is far less restrictive for everyday work than the headlines suggest.
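To make the triage concrete, here is a minimal sketch in Python of how a firm might do a first pass over its tools. The tier names mirror the Act's structure as described above, but the function and the example are hypothetical illustrations, not a legal test.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "full compliance requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

def triage(high_risk_domain: bool, exception_applies: bool,
           user_facing: bool) -> RiskTier:
    """Hypothetical first-pass triage; not a substitute for legal review."""
    # A tool in a high-risk domain escapes the high-risk tier only if one
    # of the four exceptions above (procedural task, quality improvement,
    # anomaly detection, preparatory work) genuinely applies.
    if high_risk_domain and not exception_applies:
        return RiskTier.HIGH
    if user_facing:  # e.g. a chatbot must disclose that it is a chatbot
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: an AI assistant that guides clients through a law firm's intake
# process. High-risk domain, but the preparatory-work exception applies.
print(triage(high_risk_domain=True, exception_applies=True, user_facing=True))
# RiskTier.LIMITED
```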
What the Act does not do
Let me be clear about this one, because I hear a lot of confusion.
The Act does not prohibit using AI at work. It does not require you to stop using ChatGPT for drafting emails. It does not mandate specific technical architectures.
What it does require, for high-risk systems:
- A conformity assessment before deployment
- Registration in an EU database
- Ongoing human oversight
- Technical documentation
- Logging of system inputs and outputs where relevant
That is a serious list. But it is manageable, especially if you make smart choices about your AI infrastructure upfront.
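The logging point in particular is cheap to get ahead of. Here is a minimal sketch, assuming a generic `call_model` callable of your own; it records each prompt and response with a timestamp. The field names are illustrative, not prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit trail of model inputs and outputs.
audit = logging.getLogger("ai_audit")
audit.addHandler(logging.FileHandler("ai_audit.log"))
audit.setLevel(logging.INFO)

def logged_completion(call_model, prompt: str, *, user: str, tool: str) -> str:
    """Wrap any model call so inputs and outputs leave an audit trail."""
    response = call_model(prompt)
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,      # which AI system was used
        "user": user,      # who used it (pseudonymise where GDPR requires)
        "prompt": prompt,
        "response": response,
    }))
    return response
```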
The data residency question
Here is where the AI Act and GDPR shake hands. If your AI use involves personal data (client names, financial records, medical information) then GDPR requirements around data transfers apply in full.
Running your AI on EU infrastructure does not automatically satisfy the AI Act. But it simplifies the compliance picture by keeping data residency straightforward.
And this is where I see many organisations trip up. They pick an AI tool first and think about data residency second. Do it the other way around. Know where your data lives before you start feeding it into anything.
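One cheap safeguard is to make "where does the data go" a checked property of your setup rather than an assumption. A minimal sketch, with a hypothetical allowlist; the hostnames are placeholders, not a verified list of EU providers.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: endpoints you have verified run on EU infrastructure.
EU_HOSTED_ENDPOINTS = {
    "api.eu-inference.example",   # placeholder hostname
    "llm.internal.yourfirm.eu",   # your own self-hosted deployment
}

def assert_eu_residency(endpoint_url: str) -> None:
    """Fail loudly before any data leaves for an unverified host."""
    host = urlparse(endpoint_url).hostname
    if host not in EU_HOSTED_ENDPOINTS:
        raise RuntimeError(f"{host!r} is not on the verified EU allowlist")

assert_eu_residency("https://llm.internal.yourfirm.eu/v1/chat")  # passes
```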
For a deeper look at where your data actually goes with standard AI tools, read our data residency explainer.
What about open source AI?
The AI Act creates specific exemptions for AI systems and models released under open source licences. But here is what you actually need to know: those exemptions are aimed at model makers, not at you as a deployer.
If you are a law firm running Llama for document review or a financial advisor using Mistral for client communications, the open source exemption does not change your compliance obligations. Your requirements as a deployer are identical whether you use an open source or proprietary model.
Where the choice does matter is on the practical side. Open source gives you visibility into how a model works, which makes documentation and transparency requirements easier to satisfy. It gives you control over where the model runs, which simplifies data residency. And it means you are not dependent on a vendor’s compliance roadmap. You can verify and demonstrate conformity yourself.
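As an example of what that control looks like in practice, here is a minimal sketch of running an open-weights model on your own hardware with the Hugging Face transformers library. The model ID is just one example of an openly licensed model; size the model to your hardware.

```python
# pip install transformers torch accelerate
from transformers import pipeline

# Weights are downloaded once and run locally: no client data leaves
# infrastructure you control, and you can document exactly what runs where.
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weights model
    device_map="auto",
)

out = generator(
    "Summarise the key obligations of a deployer under the EU AI Act.",
    max_new_tokens=200,
)
print(out[0]["generated_text"])
```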
For firms in regulated industries, that level of control is not just convenient. It is a genuine advantage. If you are building your AI strategy entirely on closed, US-hosted models, you are making your own compliance life unnecessarily hard.
So what should you actually do?
- Classify your AI use. Map out every AI tool your organisation uses and determine whether it falls into high risk, limited risk, or minimal risk. For most day-to-day knowledge work, you will find you are in the clear.
- Get your data residency sorted. Know where your data goes. If personal data is involved, European infrastructure is the path of least resistance.
- Document what you do. Even for low-risk tools, having clear documentation of what AI you use, how you use it, and what oversight exists is just good practice. It protects you and it builds trust with clients. A sketch of what a register entry could look like follows this list.
- Write an AI policy. It does not need to be a 40-page legal document: a clear, honest statement of how your organisation uses AI and what guardrails are in place is enough. We published ours in the Voys Handbook; feel free to steal from it.
- Do not let compliance paralyse you. The worst response to the AI Act is to stop experimenting. The regulation is designed to make AI safer, not to make it disappear. Use it as a framework, not a roadblock.
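As promised above, here is what a minimal register entry could look like, expressed as a simple Python record. The fields are a suggestion, not a format the Act prescribes.

```python
# One entry in a lightweight AI register. Fields are illustrative only.
ai_register_entry = {
    "tool": "Self-hosted Mistral deployment",     # hypothetical example
    "purpose": "First-draft summaries of client correspondence",
    "risk_tier": "minimal",                       # per your own triage
    "personal_data": True,
    "data_residency": "Self-hosted, EU data centre",
    "human_oversight": "All output reviewed by the responsible advisor",
    "owner": "operations@yourfirm.example",       # placeholder contact
    "last_reviewed": "2025-01-15",
}
```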
This post is for informational purposes. It is not legal advice. For specific guidance on your organisation’s compliance obligations, consult a qualified legal professional.