
You open ChatGPT, paste in a contract, and ask it to summarise the key risks. It works brilliantly. Thirty seconds later you have a clean summary.

Then someone in your legal team asks: where did that contract go?

It is a reasonable question. And the honest answer, for most AI tools, is: somewhere in the United States. Into a data centre operated by a company you did not choose, under terms of service that can change every few months, processed by systems you cannot audit.

What the terms of service actually say

Most AI providers are upfront about this if you read carefully. OpenAI, Anthropic, Google and Microsoft all process data on US infrastructure by default. Enterprise tiers with additional data handling agreements exist, but they cost significantly more and require negotiation.

The standard consumer and small-business tiers typically include clauses that allow the provider to use your data to improve its models. Opt-out mechanisms exist, but they require active configuration and are not always retroactive.

Why this matters for compliance

For businesses operating under GDPR, the picture is complicated. Transferring personal data to a third country requires either an adequacy decision for that country or appropriate safeguards such as standard contractual clauses. The US regained adequacy status in 2023 under the EU-US Data Privacy Framework, but its predecessors, Safe Harbour and Privacy Shield, were both struck down by the Court of Justice of the EU, and the current framework will face challenges of its own.

Beyond GDPR, sector-specific rules often go further. Medical records, legal correspondence, and financial data each carry their own regulatory frameworks with stricter requirements than general data protection law.

What private AI infrastructure actually means

Running AI on European infrastructure does not mean running worse AI. The underlying open-source models keep getting better. What changes is where inference happens: on servers within the EU, under EU law, without the data ever leaving a jurisdiction you understand.
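
To make that concrete, here is a minimal sketch of self-hosted inference with an open-weight model, using the open-source Hugging Face transformers library. The model name is illustrative rather than a recommendation, and this is not a description of Klai's stack; the point is simply that the prompt and the output never leave the machine the script runs on.

```python
# A minimal self-hosted inference sketch using the Hugging Face
# transformers library. After the one-off model download, everything
# below runs on your own (or EU-hosted) hardware; no prompt or output
# is sent to a third-party API.
from transformers import pipeline

# Illustrative open-weight model; any text-generation model from the
# Hugging Face Hub can be swapped in here.
generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
)

prompt = "Summarise the key risks in the following contract clause: ..."
output = generator(prompt, max_new_tokens=200)

# The completion was generated locally; nothing about this request
# appears in any provider's logs.
print(output[0]["generated_text"])
```

Run that machine inside an EU data centre and firewall it from the outside world once the weights are downloaded, and data residency stops being a contractual promise and becomes a physical fact.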

Klai runs on European infrastructure. Every query, every document, every session stays within the EU. We do not train on your data. We do not share it. We do not log it unless you ask us to.

That is not a policy that can change without your knowledge. It is the architecture.

If your organisation uses AI in a regulated context, the EU AI Act adds obligations of its own on top of data protection law. Read our plain-language guide to the EU AI Act.


If you are evaluating AI tools for a compliance-sensitive environment, the question to ask every vendor is simple: where does inference happen, and what are the contractual guarantees around data handling? The answer tells you everything.