Cloud AI = Third-Party Data Processing

The first thing to understand is that using a cloud AI tool is not the same as using a piece of software that runs on your computer. When you send a document to a cloud AI service for analysis — or paste client data into a prompt — you are transmitting that data to the AI provider's servers. The AI provider is processing your data on their infrastructure.

Under UK GDPR and its EU equivalent, this arrangement makes the AI provider a data processor under Article 28. That creates specific obligations:

- A written data processing agreement (DPA) with the provider that meets the Article 28(3) requirements.
- A Data Protection Impact Assessment (DPIA) where the processing is likely to result in high risk to individuals.
- An identified and documented lawful basis for the processing, including any international transfer.

Most businesses using cloud AI tools casually — pasting client data into ChatGPT, Claude, or similar — have not completed a DPIA, have no DPA in place with the provider, and have not assessed the lawful basis for the data transfer. This is not a minor technicality. It is a systematic compliance failure.

ICO Enforcement Priority

The ICO has explicitly identified AI-related data processing as an enforcement priority. In 2025 it opened investigations into several organisations for using AI tools to process personal data without an adequate legal basis. The message is clear: "we did not realise it was GDPR-relevant" is not a defence.

What Data Cloud AI Tools Actually See

It is worth being concrete about what the AI provider receives when you use their service. A cloud AI tool processes:

- The full text of every prompt you submit and every document you upload, including any personal data they contain (client names, case details, health or financial information).
- Account and usage metadata: who sent the request, when, and from where.
- Depending on the provider's terms, retained conversation history, which in some tiers may be used for model training.

Some AI providers offer "zero data retention" modes or enterprise tiers that contractually prevent training on your data. Even with those controls in place, the data is still transmitted to and processed on the provider's servers — the third-party processing relationship still exists, and the DPA requirement still applies.
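To see why, consider what any cloud AI call looks like at the network level. The sketch below is illustrative only: the endpoint and payload shape are hypothetical, but the pattern is the same for every provider. The request body containing your document leaves your infrastructure before any server-side retention setting can apply.

```python
# Illustrative sketch only: the endpoint is hypothetical, but the pattern
# holds for any cloud AI API. The document travels over the network to the
# provider's servers regardless of what retention mode applies once it
# arrives there.
import requests

API_URL = "https://api.example-ai-provider.com/v1/completions"  # hypothetical

with open("client_contract.txt") as f:
    document = f.read()

# At this line, the full document (and any personal data it contains)
# leaves your infrastructure. The third-party processing relationship
# begins here, whatever the provider does with the data afterwards.
response = requests.post(
    API_URL,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"prompt": f"Summarise this contract:\n\n{document}"},
    timeout=30,
)
print(response.json())
```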

The International Transfer Problem

The major cloud AI providers are US-based companies. When you send data to their services, it is almost certainly processed on servers in the United States or other non-UK/EU jurisdictions. This creates a cross-border data transfer that requires one of the following legal bases under UK GDPR:

- UK adequacy regulations covering the destination (for US transfers, the UK Extension to the EU-US Data Privacy Framework, where the provider is certified).
- Appropriate safeguards: in practice, the ICO's International Data Transfer Agreement (IDTA) or EU Standard Contractual Clauses (SCCs) with the UK Addendum.
- One of the narrow Article 49 derogations, which are rarely appropriate for routine business processing.

Some large AI providers have put SCCs in place as part of their enterprise offering. But the default terms for consumer or SME tiers of cloud AI products typically do not include the required transfer mechanism documentation. Using these products to process personal data without checking the transfer basis is a compliance gap.

Legitimate Interest vs. Consent for AI Processing

Organisations that have thought about the lawful basis for their AI data processing often land on Legitimate Interest as the most practical option. This requires a three-part test: a legitimate interest must be identified, the processing must be necessary for that interest, and a balancing test must confirm the interest is not overridden by the data subjects' rights and freedoms.

For routine business processing — using AI to summarise internal documents, draft correspondence, or process non-sensitive data — Legitimate Interest may well be an appropriate basis. But it requires a documented Legitimate Interest Assessment (LIA), and it does not resolve the Article 28 processor obligations or the international transfer requirements.
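To make the documentation requirement concrete, the three-part test can be captured as a simple structured record. The structure and field names below are our own illustration, not an ICO-prescribed format:

```python
# Illustrative sketch of a documented Legitimate Interest Assessment (LIA).
# The structure and field names are illustrative, not an ICO template.
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    processing_activity: str   # what the AI tool is doing with the data
    legitimate_interest: str   # part 1: the interest being pursued
    necessity: str             # part 2: why the processing is necessary
    balancing_test: str        # part 3: why data subjects' rights don't override
    date_assessed: str
    assessor: str

lia = LegitimateInterestAssessment(
    processing_activity="Summarising internal meeting notes with an AI tool",
    legitimate_interest="Operational efficiency in internal record-keeping",
    necessity="Manual summarisation is impractical at current volume",
    balancing_test="Internal, non-sensitive data; minimal impact on data subjects",
    date_assessed="2025-01-15",
    assessor="Data Protection Lead",
)
```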

For special category data under Article 9 (health data is the clearest example), Legitimate Interest alone is insufficient: explicit consent or another Article 9(2) condition is also required. Data about criminal proceedings falls under Article 10 and carries its own conditions, and financial data, while not special category, often carries separate confidentiality obligations. This makes using cloud AI tools for these categories particularly problematic.

Sectors Most at Risk

Legal
Client confidentiality obligations plus criminal offence data (legal proceedings) under Article 10. SRA obligations apply, and legal professional privilege is at risk if client data is transmitted to third parties.
Healthcare
Health data is special category data under Article 9. Patient records processed by cloud AI require explicit consent or another Article 9(2) condition. ICO scrutiny is high.
Finance
FCA-regulated firms have additional confidentiality obligations. Using client financial data in cloud AI without proper controls creates dual regulatory exposure.
HR and Recruitment
Employee and candidate data are frequently processed with AI tools, and special category data (health, disability) and criminal records data are common in the HR context.

Why On-Premise Eliminates the Risk

On-premise AI deployment changes the data protection analysis fundamentally. When the AI model runs on your own server, within your own infrastructure, no personal data is transmitted to any third party. There is no data processor under Article 28 because the AI is not operated by a third party — it is operated by you, on your hardware.

This eliminates:

- The Article 28 processor relationship, and with it the requirement for a DPA with an AI provider.
- The international transfer problem: no data leaves your infrastructure, so no IDTA, SCCs, or adequacy analysis is needed for the AI processing.
- Third-party retention and training risk: no external provider ever holds, stores, or trains on your data.
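As a sketch of what this looks like in practice, the following runs an open-weights summarisation model entirely on local hardware using the Hugging Face transformers library. The model directory and file names are illustrative, and the sketch assumes the weights have already been downloaded to local disk; the point is that the document is processed entirely in local memory, with no outbound network call.

```python
# A minimal on-premise inference sketch using Hugging Face transformers.
# The model directory is illustrative; it assumes open-weights files are
# already on local disk, so no network access is needed at inference time.
from transformers import pipeline

# Loads weights from local disk; the model runs on this machine's CPU/GPU.
summariser = pipeline("summarization", model="./models/local-summariser")

with open("client_letter.txt") as f:
    text = f.read()

# The document is processed in local memory only: no API call, no
# transmission to a third-party server, no Article 28 processor.
result = summariser(text, max_length=150, min_length=40)
print(result[0]["summary_text"])
```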

The Design Principle

On-premise AI is privacy by design, the principle enshrined in Article 25 UK GDPR as data protection by design and by default. Rather than putting controls around a third-party processing arrangement, the architecture removes the third-party processing relationship entirely. This is the strongest possible compliance position.
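A rough way to evidence this in practice is a smoke test run on the AI host itself, confirming that outbound internet connections fail. This is an illustrative sketch, not a substitute for reviewing the actual firewall configuration or a proper audit:

```python
# Rough illustrative smoke test, run on the AI host: if outbound connections
# fail, the inference environment cannot transmit data to external servers.
# Not a substitute for firewall configuration review or a formal audit.
import socket

def outbound_blocked(host: str = "example.com", port: int = 443,
                     timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded: egress is possible
    except OSError:
        return True       # refused or timed out: egress appears blocked

if __name__ == "__main__":
    print("Outbound egress blocked:", outbound_blocked())
```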

Practical Compliance Checklist

If your organisation is using AI tools to process personal data, use this checklist to identify your current compliance gaps:

AI and GDPR Compliance Checklist

- Have you mapped which AI tools in your organisation process personal data, and what data each receives?
- Is a data processing agreement (DPA) in place with every cloud AI provider you use?
- Has a DPIA been completed where the AI processing is likely to result in high risk?
- Have you identified and documented a lawful basis, including a Legitimate Interest Assessment where you rely on legitimate interest?
- Have you verified the international transfer mechanism (adequacy, IDTA, or SCCs with the UK Addendum) for each provider?
- Is special category and criminal offence data excluded from cloud AI tools, or covered by a specific Article 9 or Article 10 condition?
- If retention or training opt-outs are relied on, are they documented in the provider's contractual terms rather than assumed?

Getting to a Compliant Position

For most UK businesses processing sensitive or regulated data, the cleanest path to GDPR AI compliance is on-premise deployment. It removes the most significant compliance obligations at the architecture level rather than requiring ongoing management of complex third-party data sharing arrangements.

Our GDPR-compliant AI deployment service includes documentation of the data architecture confirming that personal data remains within your infrastructure. You can use this documentation to demonstrate compliance to the ICO, to clients, and to your professional regulator if required.

If your business operates in a regulated sector, read our guide to AI for UK law firms and our overview of on-premise AI in the UAE for context on how different sectors and jurisdictions are addressing data sovereignty. You can also review our deployment process or book a free consultation to discuss your specific GDPR AI compliance requirements.