Cloud AI = Third-Party Data Processing
The first thing to understand is that using a cloud AI tool is not the same as using a piece of software that runs on your computer. When you send a document to a cloud AI service for analysis — or paste client data into a prompt — you are transmitting that data to the AI provider's servers. The AI provider is processing your data on their infrastructure.
Under UK GDPR and its EU equivalent, this arrangement makes the AI provider a data processor, and Article 28 attaches specific obligations to that relationship:
- You must have a Data Processing Agreement (DPA) with the AI provider
- The DPA must specify the subject matter, duration, nature, and purpose of the processing
- The processor may only process data in accordance with your documented instructions
- You must ensure the processor provides sufficient guarantees of GDPR compliance
- If the processor is based outside the UK or EU, additional international transfer safeguards apply
Most businesses using cloud AI tools casually — pasting client data into ChatGPT, Claude, or similar — have not completed a DPIA, have no DPA in place with the provider, and have not assessed the lawful basis for the data transfer. This is not a minor technicality. It is a systematic compliance failure.
The ICO has explicitly identified AI-related data processing as an enforcement priority. In 2025, the ICO opened investigations into several organisations for using AI tools to process personal data without adequate legal basis. The message is clear: "we did not realise it was GDPR-relevant" is not a defence.
What Data Cloud AI Tools Actually See
It is worth being concrete about what the AI provider receives when you use their service. A cloud AI tool processes:
- Your prompt text: which often contains names, addresses, financial details, health information, or other personal data you have provided for context
- Uploaded documents: contracts, reports, medical records, HR files — the full content of any document you share
- Conversation history: which accumulates over a session and may contain more personal data than any single message
- Metadata: usage patterns, timestamps, and in some cases IP addresses and device information
Some AI providers offer "zero data retention" modes or enterprise tiers that contractually prevent training on your data. Even with those controls in place, the data is still transmitted to and processed on the provider's servers — the third-party processing relationship still exists, and the DPA requirement still applies.
The International Transfer Problem
The major cloud AI providers are US-based companies. When you send data to their services, it is almost certainly processed on servers in the United States or other non-UK/EU jurisdictions. This creates a cross-border data transfer that requires one of the following legal bases under UK GDPR:
- An adequacy decision for the destination country (the UK-US Data Bridge — the UK extension to the EU-US Data Privacy Framework — partially addresses this for US providers enrolled in the framework)
- Standard Contractual Clauses (SCCs) incorporated into the DPA
- Binding Corporate Rules
- Explicit consent of the data subject (impractical for routine business processing)
Some large AI providers have put SCCs in place as part of their enterprise offering. But the default terms for consumer or SME tiers of cloud AI products typically do not include the required transfer mechanism documentation. Using these products to process personal data without checking the transfer basis is a compliance gap.
Legitimate Interest vs. Consent for AI Processing
Organisations that have thought about the lawful basis for their AI data processing often land on Legitimate Interest as the most practical option. This requires the ICO's three-part test: identify a legitimate interest (the purpose test), show that the processing is necessary to achieve it (the necessity test), and confirm that the interest is not overridden by the data subjects' rights and freedoms (the balancing test).
For routine business processing — using AI to summarise internal documents, draft correspondence, or process non-sensitive data — Legitimate Interest may well be an appropriate basis. But it requires a documented Legitimate Interest Assessment (LIA), and it does not resolve the Article 28 processor obligations or the international transfer requirements.
For special category data (such as health data) and criminal offence data (such as details of legal proceedings), Legitimate Interest alone is insufficient. An additional condition is required — under Article 9 for special category data, or Article 10 for criminal offence data — with explicit consent being the most common route. This makes using cloud AI tools for these categories particularly problematic. Financial data, while not special category data, is often sensitive enough that the balancing test becomes difficult to satisfy.
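The lawful-basis logic above can be summarised as a simple screening rule. The sketch below is illustrative only — the category names and the `legitimate_interest_may_suffice` function are our own simplification, not an ICO-defined schema, and it is no substitute for a documented LIA or legal advice:

```python
from enum import Enum, auto

class DataCategory(Enum):
    ROUTINE_BUSINESS = auto()   # internal documents, correspondence
    PERSONAL = auto()           # ordinary personal data: names, contact details
    SPECIAL_CATEGORY = auto()   # Article 9: health, biometric, etc.
    CRIMINAL_OFFENCE = auto()   # Article 10: legal proceedings, convictions

def legitimate_interest_may_suffice(category: DataCategory,
                                    lia_documented: bool) -> bool:
    """Rough screen: could Legitimate Interest plausibly serve as the
    Article 6 basis? Special category and criminal offence data always
    need an additional Article 9/10 condition, so LI alone never
    suffices there. Even for ordinary data, a documented LIA is a
    precondition. Deliberately simplified; not legal advice."""
    if category in (DataCategory.SPECIAL_CATEGORY,
                    DataCategory.CRIMINAL_OFFENCE):
        return False
    return lia_documented
```

The point of the sketch is the asymmetry: for ordinary personal data the answer depends on your documentation, while for Article 9/10 data the answer is "no" regardless of how thorough the LIA is.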
Why On-Premise Eliminates the Risk
On-premise AI deployment changes the data protection analysis fundamentally. When the AI model runs on your own server, within your own infrastructure, no personal data is transmitted to any third party. There is no data processor under Article 28 because the AI is not operated by a third party — it is operated by you, on your hardware.
This eliminates:
- The Article 28 DPA obligation (no third-party processor)
- The international transfer requirement (no data leaving your infrastructure)
- The risk of the AI provider changing their data handling terms
- The risk of the AI provider's systems being breached and your data compromised
- The need to explain to clients or regulators what third parties process their data
On-premise AI is privacy by design in the sense of Article 25 UK GDPR (data protection by design and by default). Rather than putting controls around a third-party processing arrangement, the architecture removes the third-party processing relationship entirely. This is the strongest possible compliance position.
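One way to make "no data leaves your infrastructure" enforceable in software is a guardrail that refuses to send prompts to any inference endpoint outside your own network. The sketch below is a minimal, assumed approach using only the Python standard library — the function name and the loopback/private-address heuristic are ours, and in practice this would complement, not replace, firewall and egress policy:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def endpoint_stays_on_premise(endpoint_url: str) -> bool:
    """Return True only if the inference endpoint resolves to a loopback
    or private (RFC 1918) address, i.e. requests never leave your own
    network. A coarse guardrail, not a substitute for network policy."""
    host = urlparse(endpoint_url).hostname
    if host is None:
        return False
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return addr.is_loopback or addr.is_private
```

An application could call this check before every model request, so that a misconfigured endpoint pointing at a public cloud API fails loudly instead of silently transmitting personal data to a third party.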
Practical Compliance Checklist
If your organisation is using AI tools to process personal data, use this checklist to identify your current compliance gaps:
- Identify all AI tools currently in use by your organisation
- Determine whether each tool processes personal data (hint: if it does anything useful, it probably does)
- For each cloud AI tool: confirm whether a DPA exists with the provider
- For each cloud AI tool: confirm the international transfer mechanism (adequacy, SCCs, or other)
- Confirm the lawful basis for AI data processing is documented (and is appropriate for the data categories involved)
- Complete a Data Protection Impact Assessment (DPIA) for high-risk AI processing
- Review your privacy notice — does it accurately describe AI-assisted processing?
- Assess whether any special category data is being processed by AI tools and whether Article 9 conditions are met
- Consider whether on-premise deployment would eliminate the identified compliance risks
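The checklist above lends itself to a structured inventory: one record per AI tool, with the compliance questions as fields. The sketch below shows one possible shape — the field names and the `compliance_gaps` helper are illustrative assumptions, not an official ICO schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AIToolRecord:
    """One entry in the AI tool inventory, mirroring the checklist."""
    name: str
    processes_personal_data: bool
    is_cloud_hosted: bool
    dpa_in_place: bool = False
    transfer_mechanism: Optional[str] = None  # e.g. "adequacy", "sccs"
    lawful_basis_documented: bool = False
    dpia_completed: bool = False

def compliance_gaps(tool: AIToolRecord) -> List[str]:
    """List the open checklist items for one tool."""
    gaps = []
    if not tool.processes_personal_data:
        return gaps  # the checklist only bites when personal data is involved
    if tool.is_cloud_hosted:
        if not tool.dpa_in_place:
            gaps.append("no DPA with provider")
        if tool.transfer_mechanism is None:
            gaps.append("no international transfer mechanism")
    if not tool.lawful_basis_documented:
        gaps.append("lawful basis not documented")
    if not tool.dpia_completed:
        gaps.append("no DPIA for high-risk processing")
    return gaps
```

Note how the two cloud-specific gaps (DPA, transfer mechanism) disappear for an on-premise tool by construction, which is the architectural point made above.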
Getting to a Compliant Position
For most UK businesses processing sensitive or regulated data, the cleanest path to GDPR AI compliance is on-premise deployment. It removes the most significant compliance obligations at the architecture level rather than requiring ongoing management of complex third-party data sharing arrangements.
Our GDPR-compliant AI deployment service includes documentation of the data architecture confirming that personal data remains within your infrastructure — documentation that you can use to demonstrate GDPR on-premise AI compliance to the ICO, to clients, and to your professional regulator if required.
If your business operates in a regulated sector, read our guides for AI for UK law firms and our overview of on-premise AI in the UAE for context on how different sectors are addressing data sovereignty. You can also review our deployment process or book a free consultation to discuss your specific GDPR AI compliance requirements.