CleanMyPrompt
2026-05-02 · CleanMyPrompt Team · 3 min read

EU AI Act Readiness Checklist for Teams Using ChatGPT, Claude, and Gemini

A practical May-August 2026 checklist to reduce legal and security risk when employees put personal or sensitive data into AI prompts.

Tags: eu-ai-act · compliance · gdpr · security

If your team uses ChatGPT, Claude, Gemini, or other AI assistants in day-to-day work, the highest-risk step is usually composing the prompt itself: the text an employee pastes in before any other control applies.

By August 2026, EU AI Act obligations become materially more operational for many organizations, while GDPR data minimization obligations already apply today. The most practical control you can implement now is a pre-submission prompt hygiene workflow.

This checklist is designed for operations, security, legal, and IT teams that need something actionable, not theoretical.

Who this checklist is for

Use this if your organization:

  • Operates in the EU or processes EU personal data
  • Allows employees to use AI assistants for support, legal, analytics, or internal documentation
  • Needs evidence of technical and organizational controls for audits

EU AI Act readiness checklist (May-August 2026)

1) Define what data is allowed in prompts

Create a one-page policy table and share it internally:

| Data Class | Allowed in AI Prompts? | Required Control |
| --- | --- | --- |
| Public | Yes | None |
| Internal | Yes | Basic cleaning |
| Confidential | Conditional | Redaction + review |
| Restricted / Special Category | No (or legal exception only) | Escalation workflow |

If teams do not have this table, usage becomes ad-hoc and impossible to audit.
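The table above is more useful when tooling can enforce it rather than trusting memory. A minimal sketch of the policy as a lookup (class names and control strings are illustrative; adapt them to your own classification scheme):

```python
# Policy table as code, mirroring the classification table above.
# Values are (status, required control); all names are illustrative.
POLICY = {
    "public":       ("yes",         "none"),
    "internal":     ("yes",         "basic cleaning"),
    "confidential": ("conditional", "redaction + review"),
    "restricted":   ("no",          "escalation workflow"),
}

def required_control(data_class: str) -> str:
    """Return the control a prompt needs, or raise if the class is disallowed."""
    status, control = POLICY[data_class.lower()]
    if status == "no":
        raise PermissionError(f"{data_class!r} data is not allowed in AI prompts")
    return control
```

Even a lookup this small gives you something auditable: a single place where the policy lives, versioned alongside your tooling.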

2) Enforce data minimization before AI submission

Before text enters any AI tool, remove what is not required for the task:

  • Names, emails, phone numbers
  • Account IDs, order IDs, national IDs
  • API keys, access tokens, private endpoints
  • Unnecessary timestamps and location details

Use a client-side step so content is cleaned locally before it leaves the employee's machine. For example, CleanMyPrompt's Remove PII from text tool can serve as the pre-submission checkpoint.
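To make the idea concrete, here is a hedged sketch of regex-based pre-submission cleaning. The patterns below are deliberately minimal and illustrative, not exhaustive: a real deployment needs far broader coverage (national ID formats, vendor-specific key prefixes, locale-aware phone rules).

```python
import re

# Illustrative patterns only — real PII/secret detection needs much more.
PATTERNS = {
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE":   re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "API_KEY": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches with [CATEGORY] placeholders; report what was found."""
    found = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label}]", text)
    return text, found
```

The second return value matters: the list of detected categories is exactly the evidence you will want for the audit step later in this checklist, without retaining the original text.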

3) Add multilingual redaction coverage

Many teams clean English text but miss non-English identifiers in real-world workflows.

At minimum, test your controls with:

  • English
  • German
  • French
  • Spanish
  • Italian

Typical misses in multilingual data include honorifics, local ID formats, and street/address variations.
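One way to keep multilingual coverage from regressing is a small fixture suite run against whatever redactor you deploy. The texts and expected categories below are examples only; a real suite should use sanitized samples from your own workflows, and the category names assume your redactor reports NAME/ID-style labels.

```python
# Illustrative multilingual fixtures: (language, sample text, expected categories).
SAMPLES = [
    ("de", "Herr Müller, Kundennummer 48213, hat angerufen.",        {"NAME", "ID"}),
    ("fr", "Mme Dubois habite 12 rue de la Paix à Paris.",           {"NAME", "ADDRESS"}),
    ("es", "El Sr. García envió su DNI 12345678Z por correo.",       {"NAME", "ID"}),
    ("it", "La Sig.ra Rossi ha chiamato dal numero 055 1234567.",    {"NAME", "PHONE"}),
    ("en", "Mr. Smith's order #9913 is linked to smith@example.com.", {"NAME", "EMAIL"}),
]

def run_suite(redactor):
    """Run each sample through the redactor; return (lang, missing) failures."""
    failures = []
    for lang, text, expected in SAMPLES:
        _, found = redactor(text)
        missing = expected - set(found)
        if missing:
            failures.append((lang, sorted(missing)))
    return failures
```

Run this whenever you change redaction rules; a non-empty result is your regression signal for exactly the honorific and local-ID misses described above.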

4) Include document workflows (PDF + scanned docs)

AI usage is not only chat text. Teams also paste text extracted from:

  • Invoices
  • Contracts
  • HR forms
  • Support attachments

Your workflow should cover both:

  • Text PDFs (layout reconstruction before redaction)
  • Scanned PDFs/images (OCR before redaction)

A safe sequence is: extract text -> clean/redact -> human review -> paste to AI.
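That sequence can be written as an explicit pipeline so no step can be skipped silently. In this sketch, `extract_text`, `redact`, and `review` are injected callables — all hypothetical names; wire in your own layout/OCR extraction, cleaning, and human-review hook.

```python
# Sketch of the extract -> clean -> review -> submit sequence with
# injected steps (all parameter names are illustrative).
def prepare_for_ai(raw: bytes, extract_text, redact, review) -> str:
    text = extract_text(raw)           # text PDF: layout reconstruction; scan: OCR
    cleaned, findings = redact(text)   # local PII/secret removal
    if not review(cleaned, findings):  # mandatory human checkpoint
        raise RuntimeError("human review rejected the cleaned text")
    return cleaned                     # only now is it safe to paste into an AI tool
```

The point of the structure is that the return value only exists after review passes; there is no code path that hands unreviewed text to the AI step.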

5) Keep an auditable record of controls

For compliance reviews, store lightweight evidence that controls exist and are used:

  • Timestamp of cleaning activity
  • Mode used (standard/squeeze/json)
  • Categories detected (PII/secrets)
  • Optional before/after token counts

You do not need to store original content to prove process maturity.
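A minimal audit record can be as small as one JSON line per cleaning event. The sketch below stores only metadata plus a hash of the cleaned output — hashing the cleaned text lets you link a record to a prompt later without retaining anything sensitive. Field names are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(cleaned: str, mode: str, categories: list[str],
                 tokens_before: int, tokens_after: int) -> str:
    """One JSON line of cleaning evidence; no original content is stored."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "mode": mode,                      # standard / squeeze / json
        "categories": sorted(categories),  # e.g. ["EMAIL", "PHONE"]
        "tokens_before": tokens_before,
        "tokens_after": tokens_after,
        "cleaned_sha256": hashlib.sha256(cleaned.encode()).hexdigest(),
    })
```

Appending these lines to a log file gives a reviewer timestamps, modes, and detected categories at a glance, which is usually all an auditor needs to see that the control is in routine use.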

6) Restrict direct copy-paste into AI when possible

Where practical, provide guardrails:

  • Browser extension for paste interception and local cleaning
  • Team guidance in onboarding docs and runbooks
  • Spot checks in high-risk departments (support, legal ops, finance)

The goal is not to block AI. The goal is to make safe behavior the default behavior.

7) Review third-party processor posture

If any step uploads content to external OCR, DLP, or prompt tools, document:

  • Data processing terms
  • Retention period
  • Deletion guarantees
  • Cross-border transfer position

If you can run critical cleaning locally in-browser, your risk surface drops significantly.

8) Define incident response for prompt leaks

Have a short response playbook ready:

  1. Confirm scope (what data, which system, which users)
  2. Revoke leaked keys/tokens immediately
  3. Preserve event logs for investigation
  4. Assess notification obligations with legal/privacy
  5. Patch workflow gap and retrain affected teams

Fast self-audit (10-minute version)

If you answer "No" to any of these, you have an immediate gap:

  • Do we have a written AI prompt data classification policy?
  • Do we clean prompts before external AI submission?
  • Can we handle multilingual PII reliably?
  • Do we cover scanned PDFs and images, not just plain text?
  • Can we show evidence that controls are used in practice?

Recommended implementation path

Week 1:

  • Publish policy table
  • Roll out a shared pre-submission cleaning workflow

Week 2:

  • Add multilingual test samples
  • Cover PDF/OCR path for document-heavy teams

Week 3:

  • Add lightweight audit logging and monthly review cadence

Week 4:

  • Run tabletop incident drill for an AI prompt leak scenario

Final note

This article is operational guidance, not legal advice. Legal interpretation depends on your organization, sector, and data flows.

If you want a practical starting point, run one real prompt through this sequence today: classify -> redact -> review -> submit.

Then standardize it across your team.

Try CleanMyPrompt

Strip PII, compress tokens, and clean text for AI — 100% in your browser. No sign-up required.

Try It Free