AI Disclosure Kit

EU AI Act Article 50 — what you actually need to do

Article 50 of Regulation (EU) 2024/1689 imposes transparency obligations on four categories of AI systems. It's enforceable from 2 August 2026. Here's the practical summary for builders.

The four categories

50(1) — Chatbots and AI assistants interacting with humans

If your system talks to people (chatbot, voice assistant, AI agent), users must be told they're interacting with an AI — unless it's obvious. "Obvious" is a narrow exception; default to disclosure.
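The 50(1) duty can be sketched in code. This is a minimal illustration, assuming a simple turn-based chat loop; the function name and disclosure wording are ours, not prescribed by the Act.

```python
# Hypothetical helper: attach an AI-interaction disclosure to the first
# reply in a conversation. Wording is illustrative, not mandated text.

DISCLOSURE = "You are chatting with an AI assistant, not a human."

def with_disclosure(reply: str, is_first_turn: bool) -> str:
    """Prepend the disclosure once, at the start of the conversation."""
    if is_first_turn:
        return f"{DISCLOSURE}\n\n{reply}"
    return reply
```

The point is structural: disclosure should be a default applied by the serving layer, not something each prompt author remembers to add.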

50(2) — Synthetic content generators

If your AI produces synthetic audio, image, video, or text, outputs must be marked in a machine-readable format (e.g., C2PA metadata, SynthID watermark). A visible disclosure alone is not sufficient; the marking itself must be machine-readable.
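To make "machine-readable marking" concrete, here is a stand-in sketch using only the standard library: a JSON manifest that binds an asset's hash to an AI-generated claim. The format is hypothetical; real deployments should emit a recognised standard such as C2PA rather than this ad-hoc structure.

```python
# Hypothetical provenance manifest (stand-in for a real standard like C2PA).
import hashlib
import json

def mark_synthetic(asset_bytes: bytes, generator: str) -> str:
    """Return a JSON manifest binding the asset's hash to an AI-generated claim."""
    manifest = {
        "claim": "ai-generated",  # the machine-readable disclosure
        "generator": generator,   # tool or model name (illustrative field)
        "sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    return json.dumps(manifest)
```

Hashing the asset ties the claim to specific bytes, so the disclosure survives being stored or transmitted separately from the file; embedded metadata or watermarks achieve the same binding in-band.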

50(3) — Emotion recognition & biometric categorisation

If you deploy systems that recognise emotions or categorise people biometrically, affected individuals must be informed AND you must have a valid GDPR processing ground. This is often more legally complex than the disclosure itself.

50(4) — Deepfakes & AI-generated public-interest text

Deepfake content must be disclosed as artificially generated. AI-generated text published "to inform the public on matters of public interest" must also be disclosed (narrow scope — journalism, public health, government).

Penalties

Article 50 violations: up to €15 million or 3% of worldwide annual turnover, whichever is higher (Article 99(4)). For SMEs and startups, the cap is whichever of the two is lower. Enforcement begins 2 August 2026.
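The cap arithmetic is easy to get backwards, so here it is as a hypothetical helper, assuming the Article 99(4) tier (€15 million or 3% of worldwide annual turnover) and the SME rule that takes the lower of the two:

```python
# Fine-cap arithmetic for an Article 50 violation (illustrative helper).

CAP_EUR = 15_000_000
CAP_PCT = 3  # percent of worldwide annual turnover

def fine_cap(turnover_eur: float, is_sme: bool = False) -> float:
    """Higher of the two caps for most companies; lower of the two for SMEs."""
    pct_amount = turnover_eur * CAP_PCT / 100
    return min(CAP_EUR, pct_amount) if is_sme else max(CAP_EUR, pct_amount)
```

For example, a non-SME with €1 billion turnover faces a cap of €30 million (3% dominates), while an SME with €100 million turnover faces €3 million (the lower figure).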



Sources: Regulation (EU) 2024/1689, Article 50; European Commission Code of Practice on Transparency (draft November 2025; final expected June 2026); Center for Data Innovation SME compliance survey (late 2025).