EU AI Act Article 5 Enforcement: First Wave of Compliance Questions

Article 5 has been enforceable for a month. In December I wrote that the early enforcement period would be more about tone-setting than about test cases. That has proven mostly correct. What has surprised me is how quickly the interpretive questions have crystallized — and how many of them are not what I expected.

The Commission's February guidance

The Commission published draft guidance on Article 5 on February 4, two days after the prohibitions took effect: slightly later than industry had hoped, but earlier than the Brussels timeline usually delivers. The draft is open for consultation until April. At 140 pages, it is the most useful single document yet produced on the prohibition regime.

The substantive interpretive highlights include the Commission's positions on the "objective or effect" formulation in 5(1)(a) and (b), on cross-context social scoring, and on the scope of the emotion-inference prohibition; the first and third are taken up below.

The guidance also includes an annex of forty-three example use cases with the Commission's classification analysis. The annex will be the most-cited document in compliance memos for the next several years, regardless of what shape the final guidance takes.

Early complaints

National market surveillance authorities have begun receiving complaints. Several patterns have emerged:

  1. NGO-driven complaints against workplace surveillance vendors. The European Trade Union Confederation has filed coordinated complaints in five member states against three vendors of webcam-based attention-monitoring tools. These are squarely within Article 5(1)(f) on a straightforward read; we expect at least one near-term enforcement action.
  2. Targeted-advertising complaints. Privacy NGOs (NOYB, EDRi) have filed complaints framing certain ad-tech targeting practices as "manipulation" within 5(1)(a). These are stretches; national authorities will likely not pursue them as Article 5 violations, though they may redirect them to GDPR or DSA enforcement.
  3. Facial-image scraping complaints. Two complaints against U.S.-based facial-recognition vendors for continuing to operate in the EU or serve EU clients. These were predictable, and we expect enforcement action.
  4. Public-sector deployments. A complaint against a German municipality's use of CCTV-based crowd-density analysis at public events. This raises hard questions: whether crowd-density analysis constitutes "real-time" remote biometric identification under 5(1)(h) at all, and if it does, whether the public-safety exception covers this use.

The interpretive questions I did not expect

Three issues have emerged that I did not flag in December and that compliance teams should be paying attention to.

The "or effect" problem. Article 5(1)(a) and (b) prohibit AI systems with the "objective or effect" of materially distorting behavior. The "or effect" formulation is a strict-liability hook that the Commission guidance does not engage with cleanly. If your AI system has an unintended manipulative effect on a vulnerable user — without any design intent to that end — are you in violation? The guidance says intentionality is "not required," but in practice some causal contribution test is going to need to fill the gap. Until it does, the conservative posture is that any high-engagement consumer-facing AI feature is subject to a "manipulative effect" inquiry on first complaint.

The B2B carve-out that isn't. Several large enterprise software vendors had assumed that B2B AI products were largely outside Article 5 because the prohibitions reference "natural persons." That intuition is half-right: the prohibitions turn on whom the system acts on, not on who buys it. Many enterprise AI products operate on or affect natural persons. A workplace productivity tool that infers employee emotion, even if sold B2B, is squarely inside the emotion-inference prohibition. Enterprise procurement teams should be asking pointed questions about the Article 5 status of vendor products.

Foundation model providers' upstream exposure. A genuine puzzle. If a frontier model is fine-tuned by a deployer into a system that violates Article 5, is the upstream model provider liable? On a strict reading, no — Article 5 obligations attach to the placing on the market or use of the prohibited system, not to the provision of upstream components. But Article 25 contemplates that a downstream party who "substantially modifies" a system can become the provider for compliance purposes, leaving the original provider's exposure unclear. The Commission guidance does not directly address this. We expect litigation and clarifying guidance.

What enforcement will look like for the rest of 2025

The first formal enforcement decisions will probably come in Q3, led by the most legible violations: workplace emotion-inference and facial-image scraping. Penalties in the early cases will probably be calibrated to send a signal rather than to maximize the fine. Expect mid-eight-figure fines on the largest violators rather than maxed-out cases at the 7%-of-worldwide-turnover ceiling.

National authorities will diverge. The French CNIL and the German BNetzA are well-resourced and have signaled active enforcement intent. Several smaller member states have not yet designated their market surveillance authorities; enforcement in those jurisdictions will lag for some time. For multinational compliance, the question is no longer "what is the EU AI Act position" but "what is the position of the most aggressive national authority," which is mostly going to be France or Germany.

Action items at the one-month mark

If you have not already done so, circulate the Commission's draft guidance to your AI inventory team and run the forty-three-example annex against your own product portfolio. The biggest avoidable error in the next few months will be classification work that did not anticipate the Commission's interpretive choices on "or effect," cross-context scoring, and emotion-inference scope. Your January memos are probably out of date.
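For teams tracking that mapping in scripts rather than spreadsheets, here is a minimal sketch of the screening exercise in Python. Everything in it is hypothetical: the flag names, field names, and toy inventory are illustrative stand-ins for your own mapping against the Commission's annex, not anything the Commission has published.

```python
from dataclasses import dataclass, field

# Hypothetical exposure flags, loosely tracking the prohibition headings
# discussed in this post. A real taxonomy should follow the Commission's
# annex categories; these names are illustrative only.
ARTICLE_5_FLAGS = {
    "emotion_inference_workplace",   # 5(1)(f) territory
    "facial_image_scraping",         # 5(1)(e) territory
    "social_scoring_cross_context",  # 5(1)(c) territory
    "realtime_remote_biometric_id",  # 5(1)(h) territory
}

@dataclass
class Product:
    name: str
    b2b_only: bool        # recorded, but deliberately ignored by screen():
                          # selling B2B does not exempt (see carve-out above)
    consumer_facing: bool
    high_engagement: bool
    tags: set = field(default_factory=set)

def screen(products):
    """Return (product, reasons) pairs needing an Article 5 review."""
    flagged = []
    for p in products:
        reasons = sorted(p.tags & ARTICLE_5_FLAGS)
        # Conservative posture from the "or effect" discussion: every
        # high-engagement consumer-facing feature gets a manipulative-effect
        # inquiry even when no explicit flag is set.
        if p.consumer_facing and p.high_engagement:
            reasons.append("manipulative-effect inquiry under 5(1)(a)/(b)")
        if reasons:
            flagged.append((p, reasons))
    return flagged

if __name__ == "__main__":
    inventory = [
        Product("MeetingFocus", b2b_only=True, consumer_facing=False,
                high_engagement=False, tags={"emotion_inference_workplace"}),
        Product("FeedRanker", b2b_only=False, consumer_facing=True,
                high_engagement=True),
    ]
    for product, reasons in screen(inventory):
        print(f"{product.name}: review for {', '.join(reasons)}")
```

The point is the shape of the exercise, not the code: every product in the inventory gets an explicit Article 5 disposition, and the conservative "or effect" triage is applied mechanically rather than left to case-by-case judgment.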