The European Parliament has voted by a large majority to delay key compliance deadlines under the EU AI Act and back a ban on nudify apps, in a dual move that extends breathing room for industry while tightening protections against image-based abuse.
The EU AI Act, which entered into force in August 2024, is the world's first comprehensive legal framework for artificial intelligence. It takes a risk-tiered approach: the greater the potential harm, the stricter the regulatory requirements. But translating that framework into operational compliance has proved more complex — and slower — than the original timeline anticipated.
High-Risk Deadlines Pushed to Late 2027
Under the measures approved by Parliament, developers of high-risk AI systems — those deemed to pose a "serious risk" to health, safety, or fundamental rights — would now have until December 2027 to meet compliance requirements, according to The Verge. That represents a significant extension from earlier deadlines embedded in the Act's original rollout schedule.
Companies building AI systems that fall under sector-specific safety legislation — covering products such as medical devices and toys — face an even longer runway, with a proposed deadline of August 2028. These extensions reflect the layered complexity of aligning AI rules with pre-existing product safety regimes that themselves carry their own compliance obligations.
The votes signal both political support for the landmark law and a candid acknowledgment that the original timeline was optimistic.
The European Parliament's approval by a large majority indicates broad cross-party consensus on the need for adjustment, rather than a fractious compromise. That political cohesion matters: it reduces the risk that the delays will be read as a weakening of legislative intent.
Nudify App Ban Wins Parliamentary Backing
Alongside the deadline extensions, Parliament backed proposals to ban nudify apps — AI tools that generate synthetic non-consensual nude images of real people, typically women. These applications have proliferated rapidly and are increasingly implicated in image-based sexual abuse and harassment.
The ban, if enacted into binding law, would operate within the EU's jurisdiction — covering apps offered to users in the 27 member states, regardless of where the developer is based. The measure aligns with the EU's broader Digital Services Act framework, which places obligations on platforms to remove illegal content, though the AI Act's explicit prohibition would add a distinct legal hook targeting the tools themselves rather than only their distribution.
Parliamentary votes on these measures represent legislative approval of the direction of travel, but the precise enforcement mechanisms and binding legal text still require finalization through the EU's regulatory process. The European Commission retains the role of issuing delegated acts and guidance that will determine how the rules are applied in practice.
What the Delays Mean for Developers
For AI developers currently mapping their compliance roadmaps, the extensions provide material relief. High-risk AI systems — which include applications in hiring, credit scoring, critical infrastructure, and law enforcement — require extensive documentation, human oversight mechanisms, and conformity assessments. Building those capabilities takes time and, for smaller developers, significant resources.
The sector-specific August 2028 deadline is particularly significant for medtech and consumer electronics companies, which must already navigate CE marking and product safety legislation. Harmonizing those regimes with the AI Act's requirements is a genuine technical and legal challenge, and the longer deadline reflects lobbying from industry associations that argued the original schedule was unworkable.
Watermarking requirements — rules obliging providers to label AI-generated content — are also part of the package under consideration, according to The Verge, though full details of that provision were not available in the published source material.
Enforcement Remains the Central Question
The EU AI Act assigns enforcement responsibility to national market surveillance authorities, coordinated through the newly established European AI Office, which sits within the European Commission. For general-purpose AI models, the AI Office holds direct supervisory authority — a significant centralization of power at the EU level.
Delays to compliance deadlines do not affect the prohibition on unacceptable-risk AI systems, which took effect in February 2025. That category includes social scoring, real-time biometric surveillance in public spaces (with narrow exceptions), and AI systems that manipulate people subliminally. Those bans are already legally operative across the bloc.
The nudify app ban, once formally adopted, would likely fall under the unacceptable-risk or prohibited category, giving national authorities and the AI Office grounds to act against non-compliant services — including those offered by non-EU companies to European users.
What This Means
For businesses building AI in or selling into Europe, the extended deadlines offer more time but not indefinite deferral. The EU AI Act's core obligations remain on course, and the nudify ban signals that Parliament is prepared to draw hard lines where AI tools cause direct harm to individuals.
