Regulation
EU Council Votes to Delay High-Risk AI Rules by Up to 16 Months
The Council’s agreed position pushes back high-risk AI obligations to December 2027 for standalone systems and August 2028 for embedded ones, while adding new prohibitions on AI-generated non-consensual intimate imagery.
The Council of the European Union agreed its negotiating position today on a sweeping simplification of the AI Act, the bloc’s landmark artificial intelligence regulation. Part of the broader “Omnibus VII” package aimed at reducing regulatory burden on European businesses, the proposal delays the application of high-risk AI rules by up to 16 months — pushing the compliance deadline for standalone high-risk systems from August 2026 to December 2027, and for high-risk AI embedded in other regulated products to August 2028.
The delay is designed to give companies, particularly small and medium enterprises, more time to prepare for the complex compliance requirements that the AI Act imposes on systems deemed high-risk — those used in critical areas like healthcare, law enforcement, employment, and education. But the Council did not simply soften the regulation: it simultaneously added a new prohibition on AI systems used to generate non-consensual intimate images and child sexual abuse material, expanding the AI Act’s list of banned practices.
Trilogue negotiations with the European Parliament will now begin. The Parliament is expected to push back on the extent of the delay, with some MEPs arguing that loosening timelines sends the wrong signal at a moment when AI capabilities are advancing faster than regulators anticipated. Industry groups, meanwhile, broadly welcomed the extension. The next few months of negotiation will determine whether Europe’s AI rulebook arrives with enough teeth — and soon enough — to matter.