The European Union has moved a step closer to refining its landmark EU AI Act, with the European Council proposing new amendments aimed at simplifying regulations while addressing emerging risks from artificial intelligence.
On Friday, the Council released its position on updates to the EU AI Act, including a new ban on AI nudification tools and stricter standards around the use of sensitive personal data. The proposal is part of the broader “Omnibus VII” legislative package designed to streamline the EU’s digital regulatory framework and reduce compliance burdens for businesses.
While the changes are intended to make the rules more practical for companies, the latest amendments also reflect growing concerns about the misuse of AI technologies and the need for stronger safeguards.
EU AI Act Amendments Target Harmful AI Content
One of the most significant changes proposed under the updated EU AI Act is a new prohibition targeting AI tools capable of generating non-consensual sexual or intimate imagery.
According to the Council, the new provision explicitly bans “AI practices regarding the generation of non-consensual sexual and intimate content or child sexual abuse material.” The move comes as regulators across Europe increasingly confront the real-world harms caused by AI-generated deepfake content.
The proposal follows a similar step earlier this week when members of the European Parliament approved their own version of the ban. The alignment between the two bodies suggests that restrictions on AI nudification tools are likely to remain in the final version of the EU AI Act once negotiations conclude.
The push for stricter rules comes after a high-profile incident involving the Grok chatbot developed by xAI and integrated into the social platform X (formerly Twitter). Beginning in late December, the chatbot reportedly generated millions of non-consensual intimate images that quickly spread online, triggering widespread backlash.
In response, the European Commission launched a formal investigation into the platform and its AI features earlier this year.
For policymakers, the episode underscored the speed at which generative AI tools can create and distribute harmful content—and why the EU AI Act needs mechanisms to address such risks.
Changes to High-Risk AI System Regulations
Alongside the new prohibition, the proposed reforms also adjust the timeline for implementing rules on high-risk AI systems, a key component of the EU AI Act.
The European Commission previously suggested delaying the implementation of these rules by up to 16 months, allowing regulators time to develop the technical standards and tools needed to enforce them effectively.
Under the Council’s proposal, the revised deadlines would be:
- 2 December 2027 for stand-alone high-risk AI systems
- 2 August 2028 for high-risk AI systems embedded in products
These extensions aim to provide organizations with clearer guidance and sufficient preparation time while still ensuring that the regulatory framework remains enforceable.
At the same time, the Council reinstated a requirement for providers to register AI systems in the EU database for high-risk technologies, even when companies believe their systems qualify for exemptions. The measure is intended to strengthen transparency and oversight under the EU AI Act.
Stronger Safeguards for Sensitive Data
Another key amendment focuses on how organizations process sensitive personal data when developing or testing AI systems.
The Council’s proposal restores the “strict necessity” standard for using special categories of personal data in bias detection and correction processes. This means organizations must clearly justify why such data is required before using it to improve algorithmic fairness.
The change reflects ongoing debate within Europe about balancing innovation with strong privacy protections—particularly as AI systems rely on increasingly large datasets.
In addition, the updated EU AI Act proposal postpones the deadline for establishing national AI regulatory sandboxes until December 2027. These sandboxes are designed to allow companies to test AI technologies in controlled environments under regulatory supervision.
Simplifying Rules Without Weakening Oversight
The broader objective behind the proposed amendments is to simplify the complex network of digital regulations affecting businesses across the EU.
As part of the Digital Omnibus initiative, the European Commission has been working to reduce administrative burdens while improving the consistency of AI rules across member states.
Marilena Raouna, Deputy Minister for European Affairs of the Republic of Cyprus, emphasized the importance of balancing innovation with regulatory clarity.
“Streamlining the AI rules is essential for ensuring the EU’s digital sovereignty. As presidency, we worked on this proposal with urgency, reaching a swift agreement to facilitate the timely application of the AI act. The proposal will bring greater legal certainty, make the rules more proportionate and ensure more harmonised implementation across member states. We are ready to work with our co-legislators in our common efforts to support our companies, facilitate innovation and build a more competitive Europe.”
The Council’s proposal also introduces new guidance obligations for regulators. Under the revised EU AI Act, the European Commission would provide clearer instructions to help companies comply with high-risk AI requirements while minimizing compliance costs.
What Happens Next for the EU AI Act
With the Council having formally adopted its negotiating position, discussions move to the next stage: the proposal will be negotiated with the European Parliament to finalize the updated framework.
While the process may still involve revisions, the latest developments signal that Europe remains committed to shaping global AI governance through the EU AI Act—balancing innovation, business competitiveness, and safeguards against emerging technological risks.
As generative AI tools continue to evolve rapidly, the debate around how they should be regulated is far from over. But the Council’s latest proposal makes one thing clear: Europe is determined to tighten protections where AI misuse threatens privacy, safety, and trust in digital technologies.
