AI Updates in the EU: How Europe is Shaping AI Regulations

The European Union is moving from general declarations to concrete action in regulating artificial intelligence. With the adoption of the AI Act in 2024, the EU laid the foundation for a major transformation of how AI is regulated. The Act, however, is only one part of a broader ecosystem gradually forming around AI. In this article, we explore the tools and initiatives accompanying the implementation of the new legislation, why they matter for businesses, and why you need to start preparing for the new rules now.
AI Continent Action Plan: A Strategy for Digital Transformation
In April 2025, the European Commission presented the AI Continent Action Plan – a large-scale initiative aimed at making Europe a competitive hub for artificial intelligence. The plan includes:
- The creation of AI Factories – centers that will help startups and businesses implement AI;
- The launch of up to five AI Gigafactories – powerful computational hubs for training models;
- Attracting around €200 billion in investment through a dedicated financial platform, InvestAI.
A key component of the plan is the development of regulatory and legal infrastructure, including the launch of the AI Act Service Desk and the preparation of new legislation – the Cloud and AI Development Act. This indicates that the EU is not only setting requirements but also creating the conditions for their practical implementation.
AI Act Service Desk: Support for Businesses
In April 2025, the European Commission announced a tender to establish the AI Act Service Desk – a consultation service that is expected to launch in the summer of 2025. This service will be a single point of access for companies seeking clarifications on how the AI Act applies, including:
- Determining whether their AI system is subject to regulation;
- Understanding technical standards and documentation requirements;
- Getting advice on complying with the new rules.
Special attention will be given to small and medium-sized businesses, which will receive step-by-step guidance for adapting to the AI Act. The service will operate at least until August 2027, with a possible one-year extension to 2028 depending on its effectiveness. This means businesses will have long-term access to regulatory support throughout the AI Act transition period.
Code of Practice for General-Purpose AI Systems (GPAI)
In April 2025, the AI Office published the third draft of the Code of Practice for General-Purpose AI Systems (GPAI). The document, developed with input from around 1,000 stakeholders, includes:
- Requirements for transparency in model architecture;
- Policies on copyright and the use of training data;
- Procedures for assessing and mitigating systemic risks.
The final version is expected to be approved by August 2025. While the Code is formally voluntary, it is increasingly used as a benchmark for assessing the good faith of generative AI developers, particularly those building large language models. Non-compliance could become a risk factor during regulatory audits and even a competitive disadvantage for companies working with public or corporate clients.
Cloud and AI Development Act: Regulating AI Infrastructure
In 2025, the European Commission opened a public consultation on the Cloud and AI Development Act, running until June 4, 2025. This is the first regulatory act aimed at creating unified rules for the cloud infrastructure used to deploy and train AI. The focus is on large data centers, cloud providers, platform certification, and cybersecurity resilience. The Act also aims to reduce dependence on non-European suppliers and to strengthen digital sovereignty within the EU. The first draft of the document is expected by the end of 2025. The stated goal is to at least triple the capacity of EU data centers within the next 5–7 years.
Recommendations for Businesses
The key provisions of the AI Act will come into force in 2026, but it is essential to start preparing now. The European Commission has made clear that compliance with AI product requirements is not a formality but a prerequisite for trust and market access in Europe. Below are some practical steps to get started:
- Conduct a basic self-assessment. Determine whether your AI system falls into the high-risk category: what functions it performs, which people or processes it affects, and whether mechanisms for monitoring and oversight are in place (a simple starting point is sketched after this list).
- Document how your system works. Prepare a clear explanation of the system’s operation: what modules the AI includes, how it processes data, and how decisions are made. This will serve as the foundation for further risk assessments and AI Act compliance.
- Implement minimal policies. Even before any formal certification, mechanisms for transparency, risk assessment, complaint handling, and decision review can be written into internal documentation today.
- Act proactively. Implementing the AI Act requirements takes time – from audits to changes in products and processes. Delay could expose a company not only to fines (up to €35 million or 7% of global turnover) but also to losses in customer trust, partner relationships, and market share.
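As a first pass at the self-assessment and documentation steps above, here is a minimal Python sketch. The area tags, field names, and flagging rules are simplified assumptions for illustration only – a convenient way to record what a system does, whom it affects, and whether oversight exists – and are not legal criteria.

```python
# Illustrative self-assessment sketch for the steps above.
# The area tags loosely mirror the high-risk areas listed in Annex III
# of the AI Act, but are simplified assumptions, not legal criteria.
from dataclasses import dataclass, field

# Simplified tags for the Annex III high-risk areas (illustrative).
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "access_to_essential_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "administration_of_justice",
}

@dataclass
class AISystemRecord:
    """Minimal documentation record: what the system does and whom it affects."""
    name: str
    purpose: str                 # plain-language description of the function
    affected_groups: list[str]   # people or processes the output affects
    use_case_areas: set[str]     # tags drawn from ANNEX_III_AREAS, if any
    human_oversight: bool        # is a monitoring/override mechanism in place?
    flags: list[str] = field(default_factory=list)

def self_assess(record: AISystemRecord) -> AISystemRecord:
    """Flag likely high-risk exposure and missing oversight for legal follow-up."""
    hits = record.use_case_areas & ANNEX_III_AREAS
    if hits:
        record.flags.append(f"possible Annex III high-risk areas: {sorted(hits)}")
        if not record.human_oversight:
            record.flags.append("high-risk indicators without human oversight")
    return record

if __name__ == "__main__":
    system = self_assess(AISystemRecord(
        name="cv-screening-assistant",
        purpose="Ranks incoming CVs against a job description",
        affected_groups=["job applicants", "HR staff"],
        use_case_areas={"employment_and_worker_management"},
        human_oversight=False,
    ))
    for flag in system.flags:
        print(flag)
```

Even a lightweight record like this makes the later, formal risk assessment easier: it forces a plain-language statement of purpose, affected groups, and oversight before any auditor asks for one.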
Conclusion
Companies should not delay adaptation: compliance with EU requirements is not just a regulatory hurdle but an opportunity to build a competitive strategy based on trust, transparency, and product quality. If you are developing or using AI systems, operating in the European market, or planning to expand there, now is the best time to assess your risks, prepare your documentation, and implement the necessary policies.
Legal professional in AI, Barbashyn Law Firm
