The EU AI Act: How to prepare for what enters into force in August 2025?

In my previous two blog posts, I described the AI Act in general: Part I covered the justifications for it and how it is said to support innovation, and Part II covered the sections that have been in force since the beginning of February. The regulation enters into force gradually, and the next “milestone” on this journey comes in August 2025. How should we in the field of artificial intelligence prepare?
Chapters of the act that will enter into force in August – an exec summary
Below you will find a list of the chapters of the AI Act that will enter into force in August, with summaries of the main things I believe you should be aware of. There are also dedicated paragraphs in this blog post on the chapters I feel are most important for those of us working in the field as providers. A word of warning though: this is not the lightest blog post, but I will try to keep to the essentials. I have compiled the definitions of the key terms at the end of the blog post; links to the definitions section of the act are also provided.
- Chapter III, Section 4, Notifying authorities and notified bodies
- In my view this chapter is more about the bodies assessing compliance with the regulation and is not directly relevant to providers of AI systems or models.
- Chapter V, General Purpose AI Models
- Familiarise yourself with the obligations for providers of general-purpose AI models if you identify yourself as such, and check the definition of the term
- Keep your eyes open: a set of codes of practice will be produced by May 2025 to help demonstrate compliance with the obligations
- Familiarise yourself with the term “systemic risk” before August: a risk specific to the high-impact capabilities of general-purpose AI models that has a significant impact on the Union market and can spread widely throughout the value chain (read the full definition in the definitions section of the regulation)
- Note that if a general-purpose AI model is placed on the market before 2 August 2025, the provider has until 2 August 2027 to demonstrate compliance with the obligations. So if your general-purpose model is launched, for example, in September 2025, the requirements must be met immediately at launch.
- Chapter VII, Governance of the AI Act
- By the beginning of August, the Member States should publish information on the authorities supervising the regulation and how to contact them
- Chapter XII, Penalties (with the exception of Article 101 on fines imposed on providers of general-purpose AI models)
- This was a difficult Chapter to understand… fortunately, I got help from Sitra’s legal expert Meeri Toivanen!
- The sanctions will be applied from 2 August 2025 and legislative work on the rules and provisions on sanctions is currently ongoing in Finland (legislative proposal, in Finnish)
In addition, by the beginning of August, the Commission is expected to provide guidance on how providers of high-risk AI systems must report serious incidents in the future. “Serious incident” is also a term defined in the regulation. However, as far as I understand, the obligation to report incidents will not yet enter into force; only the instructions on how to comply with this obligation will be published at the beginning of August.
A deeper dive into Chapter V: Obligations and codes of practice for providers of general-purpose AI models
As noted, the chapter on general-purpose AI models enters into force at the beginning of August. You may recall that the regulation of high-risk AI systems will enter into force in August 2026, but due to the rapid development of the technology, general-purpose AI models are regulated earlier. Let’s first remind ourselves that a general-purpose AI model is an AI model that is very general in nature, capable of competently performing a wide range of distinct tasks, and can be integrated into downstream systems. For example, large language models such as GPT-4 and Llama belong to this category.
To prepare, I recommend the following actions:
- Read more about the obligations for a provider of a general-purpose AI model (brief summary below):
- Up-to-date and accessible technical documentation
- A policy to comply with Union law on copyright and related rights
- A publicly available summary of the content used to train the general-purpose AI model
- Understand what a general-purpose AI model with systemic risk is (such models must be brought to the attention of the Commission, which will maintain a list of them)
- Familiarise yourself with the obligations for providers of such models
- Please note that these obligations include a mention of incident reporting, so it is possible that for such models this obligation will take effect earlier than is otherwise described in the regulation
- If at all possible, read the entire Chapter V carefully, because I will not be able to highlight all the exceptions and special cases in this blog post
A few more words about the codes of practice, which are indeed mentioned several times in the regulation. They are a “tool” intended to make it easier for providers of general-purpose AI models to demonstrate that they comply with the obligations set out in the AI Act. The idea is that providers will be able to demonstrate compliance with the obligations through the codes of practice before a harmonised standard is published. According to the regulation, the codes of practice should be ready at the beginning of May 2025 so that we, as providers and developers, have enough time to prepare to demonstrate that we operate in accordance with them. If the codes of practice are not ready on time, the Commission can provide common rules for the implementation of the relevant obligations.
Chapter VII and the administrative aspects rolling out in August
The regulation brings a certain administrative machinery to the Member States. One part of this machinery is the so-called “notified body”, i.e. a conformity assessment body notified in accordance with the regulation (see the exact definition here). The regulation stipulates that such bodies must be designated and notified, and the sections of the regulation related to them will enter into force on 2 August 2025. The legislative work defining how exactly this will be organised in practice is still underway in Finland. If I have understood correctly, the proposed legislation will be discussed in the Finnish Parliament in mid-April (here is a summary of the preparation, in Finnish), and according to the proposal, the supervisory tasks of the AI Act in Finland would be divided among several authorities. In summary, by the beginning of August, we should have clarity on who will monitor compliance with the AI Act and how to reach them.
When and what are the consequences of non-compliance?
The text of the regulation on sanctions and the provisions concerning them nearly twisted my brain. Fortunately, legal expert Meeri Toivanen from Sitra helped with the interpretation of the text – a warm thank you to Meeri!
The regulation states that the provisions on sanctions must be applied as of 2 August 2025. Since Member States are unlikely to have such provisions readily available, the Act requires them to lay down rules on penalties and notify them to the Commission by 2 August 2025. In Finland, that regulatory work is already well underway, and this perspective is included in the aforementioned proposed legislation. In other words, violating the parts of the AI Act currently in force will have consequences from 2 August 2025 onwards. It should be noted, however, that some of the fines and other measures imposed by the Commission on providers of general-purpose AI models will not apply from the beginning of August 2025, but only from 2 August 2026.
According to the regulation, sanctions for breaching the obligations of the AI Act can take the form of monetary fines, warnings or other measures, as long as they are effective, proportionate and dissuasive as a whole. According to the Finnish proposed legislation, the administrative fine is at least EUR 1,000 if the penalty is imposed on a natural person, and in other cases (typically companies) at least EUR 10,000. For example, in the case of prohibited AI practices, the administrative fine is a maximum of EUR 35 million or, if the offender is an undertaking, a maximum of 7% of its total annual turnover in the preceding financial year, whichever is greater. The maximum amounts of the fines vary depending on which part of the act the breach relates to and whether the offender is an SME or a large company (see Article 99 for a more detailed breakdown of the fines).
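To make the “whichever is greater” logic concrete, here is a minimal sketch in Python. The function name and the turnover figure are purely illustrative assumptions on my part, not something defined in the regulation or the Finnish proposal.

```python
# Illustrative sketch only: the upper bound of the administrative fine for
# prohibited AI practices (Article 99), i.e. EUR 35 million or 7 % of the
# undertaking's total annual turnover, whichever is greater.
def max_fine_prohibited_practices(annual_turnover_eur: float) -> float:
    """Return the higher of EUR 35 million and 7 % of annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical undertaking with EUR 1 billion turnover in the preceding year:
print(max_fine_prohibited_practices(1_000_000_000))  # 70000000.0 -> the 7 % cap applies
```

For a smaller undertaking, say with EUR 100 million turnover, 7% would be EUR 7 million, so the EUR 35 million ceiling would be the relevant maximum instead.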
Can’t take in all the details? Here in brief
Phew, that was quite a heavy read, but hopefully it helped you understand what to expect in August so you can better prepare for the parts of the regulation that will come into force. Here is a summary of the most important points:
- It is important for providers of general-purpose AI models to note that the part of the regulation relevant to them will enter into force on 2 August 2025
- Guidance on demonstrating compliance with the obligations is expected in May 2025 (the codes of practice)
- A general-purpose AI model with a systemic risk must be notified to the Commission
- Providers of general-purpose AI models placed on the market before 2 August 2025 will have until 2 August 2027 to take measures to comply with the obligations of the Act
- The administrative machinery of the regulation in Finland will be specified by 2 August 2025
- Violation of the parts of the Act in force will have consequences from 2 August 2025
- In August, there will also be instructions on how providers of high-risk AI systems will report serious incidents
Kaisa Belloni, Senior Project Manager
“Working with AI fits me like a glove. I enjoy solving problems, bringing clarity to complex situations, and helping people. At Ai4Value, I primarily work as a project manager and scrum master, but my experience with strategic projects in global corporations, as well as my strong mathematical background, have given me the skills to consult, for example, during the preparation phase of AI projects. Don’t hesitate to reach out for collaboration!”
Definitions of terms as described in the act:
All definitions are available in the EU AI Act; quoted below are the terms and definitions relevant to this blog text.
‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation
‘serious incident’ means an incident or malfunction of an AI system that directly or indirectly leads to any of the following: a) death of a person or serious damage to a person’s health; b) severe or irreversible disruption to the management and operation of critical infrastructure; c) failure to fulfil obligations under Union law to protect fundamental rights; d) serious damage to property or the environment
‘general-purpose AI model’ means an AI model, including when such an AI model is trained on a large amount of data using self-supervision at scale, which is very general in nature and capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market, and which can be integrated into various downstream systems or applications, with the exception of AI models used for research, development or prototyping activities prior to placing them on the market
‘general-purpose AI system’ means an AI system that is based on a general-purpose AI model and is capable of serving a variety of purposes, both in direct use and when integrated into other AI systems
‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or society as a whole, and that can be propagated at scale across the value chain