EU AI Act: What is obligatory at the moment?

In last week’s blog post I listed my take on the key things to grasp from the EU AI Act. There I wrote about understanding the purpose of this regulation and dug a bit into how it claims to support innovation. Now let’s move on to the second of the three key aspects I believe are important to grasp:

  • 1) Why is the regulation in place?
  • 2) What is obligatory as of February 2?
  • 3) How to prepare for the requirements entering into force next (August 2025)?

Indeed, the first two parts of the regulation entered into force on February 2, 2025: the list of prohibited AI practices and the general provisions of the regulation, with AI literacy being the actionable item in the latter. Even though much of the regulation is risk-based, to my understanding these obligations apply to everyone in scope of the regulation.

Certain AI practices are prohibited

The European Union considers that certain types of AI systems pose an unacceptable risk of harm to the health, safety or fundamental rights of natural persons, and they are hence prohibited. Note that the harm can also come through the AI system’s influence on decision making. Here is my short summary of the list of prohibited practices:

  • Subliminal, purposefully manipulative or deceptive techniques
  • Exploiting natural persons’ vulnerabilities
  • Classification of natural persons based on their social behaviour or personality characteristics
  • Predicting the probability of a natural person committing a crime
  • Creation of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
  • Inferring emotions of a natural person in the areas of workplace and education institutions
  • Biometric categorisation systems based on biometric data to deduce race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation
  • ‘Real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement (exceptions apply)

If any of these rings a bell, I recommend you comb through the details of the regulation text, even though the descriptions of the prohibited practices are lengthy. Note that there are also some exceptions to these prohibitions, which are detailed in the regulation.

What are the general provisions?

This is Chapter 1 of the regulation and it consists of four articles. These describe the overall regulation and its scope, list key terminology and definitions, and set a rule regarding AI literacy. To my understanding the only truly regulatory aspect here is the one related to AI literacy. However, this chapter is a good read overall, especially the glossary provided in Article 3, which gives good insight into the terms used in the context of the act.

Providers and deployers at the forefront of AI literacy

The regulation states that providers and deployers of AI systems shall take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. To be precise, it also states that providers and deployers are to do this “to their best extent”. Sounds to me like there might be quite some room for interpretation here. Also, before we can move on to planning concrete actions based on this, we need to understand what AI literacy means.

In the glossary I mentioned earlier it is defined as follows: “‘AI literacy’ means skills and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.” I believe the gist of this lengthy definition is that anyone who works with AI should have the skills and understanding needed to a) make informed AI deployments and b) gain awareness of the possible risks and harm an AI system can cause.

AI literacy sounds simple, but how to make it concrete?

My understanding is that the regulation does not explicitly say how, for example, a provider will demonstrate that the necessary measures have been taken and that a sufficient level of AI literacy has been reached. Also, the term “affected person” deserves some thought – does this group extend beyond what is normally considered the users of an IT system? I believe so.

For a company like ours it is relatively straightforward to demonstrate that our people have the necessary skills and understanding, but when it comes to the affected persons and users of our solutions, this becomes trickier. I believe every responsible software provider has, even before the regulation, addressed the question of how users are trained, but is risk awareness a constant in end user trainings? And to what extent are providers of systems expected to promote AI literacy among affected persons? Affected persons are mentioned in the definition of AI literacy, but not explicitly in the regulation article. It will be interesting to see how all of this will evolve into concrete actions and daily practices for those of us working with AI. I believe it is a safe bet to start by checking your internal training materials as well as your user training materials, and seeing how those are used to promote AI literacy.

In summary, no drama here

So, in summary, what is obligatory at the moment? Two things: 1) don’t engage in the prohibited practices and 2) take measures to weave AI literacy into the practices of AI providers and deployers. Despite the questions that still feel open and unclear, I don’t think the regulation is asking too much, and I don’t see anything dramatically limiting in these obligations. The prohibitions do not concern any mainstream AI activity, and promoting AI literacy should be on the agenda anyway!

Tune in next week, when I will go through my thoughts on how to prepare for the parts of the regulation coming into force in August. Have a lovely week!

Kaisa Belloni, Senior Project Manager

“Working with AI fits me like a glove. I enjoy solving problems, bringing clarity to complex situations, and helping people. At Ai4Value, I primarily work as a project manager, but my experience with strategic projects in global corporations, as well as my strong mathematical background, have given me the skills to consult, for example, during the preparation phase of AI projects. I hope we get the chance to collaborate!”