Legal news

The EU Regulates Artificial Intelligence: New European Commission Guidelines on Prohibited Practices

Artificial intelligence ("AI") is at the heart of medical and technological innovation, but its rapid development raises significant ethical and legal questions. The European Union aims to address these concerns through the publication of new guidelines on prohibited AI practices, issued as part of the implementation of Regulation (EU) 2024/1689 (the "AI Act").

Published on February 4, 2025, these non-binding guidelines provide clarifications on the application of the prohibitions under Article 5 of the AI Act, which classifies certain AI uses as posing an "unacceptable risk" and, therefore, strictly prohibits them.

Among the most affected sectors is healthcare, where both the use of AI and the data collected in this context are considered particularly sensitive.
The guidelines detail the various types of prohibited practices, offering concrete examples of each prohibited practice while also distinguishing permissible practices, i.e., those considered "out of scope".

Reminder: Prohibited AI Practices Identified by the AI Act

The AI Act explicitly prohibits the following practices:

  • Manipulation and deception by AI, i.e., the use of deceptive techniques to significantly influence behavior and cause harm
  • Exploitation of vulnerabilities, based on age, disability, or socio-economic situation
  • Social scoring, or evaluating individuals based on their social behavior or personal characteristics, leading to disproportionate treatment
  • Criminal risk assessment, a Minority Report-style AI system that predicts the likelihood of someone committing a crime based solely on profiling or personality traits
  • Facial image scraping, where AI collects facial images from the internet or surveillance cameras to develop facial recognition databases
  • Emotion detection
  • Biometric categorization, used to infer political opinions, sexual orientation, etc.
  • Real-time remote biometric identification for law enforcement purposes.

Illustrations of Prohibited and Permitted Practices

For example, regarding the prohibited practice of manipulation and deception by AI, the guidelines provide two illustrations:

🔴 Example of a prohibited practice
An AI-powered medical assistant that encourages users to purchase products promising unrealistic benefits for their mental health, potentially worsening their condition and financially exploiting them by pushing them to buy ineffective and expensive products. This could lead to significant psychological and financial harm.

✅ Example of a permitted practice
A therapeutic chatbot that employs subliminal techniques to guide users toward a healthier lifestyle and help them quit bad habits, such as smoking. The Commission considers that, even if users experience some physical discomfort and psychological stress due to their efforts to quit smoking, this chatbot cannot be regarded as causing significant harm as long as there is no hidden attempt to influence decision-making beyond promoting healthy habits.

The guidelines also offer illustrations related to the prohibition on emotion detection:

🔴 Example of a prohibited practice
Using emotion recognition to assess employee well-being, motivation levels, or satisfaction at work or in learning environments for students does not constitute a "medical use" and would therefore be prohibited.

✅ Example of a permitted practice
Conversely, emotion recognition may be deployed to assist employees or students with autism and improve accessibility for blind or deaf individuals. Similarly, care-assistance robots using emotion recognition systems during medical examinations or voice monitors analysing emergency calls fall within the scope of AI’s medical applications.

Thus, these European Commission guidelines aim to ensure a uniform and consistent application of the AI Act and seek to define clear boundaries to prevent AI abuses in sensitive areas. Feedback from industry stakeholders will be particularly valuable in refining them.

Read more on the subject of AI: