Prohibited artificial intelligence (AI) practices
Systems using artificial intelligence are playing an increasingly important role in our daily lives. However, they also pose a number of risks that require the establishment of specific rules.
Since 2 February 2025, the provisions on prohibited practices involving artificial intelligence (AI) technology have been in force, introduced by Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (the AI Act).
The purpose of these prohibitions is to protect fundamental human rights and to ensure the safe and ethical use of artificial intelligence.
What practices are prohibited when using AI systems?
The regulations on prohibited practices cover eight main categories of practices considered particularly unethical or dangerous.
- Use of subliminal, manipulative and deceptive techniques
First, it is prohibited to use AI systems that deploy subliminal, purposefully manipulative or deceptive techniques which may cause significant harm and distort the informed decisions of natural persons. An example would be hidden messages in video or audio content that influence consumers’ purchasing decisions without their informed consent.
- Exploiting the weaknesses of individuals or groups of people
Another prohibited practice is the unethical exploitation of a person’s “weaknesses”. AI systems may not exploit a person’s psychological or physical characteristics (e.g. age, disability or a specific social or economic situation) in a manner that leads to their manipulation or exploitation.
- Social scoring
It is prohibited to use AI systems for the evaluation or classification of natural persons or groups of persons based on their social behaviour or personal characteristics or personality traits, where the resulting treatment of those persons occurs in contexts unrelated to those in which the data were originally collected, or is unjustified or disproportionate to the social behaviour being assessed or its seriousness.
- Profiling
Artificial intelligence may not be used to carry out risk assessments of natural persons in order to assess or predict the risk of a person committing a criminal offence, based solely on profiling or on an assessment of their personality traits. However, this prohibition does not apply to AI systems used to support the human assessment of a person’s involvement in criminal activity, where that assessment is based on objective and verifiable facts.
- Scraping
Another prohibited practice is scraping, i.e. the untargeted collection of facial images from the internet or CCTV footage in order to create or expand facial recognition databases.
- Emotion recognition
AI systems also may not be used to infer the emotions of a natural person in the workplace or in educational institutions, except where the AI system is put in place for medical or safety reasons.
- Biometric categorisation
The EU regulation prohibits the use of biometric categorisation systems that individually categorise natural persons based on their biometric data in order to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. An exception applies to the labelling or filtering of lawfully acquired biometric datasets and to the categorisation of biometric data in the area of law enforcement.
- Remote biometric identification
It is not permissible to use ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement. However, the AI Act provides for specific situations in which such activities are allowed. These include targeted searches for victims of crime or missing persons, the prevention of a specific threat or of a terrorist attack, and the localisation or identification of a person suspected of having committed a criminal offence.
Who do the established prohibitions apply to?
The rules contained in the AI Act apply to providers placing AI systems on the market or putting AI systems or general-purpose AI models into service (regardless of whether they are established in the EU or outside it), deployers of AI systems established in the EU, importers and distributors of AI systems, and providers or manufacturers where the output generated by the AI system is used in the European Union.
What are the consequences of using prohibited AI practices?
The AI Act provides for administrative fines for non-compliance with the prohibition of these AI practices. The fine can be up to EUR 35 million or, in the case of an enterprise, up to 7% of its total annual worldwide turnover for the preceding financial year, whichever is higher.
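For illustration, for an enterprise with an annual worldwide turnover of EUR 1 billion, 7% amounts to EUR 70 million; as this exceeds EUR 35 million, the higher figure would set the maximum fine.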
Member States are required to introduce provisions on administrative fines and take all necessary measures to ensure their proper and effective implementation.
In Poland, the draft act on artificial intelligence systems prepared by the Ministry of Digital Affairs is currently (as of March 2025) at the legislative stage (under review).