Unacceptable Risk AI

What is a Prohibited AI system?

According to the legal definition provided (Art. 5, section XIV), Prohibited Artificial Intelligence (AI) covers any system that is banned throughout its entire life cycle: development, training, use, commercialization, import, and export.

The reason for this prohibition is that such systems represent an unacceptable risk to the fundamental pillars of our society.

Unacceptable Risks Identified

An AI system is prohibited if it threatens:

  • Human dignity

  • Fundamental rights

  • Life

  • Public safety

  • Social peace

  • Democratic stability

Specific Examples of Prohibited AI (Art. 5)

The article details a (non-exhaustive) list of systems that are considered prohibited:

1. Behavioral Manipulation

Systems that use subliminal manipulation techniques or extreme psychological persuasion to alter human behavior without informed consent.

2. Social Scoring

Mechanisms that establish coercive or discriminatory social scoring systems. This applies whether they are implemented by public or private entities.

3. Mass Biometric Surveillance

Systems intended for mass biometric surveillance (such as real-time facial recognition) in public spaces, especially if carried out without express judicial authorization and strict institutional control.

4. Exploitation of Vulnerabilities

AI designed to exploit the cognitive, emotional, or autonomy-related vulnerabilities of specific groups, such as:

  • Children and adolescents

  • The elderly

  • Persons with disabilities

  • Persons in situations of psychosocial vulnerability

5. Weaponry Development

Systems developed for the creation, enhancement, or use of weaponry reserved for the exclusive use of the armed forces. This also includes AI that facilitates the development of chemical, biological, radiological, or nuclear weapons prohibited by international treaties.

6. Materials for Mass Harm

AI intended for the creation or dissemination of explosives, toxic agents, or any substance designed to cause mass harm to people or property.

7. Facilitation of Illicit Activities

Systems that facilitate or instruct on:

  • Terrorism

  • Commission of cybercrimes

  • Unauthorized hacking of personal data

  • Identity theft or bank account theft

  • Any form of attack against critical infrastructure systems

Consequences and Application of the Prohibition

The law is strict about both the consequences of the prohibition and the manner in which it is applied.

Clear Sanctions (Art. 33)

Article 33, section IV, reinforces this stance by defining Prohibited AI as "Systems whose operation is expressly banned by this Law due to their unacceptable risk."

The same article warns that the following activities involving such systems are subject to administrative, civil, and criminal sanctions:

  • Development

  • Commercialization

  • Use

  • Implementation

Preventive Application (Art. 5)

Article 5 states that the prohibition must be applied in a broad and preventive manner: rather than waiting for harm to occur, harm must be actively prevented, based on the following principles:

  1. Precautionary Principle: Acting with caution in the face of the possibility of serious harm, even if absolute scientific certainty does not exist.

  2. Reinforced Protection of Human Rights: Giving the highest priority to the defense of fundamental rights.

  3. Safeguarding Public Safety: Ensuring the well-being and protection of society.


Source: Information extracted from Art. 5, section XIV, and Art. 33, section IV of the INITIATIVE FOR THE FEDERAL LAW FOR THE ETHICAL, SOVEREIGN, AND INCLUSIVE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE.