What is Limited Risk AI?

This category covers AI systems where the main risk isn't physical harm, but the potential to generate "moderate effects" (Art. 33, sec. II), such as user deception, manipulation, or confusion.

Art. 5 (sec. XII) defines them as systems posing a "moderate risk," one that the law manages with a single primary tool: transparency.

In simple terms, the law requires these systems to be honest with you. They cannot pretend to be human or hide what they are doing.

Here are real-world examples for each category mentioned in the law:

1. Informational Virtual Assistants

What they are: Chatbots designed to answer questions or guide users.

Real-world examples:

  • The customer-service chat widget on a bank's or airline's website.

  • A virtual assistant on a government portal that walks you through a procedure.

The Risk (and the Rule): The risk is that users believe they are talking to a human. The rule is simple disclosure: the system must identify itself as an AI, not a person.
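As a concrete illustration of what that disclosure can look like in practice, here is a minimal Python sketch of a chat session that opens with an explicit AI notice. The function name, message text, and transcript format are assumptions for illustration, not wording taken from the law.

```python
# A minimal sketch of chatbot self-disclosure, assuming a transcript-based
# chat loop. Names and message text are illustrative assumptions.

AI_DISCLOSURE = (
    "Hi! I'm a virtual assistant (an automated system, not a human). "
    "How can I help you?"
)

def start_chat_session(user_id: str) -> list[dict]:
    """Open a conversation that leads with an explicit AI disclosure."""
    # The disclosure is the first message in every transcript, so the user
    # never has to guess whether they are talking to a person.
    return [{"role": "assistant", "content": AI_DISCLOSURE, "user": user_id}]

# Example: start_chat_session("user-42")[0]["content"] -> the disclosure text
```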

2. Advertising Personalization Systems

What they are: The algorithms that track your behavior (what you watch, like, search for) to show you specific ads or content.

  • The algorithm that decides which ads you see in your Instagram or Facebook feed.

  • The "Recommended for You" system on Netflix, Spotify, or YouTube.

  • The "Customers who bought this also bought..." section on Amazon.

The Risk (and the Rule): The risk is a lack of transparency about why you are seeing certain content.

The law requires "minimum explanations" (Art. 33, sec. II), like the common "Why am I seeing this ad?" button.
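As a rough illustration of how such a "minimum explanation" could be produced, the Python sketch below maps internal targeting signals to user-readable reasons. The signal names, templates, and data structures are hypothetical; the law specifies the obligation, not the implementation.

```python
# A sketch of a "Why am I seeing this ad?" explanation, assuming the platform
# keeps a record of which targeting signals matched. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class AdExplanation:
    ad_id: str
    reasons: list[str] = field(default_factory=list)

def explain_ad(ad_id: str, targeting_signals: dict[str, str]) -> AdExplanation:
    """Translate internal targeting signals into user-readable reasons."""
    templates = {
        "interest": "You showed interest in {value}.",
        "location": "You are browsing from {value}.",
        "age_range": "The advertiser targeted the {value} age range.",
    }
    reasons = [
        templates[signal].format(value=value)
        for signal, value in targeting_signals.items()
        if signal in templates  # skip signals we cannot explain in plain language
    ]
    return AdExplanation(ad_id=ad_id, reasons=reasons)

# Example:
# explain_ad("ad-123", {"interest": "running shoes", "location": "Mexico City"})
```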

3. Labeled Deepfakes ("Ultrafalsos")

What they are: AI-generated content (video, audio, images) that looks real.

Real-world examples:

A photorealistic image of a person who doesn't exist, created with tools like Midjourney or DALL-E.

The Risk (and the Rule): The risk is that viewers mistake synthetic content for the real thing. The rule is in the category's name: the content must carry a clear label identifying it as AI-generated.
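To make the labeling obligation concrete, here is a minimal Python sketch that attaches a machine-readable "AI-generated" label to a media file as a sidecar metadata file. The record format and label text are assumptions; real deployments might instead use visible watermarks or embedded provenance metadata.

```python
# A minimal sketch of labeling AI-generated media before publication by
# writing a machine-readable sidecar file next to the asset. The record
# format and label text are illustrative assumptions, not legal wording.

import json
from datetime import datetime, timezone

def label_generated_media(path: str, generator: str) -> str:
    """Write an 'AI-generated' label as a JSON sidecar and return its path."""
    record = {
        "file": path,
        "label": "AI-generated content",
        "generator": generator,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = path + ".ai-label.json"
    with open(sidecar, "w", encoding="utf-8") as f:
        json.dump(record, f, ensure_ascii=False, indent=2)
    return sidecar

# Example: label_generated_media("portrait.png", generator="Midjourney")
```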

Click to see obligations (www.blackboxmx.com/AI/Mexico/LimitedRiskObligations)