Mexico AI Regulation

Mexico is moving toward a comprehensive artificial intelligence governance framework through the Ley Federal para el Desarrollo Ético, Soberano e Inclusivo de la Inteligencia Artificial (Federal Law for the Ethical, Sovereign, and Inclusive Development of Artificial Intelligence). The initiative seeks to establish a legal basis for the development, deployment, oversight, and promotion of AI systems, with an emphasis on ethics, human rights, and technological sovereignty.

Status: Bill under review by the Mexican Congress.


See the Ley Federal para el Desarrollo Ético, Soberano e Inclusivo de la Inteligencia Artificial (Federal Law for the Ethical, Sovereign, and Inclusive Development of Artificial Intelligence).

Main Objectives

  • Establish a general legal framework for the development, deployment, oversight, and promotion of AI throughout Mexico.

  • Ensure that AI innovation respects and promotes human rights.

  • Define technical concepts to improve regulatory clarity.

  • Establish the Plataforma Nacional de Auditoría Algorítmica (PNAA) as an autonomous body responsible for auditing datasets, algorithms, and AI systems.

  • Create the Consejo Nacional de Inteligencia Artificial (CNIA) as a decentralized public body with technical autonomy and its own legal personality.

  • Implement a risk-based traffic light system to classify AI systems according to their potential impact and the level of oversight required.


The initiative seeks to regulate artificial intelligence by establishing a comprehensive legal framework for its development, deployment, and oversight. It focuses on ethical AI, the protection of human rights, and the promotion of Mexico's technological sovereignty.


Legislative Summary



The initiative contemplates a risk-based classification framework for AI systems, very similar to the model established by the European Union’s AI Act.

This structure enables proportional regulation: the higher the potential impact on people's rights, the greater the level of oversight and accountability required.

The law's proposed "Risk Traffic Light System" classifies systems as prohibited, high risk, limited risk, or minimal risk:

(See: Federal Law for the Ethical, Sovereign, and Inclusive Development of Artificial Intelligence – Draft Text, Cámara de Diputados, 2025.)
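
As a rough sketch of the proportional-oversight idea, the snippet below encodes the four tiers and pairs each with an illustrative oversight level. The tier names come from the draft described above; the numeric scale and the Python names are assumptions for illustration, not part of the law.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers of the draft law's risk traffic light system."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative only: higher potential impact on rights -> stricter oversight.
OVERSIGHT_LEVEL = {
    RiskTier.MINIMAL: 1,     # baseline ethical and transparency duties
    RiskTier.LIMITED: 2,     # adds transparency reports and user consent
    RiskTier.HIGH: 3,        # adds audits, registration, human oversight
    RiskTier.PROHIBITED: 4,  # deployment not permitted
}

def required_oversight(tier: RiskTier) -> int:
    """Return the illustrative oversight level for a given risk tier."""
    return OVERSIGHT_LEVEL[tier]
```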


Compliance Roadmap

Every AI system (regardless of its type) carries specific obligations depending on its assigned risk category.

These obligations range from basic transparency requirements to complex algorithmic audits and human oversight protocols.

To explore the specific compliance requirements for each AI category, visit:

Minimal-Risk Systems Obligations:

Regardless of their risk level, every AI system must:

  • Respect ethical principles: Transparency, fairness, and accountability.

  • Ensure data privacy and security: Comply with ARCO rights and implement robust cybersecurity measures.

  • Prevent bias and discrimination: Continuously monitor outputs and correct unfair or discriminatory results.

  • Disclose automated interaction: Clearly label when users are engaging with an AI system.

  • Avoid harmful behaviors: Prohibit outputs that promote violence, manipulation, or self-destructive actions.

  • Guarantee verifiability and methodological transparency: Document algorithms, data sources, and decision-making processes.

  • Provide training and resources: Offer guides, tutorials, or user support to promote responsible and informed use.
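
A minimal sketch of how these baseline duties could be tracked as a machine-readable self-assessment checklist. The obligation identifiers mirror the list above; the data structure and helper methods are assumptions, not part of the draft law or of any official checklist.

```python
from dataclasses import dataclass, field

# Baseline duties that apply to every AI system, mirroring the list above.
BASELINE_OBLIGATIONS = [
    "respect_ethical_principles",        # transparency, fairness, accountability
    "ensure_data_privacy_and_security",  # ARCO rights, cybersecurity measures
    "prevent_bias_and_discrimination",   # monitor and correct unfair outputs
    "disclose_automated_interaction",    # label AI interactions for users
    "avoid_harmful_behaviors",           # no violence, manipulation, self-harm
    "guarantee_verifiability",           # document algorithms, data, decisions
    "provide_training_and_resources",    # guides, tutorials, user support
]

@dataclass
class BaselineSelfAssessment:
    """Illustrative checklist: which baseline duties a system already meets."""
    system_name: str
    met: set[str] = field(default_factory=set)

    def mark_met(self, obligation: str) -> None:
        if obligation not in BASELINE_OBLIGATIONS:
            raise ValueError(f"Unknown obligation: {obligation}")
        self.met.add(obligation)

    def gaps(self) -> list[str]:
        """Obligations still unmet."""
        return [o for o in BASELINE_OBLIGATIONS if o not in self.met]

assessment = BaselineSelfAssessment("chatbot-v1")
assessment.mark_met("disclose_automated_interaction")
print(assessment.gaps())  # remaining baseline duties to address
```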

Limited-Risk Systems Obligations
High-Risk Systems Obligations
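
To make the tiering concrete, here is one hedged sketch of how the obligations named in this roadmap could be grouped by tier. The groupings follow the descriptions in this document; the exact legal requirements are those of the draft law itself.

```python
# Illustrative mapping from risk tier to the kinds of obligations named in
# this roadmap; the precise requirements come from the draft law.
OBLIGATIONS_BY_TIER = {
    "minimal": ["baseline ethical and transparency duties"],
    "limited": ["baseline ethical and transparency duties",
                "transparency reports",
                "user consent mechanisms"],
    "high":    ["baseline ethical and transparency duties",
                "transparency reports",
                "user consent mechanisms",
                "algorithmic audits",
                "human oversight protocols",
                "registration with CNIA or PNAA"],
    "prohibited": ["deployment not permitted"],
}

def obligations_for(tier: str) -> list[str]:
    """Return the illustrative obligation set for a tier name."""
    return OBLIGATIONS_BY_TIER.get(tier, [])
```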

Compliance Roadmap:

Before you begin, it’s essential to determine your system’s risk level.

If your AI is classified as Limited Risk or High Risk, you will need to comply with additional obligations such as transparency reports, user consent mechanisms, algorithmic audits, and registration with CNIA or PNAA.

Note: Every AI system has legal and technical obligations depending on its category.

To evaluate how compliant your project already is, access the official AI Compliance Checklist – BlackboxMX and verify your level of readiness for regulatory approval.

Step-by-Step Compliance Roadmap


A. Risk Assessment

Classify your AI systems according to the law: prohibited, high, limited, or minimal risk.

This first step determines which obligations apply and whether your system requires registration or audits.

PNAA Risk Classification Guide
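
As a purely illustrative screening aid (not the PNAA's actual criteria), a hypothetical helper could reduce this first classification step to a few questions:

```python
def classify_risk(prohibited_use: bool,
                  affects_rights_or_safety: bool,
                  interacts_with_users: bool) -> str:
    """Hypothetical screening logic; real criteria come from the law and PNAA guidance."""
    if prohibited_use:
        return "prohibited"
    if affects_rights_or_safety:   # e.g. health, credit, or employment decisions
        return "high"
    if interacts_with_users:       # e.g. chatbots, content generators
        return "limited"
    return "minimal"

print(classify_risk(False, False, True))  # -> "limited"
```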


B. Algorithmic Impact Assessment (AIA)

Conduct an Algorithmic Impact Assessment for high-risk systems or sensitive sectors, identifying risks and mitigation measures.

For limited-risk systems, preparation for an AIA is recommended but not mandatory.

See AIA Template and Requirements
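
A minimal sketch of what an AIA record could capture, based on the description above (identified risks plus mitigation measures). The field names and structure are assumptions; the linked AIA template governs the real content.

```python
from dataclasses import dataclass, field

@dataclass
class IdentifiedRisk:
    description: str      # e.g. "biased outcomes for a protected group"
    affected_groups: str  # who could be harmed
    mitigation: str       # planned control or safeguard

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative AIA record for a high-risk or sensitive-sector system."""
    system_name: str
    intended_purpose: str
    risk_tier: str
    risks: list[IdentifiedRisk] = field(default_factory=list)

aia = AlgorithmicImpactAssessment(
    system_name="loan-scoring-model",
    intended_purpose="credit eligibility pre-screening",
    risk_tier="high",
)
aia.risks.append(IdentifiedRisk(
    description="systematically lower scores for younger applicants",
    affected_groups="applicants under 25",
    mitigation="fairness testing before each release; human review of rejections",
))
```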


C. Mandatory Registration (RENSIA)

If applicable, obtain a risk determination and register your system in the National AI Registry.

Ensure full traceability and maintain auditable logs for verification.

National AI Registry (CNIA draft model)
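
One possible way to keep auditable, traceable logs is a hash-chained, append-only record, sketched below. This format is an implementation assumption, not something prescribed by the draft law or the registry.

```python
import hashlib, json, time

def append_audit_entry(log: list[dict], event: str, details: dict) -> dict:
    """Append a tamper-evident entry: each record hashes the previous one."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "event": event,       # e.g. "model_updated", "decision_issued"
        "details": details,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

audit_log: list[dict] = []
append_audit_entry(audit_log, "model_registered", {"registry": "RENSIA (draft)"})
append_audit_entry(audit_log, "model_updated", {"version": "1.1"})
```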


D. Internal Review

Align your contracts, privacy policies, protocols, and legal notices with the new regulatory framework.

This ensures that governance and accountability mechanisms are properly embedded.

Legal Alignment Resources


E. Training and Oversight

Build internal capacity by training your team, documenting methodologies, designating responsible officers, and implementing internal audit mechanisms.

Strong internal oversight is key to demonstrating proactive compliance.

Training and Oversight Guidelines


Other Resources:


AI CLASSIFICATION BOOKLET