1. Pre-Deployment Obligations (The Authorization Process)

A High Risk system cannot simply be placed on the market. It must first pass a rigorous, multi-stage evaluation and authorization process.

Step 1: Conduct the Algorithmic Impact Assessment (AIA) (Art. 49)

First, the developer must conduct an Algorithmic Impact Assessment (AIA). This is an in-depth risk analysis, prepared by a multidisciplinary team, that must cover at least the following (see the sketch after this list):

  • The purpose and context of use.

  • The types of data used, their sources, and processing.

  • Affected populations and differentiated risks (by gender, ethnicity, etc.).

  • Potential impacts on human rights, health, property, or privacy.

  • Mitigation measures and human oversight protocols.
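To make the required content concrete, the sketch below models an AIA as a Python data structure. It is only one possible reading of Art. 49: the initiative lists the content an AIA must include, and every class, field, and method name here is an assumption, not a term from the law.

```python
from dataclasses import dataclass

# Hypothetical data model for an Art. 49 AIA; the initiative lists the
# required content but does not prescribe any schema or field names.

@dataclass
class AffectedGroup:
    name: str                        # e.g. a gender or ethnic group
    differentiated_risks: list[str]  # risks specific to this group

@dataclass
class AlgorithmicImpactAssessment:
    purpose: str                          # purpose of the system
    context_of_use: str                   # where and how it will be deployed
    data_types: list[str]                 # types of data used
    data_sources: list[str]               # where the data comes from
    processing_description: str           # how the data is processed
    affected_groups: list[AffectedGroup]  # populations and differentiated risks
    rights_impacts: list[str]             # human rights, health, property, privacy
    mitigation_measures: list[str]        # planned mitigations
    human_oversight_protocols: list[str]  # oversight procedures

    def is_complete(self) -> bool:
        """Naive completeness check: every required section must be non-empty."""
        return all(bool(getattr(self, f)) for f in self.__dataclass_fields__)
```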

Step 2: Certify Data Quality (Art. 37.VII)

Simultaneously, the developer must certify that the data used to train the AI is representative, ethical, verifiable, and free of structural biases.
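The initiative does not say how representativeness should be verified. Purely as an illustration, a developer might compare group shares in the training data against reference population shares; the function name, threshold, and reference shares below are all assumptions, not requirements of Art. 37.VII.

```python
from collections import Counter

def representativeness_gap(samples: list[str],
                           reference_shares: dict[str, float]) -> dict[str, float]:
    """Difference between each group's share in the training data and an
    assumed reference population share. Hypothetical helper."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in reference_shares.items()}

# Example: flag groups under-represented by more than 5 percentage points.
gaps = representativeness_gap(
    samples=["A", "A", "A", "B"],           # group label per training record
    reference_shares={"A": 0.5, "B": 0.5},  # assumed population shares
)
flagged = {g: d for g, d in gaps.items() if d < -0.05}
print(flagged)  # {'B': -0.25}
```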

Step 3: Prepare Technical Documentation (Art. 86)

The developer must create, and keep updated, a detailed technical file containing (one possible structure is sketched after the list):

  • Purpose and algorithms (including versions).

  • Sources, nature, and quality of training data.

  • Assessments of performance, accuracy, bias, and errors.

  • Results of previous audits.

  • Measures taken to ensure explicability.

  • A log of relevant changes to the system architecture.
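One possible shape for that file, sketched as a versioned record. All names here are hypothetical; Art. 86 specifies the content, not a format.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical structure for the Art. 86 technical file.
@dataclass
class ChangeLogEntry:
    when: date
    description: str                     # relevant change to the system architecture

@dataclass
class TechnicalFile:
    purpose: str
    algorithms: dict[str, str]           # algorithm name -> version
    training_data_sources: list[str]     # sources and nature of training data
    data_quality_notes: str              # quality assessment of that data
    performance_assessments: list[str]   # accuracy, bias, and error analyses
    audit_results: list[str]             # results of previous audits
    explicability_measures: list[str]    # measures taken to ensure explicability
    change_log: list[ChangeLogEntry] = field(default_factory=list)

    def record_change(self, description: str) -> None:
        """Append to the change log so the file stays up to date."""
        self.change_log.append(ChangeLogEntry(date.today(), description))
```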

Step 4: Obtain PNAA Evaluation (Art. 25, 39)

Once the AIA and technical documentation are complete, the system must be evaluated by the National Algorithmic Audit Platform (PNAA). This entity will review the documentation, audit the system, and issue a risk opinion.

Step 5: Register with RENSIA (Art. 37.II, 39)

Only if the PNAA issues a favorable opinion may the system be entered into the National Registry of Artificial Intelligence Systems (RENSIA). Registration is an indispensable prerequisite for operation.
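Steps 4 and 5 act as a gate: without a favorable PNAA opinion there is no RENSIA registration, and without registration there is no lawful operation. A minimal sketch of that gate, with assumed names:

```python
from enum import Enum

class PnaaOpinion(Enum):
    FAVORABLE = "favorable"
    UNFAVORABLE = "unfavorable"
    PENDING = "pending"

def may_operate(opinion: PnaaOpinion, rensia_id: str | None) -> bool:
    """A system may operate only with a favorable PNAA opinion AND a RENSIA
    registration (Arts. 25, 37.II, 39). Hypothetical check, names assumed."""
    return opinion is PnaaOpinion.FAVORABLE and rensia_id is not None

assert not may_operate(PnaaOpinion.PENDING, None)
assert not may_operate(PnaaOpinion.FAVORABLE, None)  # opinion alone is not enough
assert may_operate(PnaaOpinion.FAVORABLE, "RENSIA-0001")
```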

2. Operational Obligations (Continuous Supervision)

Once authorized, the system is subject to permanent obligations.

A. Meaningful Human Oversight (Art. 37.IV)

This is a key obligation. The system must "guarantee meaningful human supervision in all automated decisions that may affect fundamental rights." The AI can assist, but a human must validate.
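A common way to implement such a requirement is a human-in-the-loop gate: the model recommends, a human confirms or overrides. The sketch below is an assumption about how a provider might comply; the initiative states the obligation, not the mechanism.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    ai_recommendation: str            # what the model proposed
    affects_fundamental_rights: bool  # triggers mandatory human review

def finalize(decision: Decision, human_review: Callable[[Decision], str]) -> str:
    """If the decision may affect fundamental rights, a human validator must
    confirm or override the AI recommendation (Art. 37.IV). Hypothetical API."""
    if decision.affects_fundamental_rights:
        return human_review(decision)   # the human has the last word
    return decision.ai_recommendation   # otherwise the AI output stands

# Usage: the reviewer callback could present the case in a review queue.
outcome = finalize(
    Decision("case-42", "deny", affects_fundamental_rights=True),
    human_review=lambda d: "approve",   # reviewer overrides the model
)
print(outcome)  # "approve"
```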

B. The AI "Black Box": Traceability and Logging (Art. 60 & 87)

The system must offer full traceability. Art. 60 requires maintaining an "algorithmic log" of operations for at least five years, and Art. 87 details what this log must contain so that decisions can be "reconstructed" with evidentiary value (a minimal log-entry sketch follows the list):

  • Relevant input data, parameters, and variables.

  • The generated output or result (the decision).

  • Human intervention, if any.

  • System identifier, version, date, time, and operator.
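Those fields map naturally onto an append-only log record. A minimal sketch, with assumed field names (Art. 87 lists the required content, not a wire format) and a retention constant reflecting the Art. 60 five-year minimum:

```python
import json
from datetime import datetime, timezone

RETENTION_YEARS = 5  # Art. 60: keep the algorithmic log at least five years

def log_decision(system_id: str, version: str, operator: str,
                 inputs: dict, output: str, human_intervention: str | None) -> str:
    """Serialize one Art. 87 log entry. Field names are assumptions."""
    entry = {
        "system_id": system_id,
        "version": version,
        "timestamp": datetime.now(timezone.utc).isoformat(),  # date and time
        "operator": operator,
        "inputs": inputs,                          # relevant data, parameters, variables
        "output": output,                          # the generated decision
        "human_intervention": human_intervention,  # None if fully automated
    }
    return json.dumps(entry, sort_keys=True)

print(log_decision("credit-scorer", "2.3.1", "op-17",
                   {"income": 42000, "threshold": 0.7}, "approved", None))
```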

C. Periodic and Multidimensional Audits (Art. 37.III & 130)

The system must undergo periodic audits. Art. 130 specifies that these audits must cover three dimensions (a small scheduling sketch follows the list):

  • Technical: On function, accuracy, failures, and efficiency.

  • Legal: On compatibility with fundamental rights and fairness.

  • Ethical: On biases, unintentional impacts, and social effects.
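As a small illustration, the three dimensions and a periodicity check could be tracked like this; the annual cadence is an assumption, since Art. 130 requires periodicity without fixing an interval:

```python
from enum import Enum
from datetime import date, timedelta

class AuditDimension(Enum):
    TECHNICAL = "technical"  # function, accuracy, failures, efficiency
    LEGAL = "legal"          # compatibility with fundamental rights, fairness
    ETHICAL = "ethical"      # biases, unintentional impacts, social effects

def audit_overdue(last_audit: date, today: date, period_days: int = 365) -> bool:
    """True if a new audit is due. The cadence here is an assumption."""
    return today - last_audit > timedelta(days=period_days)

print(audit_overdue(date(2024, 1, 1), date(2025, 6, 1)))  # True
```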

D. Security and Incident Protocols (Art. 37.VI)

The company must have clear protocols for system updates, security, incident reporting, and support for affected persons.
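Sketching one artifact such a protocol might produce, an incident report record; the fields and severity scale are assumptions, as Art. 37.VI does not prescribe a format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class IncidentReport:
    """Hypothetical incident record for the Art. 37.VI protocols."""
    system_id: str
    detected_at: datetime
    description: str
    severity: str                    # e.g. "low" / "high" / "critical" (assumed scale)
    affected_persons_notified: bool  # support for affected persons
    remediation: str                 # update or mitigation applied
```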


3. System Obligations to Guarantee User Rights

The law imposes direct obligations on systems and their operators to ensure that citizens can actively exercise their rights.

A. Obligation to Guarantee Explicability (Art. 37.V & 44.IV)

The High Risk system must be designed in such a way that it can explain its own decisions. It must guarantee that any person affected by an automated decision has the ability to know, in understandable language, the reasons or criteria behind that decision. This explicability must be adaptable to different profiles (ordinary citizens, judges, auditors, technicians).
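One way to meet that adaptability requirement is to keep a single structured explanation and render it differently per audience. The profiles and reason codes below are illustrative assumptions, not terms from the law.

```python
# Hypothetical rendering of one structured explanation for different audiences
# (Art. 37.V & 44.IV require adaptability, not any particular mechanism).
REASONS = [
    ("income_below_threshold", "your declared income is below the minimum required"),
    ("short_credit_history", "your credit history is shorter than 12 months"),
]

def explain(profile: str) -> str:
    if profile == "citizen":     # plain, understandable language
        return "The request was declined because " + \
               " and ".join(text for _, text in REASONS) + "."
    if profile == "auditor":     # machine-readable codes for audit trails
        return ";".join(code for code, _ in REASONS)
    if profile == "technician":  # codes plus human-readable text
        return "\n".join(f"{code}: {text}" for code, text in REASONS)
    raise ValueError(f"unknown profile: {profile}")

print(explain("citizen"))
```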

B. Obligation to Provide Appeal Mechanisms (Art. 37.VIII)

The system provider must implement and publicize accessible channels through which individuals can access, question, and appeal the decisions the AI makes about them. It is not enough for a person simply to receive the automated decision; there must be a clear process for requesting a review (which links back to the human oversight obligation).
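Illustratively, an appeal could be captured as a record that links back to the Art. 87 algorithmic log entry, so a human reviewer can reconstruct the contested decision; the design below is an assumption, not a requirement of the initiative.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Appeal:
    """Hypothetical appeal record linking back to an Art. 87 log entry."""
    decision_log_id: str  # which logged decision is being challenged
    appellant_id: str
    grounds: str          # why the person disputes the decision
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "pending_human_review"  # routed to a human per Art. 37.IV

appeal = Appeal("log-0042", "person-9", "The income figure used was outdated.")
print(appeal.status)  # pending_human_review
```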

C. Obligation to Assume Liability for Damages (Art. 52)

This is the ultimate accountability obligation. The system operator is legally responsible for damages caused by the AI, under a regime of strict liability ("responsabilidad objetiva"): the provider must repair the damage without the victim needing to prove that the company acted with intent (dolo) or negligence. If the high-risk system caused the harm, the operator is liable.


Source: Information extracted from Articles 25, 33, 37, 39, 44, 49, 52, 60, 86, 87, and 130 of the INITIATIVE FOR THE FEDERAL LAW FOR THE ETHICAL, SOVEREIGN, AND INCLUSIVE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE.

