Limited Risk AI - Analysis of Obligations

For systems classified as Limited Risk (e.g., chatbots, recommendation systems, labeled deepfakes), the legal framework's primary focus is not on restricting use, but on mandating transparency to empower the end-user.

The obligations are categorized into two primary groups:

  1. External Obligations (User-Facing): Disclosures and controls available to the individual interacting with the system.

  2. Internal Obligations (Provider Duties): Documentation, mitigation, and compliance required of the developer or provider.

A. External Obligations (User-Facing Disclosures)

These provisions (primarily Articles 36 and 33) are designed to ensure that an individual interacting with a Limited Risk AI system is not deceived or misled.

Art. 36.I: Disclosure of AI Interaction

  • The Law States: "Include functional transparency mechanisms, which allow the user to understand that they are interacting with an AI."

  • Explanation: This is the core anti-deception provision. An AI system in this category must not impersonate a human. A clear, conspicuous notification or label must be present to inform the individual of the AI's involvement.

  • Practical Example: A customer service chat interface that opens with: "Welcome. This is a Virtual Assistant." (A minimal implementation sketch follows.)
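The sketch below illustrates one way the Art. 36.I disclosure could be implemented. It is a minimal Python example with hypothetical names (ChatSession, AI_DISCLOSURE); the law does not prescribe any particular implementation, only that the notice be clear and conspicuous.

```python
# Minimal sketch of the Art. 36.I disclosure: a hypothetical chat session
# that always opens with a conspicuous AI notice before any other message.

AI_DISCLOSURE = "Welcome. This is a Virtual Assistant. You are interacting with an AI system."

class ChatSession:
    def __init__(self):
        self.transcript = []
        # The disclosure is emitted unconditionally, before any user turn,
        # so the user is never left to guess whether a human is replying.
        self._send(AI_DISCLOSURE)

    def _send(self, text: str) -> None:
        self.transcript.append({"sender": "assistant", "text": text})

    def reply(self, user_message: str) -> None:
        self.transcript.append({"sender": "user", "text": user_message})
        self._send(f"(automated response to: {user_message})")


session = ChatSession()
session.reply("What are your opening hours?")
print(session.transcript[0]["text"])  # the AI disclosure is always the first message
```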

Art. 36.II: Access to Basic Information

  • The Law States: "Provide access to basic information about the operation, purpose, and limitations of the system."

  • Explanation: The user must have simple access (e.g., via an "About" or "More Info" hyperlink) to information answering three key questions (a structured sketch follows this list):

    1. Purpose: What was this AI built to do? (e.g., "This system provides movie recommendations").

    2. Operation: How does it generally function? (e.g., "It analyzes viewing history and compares it to similar users").

    3. Limitations: What can it not do? (e.g., "It cannot process payments" or "Recommendations may not be 100% accurate").
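One way to keep these three answers consistent across an "About" page, a help dialog, or an API response is to hold them in a single structured record. The sketch below is illustrative only; the SystemInformation dataclass and its field names are assumptions, not terminology from the law.

```python
# Illustrative sketch of the Art. 36.II "basic information" disclosure,
# modeled as a simple record that an "About" page could render.

from dataclasses import dataclass, field


@dataclass
class SystemInformation:
    purpose: str          # what the AI was built to do
    operation: str        # how it generally functions
    limitations: list[str] = field(default_factory=list)  # what it cannot do


RECOMMENDER_INFO = SystemInformation(
    purpose="This system provides movie recommendations.",
    operation="It analyzes viewing history and compares it to similar users.",
    limitations=[
        "It cannot process payments.",
        "Recommendations may not be 100% accurate.",
    ],
)
```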

Art. 36.III: The Opt-Out Mechanism

  • The Law States: "Permit the option to disconnect or not participate, when technically possible."

  • Explanation: The user must retain control over the interaction. This includes the ability to close a chat interface or to disable personalization features, typically via a settings menu (see the sketch below).
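A minimal sketch of such an opt-out control is shown below, assuming a hypothetical UserSettings object and recommend function. Opting out routes the user to a non-personalized fallback, reflecting the law's "when technically possible" qualifier.

```python
# Sketch of the Art. 36.III opt-out: a per-user preference that switches
# the system from personalized output to a non-personalized fallback.
# The class and function names are illustrative assumptions.

class UserSettings:
    def __init__(self):
        self.personalization_enabled = True  # default; the user may switch it off

    def opt_out(self) -> None:
        self.personalization_enabled = False


def recommend(settings: UserSettings, history: list[str]) -> list[str]:
    if settings.personalization_enabled:
        # Personalized path: uses the user's viewing history.
        return [f"Because you watched {title}" for title in history[:3]]
    # Opt-out path: generic, non-personalized results only.
    return ["Editor's picks", "Most popular this week"]


settings = UserSettings()
settings.opt_out()
print(recommend(settings, ["Film A", "Film B"]))  # generic results, no history used
```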

Art. 33.II & 36.IV: Warnings and Misuse Prevention

  • The Law States: "[Systems] Require basic transparency mechanisms such as user warnings..." (Art. 33) and must "Adopt reasonable measures for the prevention of misuse or misrepresentation of results" (Art. 36).

  • Explanation: The system must provide clear warnings regarding its risks and integrate safeguards against misuse.

  • Practical Example: For a labeled deepfake, the "AI-Generated" or "Synthetic Content" watermark is the warning. For a personalization system, an Art. 33 mitigation measure is a control that lets the user flag irrelevant content (e.g., "Not interested in this ad"), which helps correct the algorithm. Both measures are sketched below.
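The sketch below illustrates both measures under assumed names: a label_synthetic helper that attaches the warning label to generated media metadata, and a PersonalizationFeedback object that records "Not interested" flags and filters future items accordingly. Neither construct is defined by the law; they stand in for whatever mechanism the provider adopts.

```python
# Sketch of the Art. 33.II / 36.IV measures: (1) attaching a visible
# "Synthetic Content" warning to generated media, and (2) a "Not interested"
# feedback flag that feeds back into the selection of future items.

def label_synthetic(media: dict) -> dict:
    """Attach the user-facing warning required for labeled synthetic content."""
    media["warning_label"] = "AI-Generated / Synthetic Content"
    return media


class PersonalizationFeedback:
    def __init__(self):
        self.suppressed_topics: set[str] = set()

    def flag_not_interested(self, topic: str) -> None:
        # Art. 33-style mitigation: the user's flag suppresses the topic
        # in future recommendations, helping correct the algorithm.
        self.suppressed_topics.add(topic)

    def filter_items(self, items: list[dict]) -> list[dict]:
        return [item for item in items if item["topic"] not in self.suppressed_topics]


video = label_synthetic({"title": "Campaign ad", "format": "mp4"})
feedback = PersonalizationFeedback()
feedback.flag_not_interested("gambling")
print(video["warning_label"])  # AI-Generated / Synthetic Content
```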

B. Internal Obligations (Provider Duties & Documentation)

These are the compliance and accountability tasks imposed by law on the developer or provider. While not directly visible to the end-user, they are critical for oversight.

Art. 86: The AI Technical Documentation ("Logbook")

This article mandates that the provider maintain a comprehensive technical file for the AI system. This documentation must be kept current, stored securely, and made available for institutional review upon request.

The contents of this technical file must include the following (a structured sketch appears after the list):

  1. Purpose and Functional Scope (Sec. I): A formal document defining the AI's intended function and its operational boundaries.

  2. Algorithms and Model Versions (Sec. II): The system's technical "DNA." What type of algorithms are used? What specific model version is in production? (e.g., "Collaborative filtering recommendation model, v3.1.2").

  3. Training Data (Sec. III): Crucial documentation on the data used to "teach" the AI:

    • Sources: Where the data was obtained (e.g., "Publicly available web comments," "Anonymized customer purchase histories").

    • Nature: The type of data (e.g., "Images," "Text," "Navigational data").

    • Quality: How data quality, relevance, and integrity were assessed and ensured.

  4. Performance Assessments (Sec. IV): The results of quality assurance testing. This includes metrics on accuracy, failures, and, critically, any biases (e.g., related to gender, race, or other demographics) that were detected and measured.

  5. Audits (Sec. V): The results of any internal or external audits that have been conducted on the system's performance or compliance.

  6. Explicability (Sec. VI): Documentation of the measures taken to ensure the system is not an incomprehensible "black box."

  7. Change Log (Sec. VII): A history of all relevant updates, re-training, or significant changes to the AI's configuration or architecture.
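The seven sections can be mirrored in a simple structured record that is versioned alongside the system itself. The sketch below is an assumption about how a provider might organize the file; the TechnicalFile dataclass, its field names, and the sample values paraphrase the sections above and are not wording from Article 86.

```python
# Structured sketch of the Art. 86 technical file ("logbook"),
# with one field per section listed above.

from dataclasses import dataclass, field


@dataclass
class TechnicalFile:
    purpose_and_scope: str                      # Sec. I
    algorithms_and_model_versions: str          # Sec. II
    training_data: dict                         # Sec. III: sources, nature, quality
    performance_assessments: dict               # Sec. IV: accuracy, failures, bias metrics
    audit_results: list[str] = field(default_factory=list)   # Sec. V
    explicability_measures: str = ""            # Sec. VI
    change_log: list[dict] = field(default_factory=list)     # Sec. VII

    def record_change(self, version: str, description: str) -> None:
        """Append an entry to the Sec. VII change log."""
        self.change_log.append({"version": version, "description": description})


# Illustrative placeholder values only.
technical_file = TechnicalFile(
    purpose_and_scope="Movie recommendations for streaming subscribers.",
    algorithms_and_model_versions="Collaborative filtering recommendation model, v3.1.2",
    training_data={
        "sources": "Anonymized customer viewing histories",
        "nature": "Navigational data",
        "quality": "Deduplicated and validated before training",
    },
    performance_assessments={"accuracy": 0.91, "bias_review": "documented per demographic group"},
)
technical_file.record_change("v3.1.2", "Re-trained on new quarterly data; bias evaluation updated.")
```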


Source: Information extracted from Articles 5, 33, 36, and 86 of the INITIATIVE FOR THE FEDERAL LAW FOR THE ETHICAL, SOVEREIGN, AND INCLUSIVE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE.