Newsletters

2023-11-10
Regulatory and Compliance Trends for High-Risk AIs

by Eduardo Magrani, Senior Consultant in the TMT practice area

On 14 June 2023, the European Parliament approved its latest proposal for the Artificial Intelligence Act (AI Act or AIA), which aims to establish obligations for providers, distributors, and users of Artificial Intelligence systems. This proposal follows increasing regulatory intervention by the EU in the technology sector, warranted by the exponential development and use of AI systems.

The AI Act establishes different levels of requirements, distinguishing between AI systems whose use is prohibited, high-risk AI systems, and systems that fall into neither of these categories but may still be subject to transparency obligations.

Regarding prohibited AI practices, the AI Act prohibits the following:

i) Use of subliminal techniques, with the aim of distorting the behaviour of a person or group of people, impairing their ability to make an informed decision, and leading the person to take a decision they would not have otherwise taken, in a way that causes or is likely to cause harm to that person, another person, or a group of people.

ii) Exploitation of the vulnerabilities of a person, or a group of people, related to personality traits, economic or social situation, age, or physical or mental disabilities, in order to distort their behaviour in a way that causes or is likely to cause harm to that person, another person, or a group of people.

iii) Biometric categorisation systems that categorise individuals according to sensitive or protected attributes, or according to characteristics inferred from those attributes (with an exception for AI systems used for approved therapeutic purposes, based on the consent of the individual or their legal guardian);

iv) Social scoring systems for the evaluation or classification of individuals based on their social behaviour or on known, inferred, or predicted characteristics of the person or their personality, where this results in detrimental treatment of individuals or groups that is unrelated to the contexts in which the data were generated or collected, or that is unjustified or disproportionate to the gravity of their behaviour;

v) Use of real-time biometric identification systems in publicly accessible spaces, together with related practices: systems that assess the risk of a person committing an offence based on profiling; systems that create or expand facial recognition databases; systems that infer emotions in the areas of law enforcement, border control, and labour or educational institutions; and systems that analyse recorded footage of publicly accessible spaces.

The AI Act classifies AI systems as high-risk in the following situations:

(i) Systems used as safety components of a product for which a third-party conformity assessment, covering health and safety risks, is required before the product is placed on the market;

(ii) AI systems falling within the categories set out in Annex III:

  • Biometric or biometric-based systems;
  • Systems used to make inferences based on biometric data, including emotion recognition systems;
  • Safety components related to land, rail, or air traffic, critical digital infrastructures, or the supply of water, gas, heat, or electricity;
  • Access to education or employment;
  • Access to public or private services, and social benefits;
  • Access to health and life insurance;
  • Systems that prioritise the response of rescue services to emergencies;
  • Social scoring systems;
  • Use by public authorities of AI systems as polygraphs, to assess the reliability of evidence, or for profiling, whether in the course of an investigation or in crime statistics;
  • Systems for border control, and in the areas of migration and asylum;
  • Assistance in judicial decisions;
  • Systems for the purpose of influencing the outcome of elections or referendums;
  • Systems used for recommendations on very large social media platforms.

In any of these categories, the classification as a high-risk system also depends on the existence of a significant risk of harm to the health, safety, or fundamental rights of individuals.

The AI Act reinforces the obligations of providers of high-risk AI systems, including, in particular, the following:

  • Implementation of a risk management system;
  • Assessment of the impact on fundamental rights;
  • Transparency of the system;
  • Human oversight;
  • Implementation of a quality management system;
  • Maintenance of automatically generated records;
  • Registration of the system in the EU database;
  • Affixation of the CE marking on the system.

In particular, with regard to transparency obligations, the following requirements are stipulated for high-risk systems:

  • Development of instructions for use: users must be able to interpret and explain the system's output, knowing how it works and what data it processes;
  • Information about the identity of the provider, the characteristics and limitations of the system, and the risks to health, safety, and fundamental rights;
  • Information on human oversight, maintenance, and support of the system.

When a system is not considered high-risk, certain transparency obligations may still apply. Their extent, however, varies depending on the purpose of the AI system.

For AI systems designed to interact with people, the AI Act only requires that users be informed that they are interacting with an AI system. Where relevant, however, information must also be provided about the functions that use AI, the human oversight mechanism, who is responsible for the decision-making process, and the existing rights and procedures for objecting to the application of such systems.

For AI systems that generate or manipulate images, audio, or video that appear authentic (deep fakes), featuring people who appear to perform actions they have not actually performed, users must be informed that the content has been artificially generated or manipulated and, where possible, of the name of the person who generated or manipulated it (with exceptions for uses authorised by law and for the exercise of freedom of expression or of the arts and sciences).

For emotion recognition systems and biometric categorisation systems, users must be informed that they are interacting with an AI system and how it operates. Consent is also required for the processing of biometric data.

The latest proposal for the AI Act also imposes specific obligations on providers of foundation models, namely to:

  • Demonstrate the mitigation of the system's risks to health, safety, fundamental rights, environment, and democracy;
  • Implement measures to ensure the adequacy of datasets and to avoid bias;
  • Promote cybersecurity;
  • Increase energy efficiency;
  • Develop user instructions;
  • Develop a quality management system;
  • Register the foundation model in the EU database.

Knowledge and implementation of the obligations set out in the AI Act, by providers and distributors alike, is essential, given the growing importance the Act will acquire as Artificial Intelligence systems continue to develop.

It is worth emphasising that, as we have seen with several other EU laws, the AI Act may inspire AI legislation in many countries, becoming a global standard for the compliance of AI systems.