To simplify AI regulation, use the GDPR’s high-risk criteria

There is a multitude of contexts in which we would expect an AI application to be regulated to protect users. The European Commission proposes a single AI regulation and a binary test for applying it: if the AI application is high-risk, then specific AI rules apply; if not, they do not. The Commission proposes a definition of AI and two cumulative criteria for determining whether an application is high-risk. This two-criteria test gives the appearance of legal certainty, but it is likely to create the opposite, for two reasons.

First, the two cumulative criteria proposed by the Commission will inevitably be incomplete, leaving some applications out. That is the tradeoff of simple rules – they miss the mark in a small but significant number of cases. To work properly, simple rules must be supplemented by a general catch-all category for other high-risk applications that would not qualify under the two-criteria test. But adding a catch-all test (which would be necessary in our view) would largely defeat the goal of legal certainty.

Second, the “high-risk” criterion will interfere with other legal concepts and thresholds that already apply to AI applications. AI is just another form of software, and software-enabled applications are already highly regulated, typically at two levels. The first level consists of horizontal rules on fundamental rights, liability, property, data protection, cybersecurity and consumer protection that apply regardless of the specific application. The second level consists of vertical rules applicable to the particular industry, such as safety standards for cars or medical devices, or rules for bank algorithms. Each of these bodies of regulation comes with its own risk thresholds: the GDPR, a horizontal regulation, has rules on high-risk processing, requiring a data protection impact assessment; the NIS Directive has its own classification of “essential” services; the banking code, a vertical regulation, has special rules on software for “essential” banking functions; and the medical device regulation, another vertical regulation, has classification levels based on risk. Adding a new AI regulation with a separate risk-threshold system would almost certainly interfere with this already complex matrix of horizontal and vertical regulations, some of which are being updated to cover specific AI risks. To cite just one example, an AI application might not be considered “high-risk” under the criteria proposed by the Commission, yet be considered “high-risk” under the GDPR, an outcome that seems inconsistent and unacceptable[1]. In addition, certain vertical and horizontal regulations already contain provisions targeting AI-specific harms and remedies, such as transparency, explainability, audits and record-keeping, which would potentially overlap with a new AI regulation applicable to high-risk AI.

A better approach would be to identify gaps in existing horizontal and vertical regulations, which the Commission is already doing, and update those regulations as necessary to fill the gaps. To address the other risks identified by the Commission, we suggest updating the GDPR’s criteria for “high-risk” processing to make sure they cover any additional risks raised by the Commission. Any AI application that qualifies as high-risk under the general GDPR criteria would require a fundamental rights impact assessment that addresses not only privacy risks but also other risks to fundamental rights, such as discrimination, human dignity, the right to an effective remedy, or freedom to access information. As part of the fundamental rights impact assessment, the operator of the AI system would then be required to propose mitigation measures to reduce each identified risk to an acceptable level. The adequacy of these measures would be evaluated by a regulator based on best practices. This is roughly the approach adopted by the State of Washington in connection with facial recognition algorithms, and it is consistent with other risk-based regulatory approaches in Europe. As under the GDPR, the operator of the algorithm would have the responsibility of determining, on the basis of general criteria, whether an application creates a “high risk” for certain individual rights. Risks to public health and safety, and specific industry risks (such as systemic risks for the banking system), would continue to be addressed by vertical regulations. This approach would ensure that every application considered “high-risk” under the GDPR undergoes a broad fundamental rights (and not just data protection) impact assessment. It would also ensure that existing regulatory approaches, for example in cybersecurity or platform regulation, are not confused by an additional layer of AI-specific regulation.

The decision tree would be as follows (an illustrative sketch follows the list):

  1. Define high risk using the GDPR’s criteria on high-risk processing, enhanced to take account of other potential fundamental rights impacts, such as discrimination.
  2. If high risk, conduct a fundamental rights impact assessment to identify and measure risks and propose mitigation measures. Evaluate the sufficiency of mitigation measures already required by law and, where there are gaps, propose supplemental measures based on best practices and regulatory guidance. For health, safety, cybersecurity and vertical industry risks, refer to existing regulations, updated as necessary to incorporate specific AI risks.
  3. If not high risk, no fundamental rights impact assessment is necessary, and no mitigation measures are required other than those imposed by existing law (GDPR, consumer protection, cybersecurity, etc.).
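
To make the three steps concrete, here is a minimal, purely illustrative sketch in Python of how the branching would work. The AIApplication fields, the indicators used in is_high_risk, and the “credit scoring” example are hypothetical placeholders chosen for illustration; they are not the Commission’s or the GDPR’s actual legal criteria.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AIApplication:
    """Illustrative placeholder for an AI application under assessment."""
    name: str
    large_scale_profiling: bool = False       # assumed GDPR-style high-risk indicator
    affects_fundamental_rights: bool = False  # e.g. discrimination, dignity, access to information
    sector: str = "general"                   # e.g. "banking", "medical", "general"


def is_high_risk(app: AIApplication) -> bool:
    """Step 1: GDPR-style high-risk test, broadened to fundamental rights impacts.
    The indicators used here are illustrative, not the legal criteria."""
    return app.large_scale_profiling or app.affects_fundamental_rights


def required_steps(app: AIApplication) -> List[str]:
    """Steps 2 and 3: obligations that follow from the high-risk determination."""
    steps: List[str] = []
    if is_high_risk(app):
        steps.append("conduct a fundamental rights impact assessment")
        steps.append("evaluate sufficiency of mitigation measures already required by law")
        steps.append("propose supplemental measures where gaps remain")
    else:
        steps.append("comply with existing law only (GDPR, consumer protection, cybersecurity, ...)")
    if app.sector != "general":
        # Health, safety and industry-specific risks stay with vertical regulations.
        steps.append(f"apply existing vertical {app.sector} regulation, updated for AI-specific risks")
    return steps


if __name__ == "__main__":
    example = AIApplication("credit scoring", large_scale_profiling=True, sector="banking")
    for step in required_steps(example):
        print("-", step)
```

The point of the sketch is simply that the high-risk determination is a single gate: everything downstream (the fundamental rights impact assessment, mitigation measures, and the continued application of vertical rules) hangs on step 1.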

One area where a specific AI regulation (and regulator) may be useful is in keeping track of the broader social risks associated with AI and ringing alarm bells when regulatory intervention appears necessary. AI can have a cumulative adverse effect on the broader social ecosystem or democratic commons[2]. These broader effects might escape the attention of sector regulators and data protection authorities, and may not be taken into account in the impact assessment process proposed above. A regulator or multi-stakeholder body whose job is to define, observe and measure possible harms to broader social values and institutions could serve as an early warning system, a form of canary in the AI coal mine. Developing KPIs for measuring societal harms from AI is itself a daunting and important exercise. As technology profoundly transforms society, there will be a combination of positive and negative effects, some of which will be hard to define, let alone measure. If regulation is to hit its mark, the precise harms it is intended to address must be defined and measured, so that the regulatory cure is adapted to the disease and its effectiveness can be monitored.

 

__________________________________________________

By Winston Maxwell, Télécom Paris, Institut Polytechnique de Paris

__________________________________________________

[1] Under the GDPR, processing is “high risk” when it creates a high risk to the rights and freedoms of individuals.

[2] Karen Yeung, “Algorithmic Regulation: A Critical Interrogation”, King’s College London Dickson Poon School of Law, Legal Studies Research Paper Series, Paper No. 2017-27, p. 30.