Algorithmic explainability

 

First report of the Operational AI Ethics initiative on the explainability and liability of algorithms


“Everyone would like algorithms to be explainable, especially in the most critical areas such as health or air transport. There is a real consensus on this point.”
Winston Maxwell, director of law and technology studies at Télécom Paris


This diagram presents a framework for defining the “right” level of explainability based on technical, legal and economic considerations.


The approach involves three logical steps:

– Define the main contextual factors, such as the recipient of the explanation, the operational context, the level of harm the system could cause, and the legal and regulatory framework. This step helps characterize the operational and legal needs for explanation, and the corresponding social benefits.

– Examine the technical tools available, including post-hoc approaches (input perturbation, saliency maps…) and hybrid AI approaches.

– Based on the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved.
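Input perturbation, one of the post-hoc tools mentioned in the second step, can be sketched in a few lines. The “black box” below is a made-up toy scoring function, not any model from the report: each feature is nudged in turn, and the resulting change in the output is recorded as that feature's saliency.

```python
# Minimal sketch of post-hoc explanation by input perturbation.
# black_box is a hypothetical toy model; in practice it would be a
# trained model that we can only query, not inspect.

def black_box(x):
    income, age, zip_digit = x
    return 0.8 * income + 0.1 * age + 0.0 * zip_digit

def perturbation_saliency(model, x, eps=1.0):
    """Score each feature by how much nudging it changes the output."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps
        scores.append(abs(model(perturbed) - base))
    return scores

applicant = [50.0, 30.0, 7.0]  # income, age, zip digit
print([round(s, 6) for s in perturbation_saliency(black_box, applicant)])
# -> [0.8, 0.1, 0.0]: income dominates, the zip digit is irrelevant
```

The same query-and-compare idea, applied pixel by pixel to an image classifier, is what produces a saliency map.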


There are operational reasons for explainability, including the need to make algorithms more robust.

There are also ethical and legal reasons for explainability, including protection of individual rights.

It’s important to keep these two sets of reasons separate.

Explainability takes raw information and makes it understandable to humans.

Explainability is a value-added component of transparency.

Both explainability and transparency enable other important functions, such as traceability, auditability, and accountability.

Global explanations give an overview of the whole algorithm.


Local explanations provide precise information on a given algorithmic decision.
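The distinction can be made concrete with a deliberately simple, hypothetical linear scoring model (the weights are invented for illustration): a global explanation describes the model as a whole, while a local explanation accounts for one specific decision.

```python
# Hypothetical linear scoring model; the weights are invented for illustration.
WEIGHTS = {"income": 0.6, "debt": -0.3, "seniority": 0.1}

def score(applicant):
    return sum(WEIGHTS[f] * v for f, v in applicant.items())

def global_explanation():
    """Global: an overview of the whole model -- its weights, by magnitude."""
    return dict(sorted(WEIGHTS.items(), key=lambda kv: -abs(kv[1])))

def local_explanation(applicant):
    """Local: each feature's actual contribution to this one decision."""
    return {f: WEIGHTS[f] * v for f, v in applicant.items()}

applicant = {"income": 40.0, "debt": 20.0, "seniority": 5.0}
print(global_explanation())          # {'income': 0.6, 'debt': -0.3, 'seniority': 0.1}
print(local_explanation(applicant))  # {'income': 24.0, 'debt': -6.0, 'seniority': 0.5}
```

For a deep network neither view is directly readable off the parameters, which is why the dedicated techniques below are needed.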


Explanations may be needed regarding the learning algorithm, i.e. the training process, including information on the training data used.

Explanations relating to the learned algorithm, i.e. the trained model, will generally focus on a particular algorithmic decision (local explanations).



Post-hoc approaches try to imitate the functioning of the black-box model.
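One common post-hoc strategy, in the spirit of methods such as LIME, is to fit a simple interpretable surrogate to the black box's behaviour around a point of interest. The sketch below is an illustration under assumed toy parameters, not a real implementation: it queries an opaque function near a point x0 and fits a local linear surrogate by ordinary least squares.

```python
import random

def black_box(x):
    return x * x  # stands in for an opaque model we can only query

def local_linear_surrogate(model, x0, radius=0.1, n=200, seed=0):
    """Fit y = a*x + b to the model's answers on queries sampled near x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n)]
    ys = [model(x) for x in xs]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

a, b = local_linear_surrogate(black_box, x0=3.0)
print(round(a, 1))  # close to 6.0, the local slope of x*x at x = 3
```

The surrogate is faithful only near x0; that locality is exactly why such approaches yield local rather than global explanations.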

Hybrid approaches try to put explainability in the model itself.


A hybrid approach may teach the algorithm to look at the right area of the image, based on domain expertise, in this case radiology.


Hybrid AI approaches can focus on the inputs, on the network itself, or on the outputs.


Finding the right level of explainability requires weighing the benefits and costs of an explanation in a given situation. For an explanation to be socially useful, its total benefits should exceed its total costs.

 

Other publications

  • Valérie Beaudouin, Isabelle Bloch, David Bounie, Stéphan Clémençon, Florence d’Alché-Buc, et al. Identifying the “Right” Level of Explanation in a Given Situation. 2020. ⟨hal-02507316⟩
  • Valérie Beaudouin, Isabelle Bloch, David Bounie, Stéphan Clémençon, Florence d’Alché-Buc, et al. Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach. 2020. ⟨hal-02506409⟩