Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach

Télécom Paris’ Operational AI Ethics initiative has just published its first report, « Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach ».

« We try more and more to integrate explainability criteria into machine learning itself, so that the machine learns to reason in a way that is explainable to humans. »

– Florence d’Alché-Buc, Researcher in Computer Science and Applied Mathematics, Holder of the Télécom Paris « Data Science and Artificial Intelligence for Digitalised Industry and Services » Chair
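
To make this idea concrete, here is a minimal sketch, not taken from the report, of one way to build an explainability criterion into learning itself: an L1 penalty drives most model coefficients to zero, so the fitted model reduces to a short list of named features that a human can read. It assumes Python with scikit-learn; sparse logistic regression is a standard textbook example, not the Chair's specific method.

    # Illustrative sketch of "explainability by design" (not from the report):
    # an L1 penalty built into training keeps the model sparse and readable.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression

    data = load_breast_cancer()
    X, y = data.data, data.target

    # The L1 regularisation term acts as the explainability criterion
    # integrated into learning: it zeroes out most feature weights.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X, y)

    # The surviving non-zero weights form a human-readable explanation.
    for name, weight in zip(data.feature_names, model.coef_[0]):
        if weight != 0:
            print(f"{name}: {weight:+.3f}")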

On 19 February 2020, the European Commission published its White Paper « Artificial Intelligence: A European approach to excellence and trust », presenting its vision of human-centred, trustworthy artificial intelligence. For AI to be trustworthy, emerging algorithms need to be both transparent and explainable. Explainability, however, covers a multitude of scenarios. To contribute to the discussion around this white paper, Télécom Paris has just published its report « Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach », which identifies the different explainability scenarios by taking into account both technical approaches and contextual elements.

To produce this report on the explainability and accountability of algorithms, Télécom Paris researchers exchanged views with a range of stakeholders (political, academic, industrial) in a multidisciplinary approach (mathematics, computer science, social sciences, etc.), in order to synthesise the state of the art in the field and to outline scientific recommendations for improving the explainability of algorithms.

They identify four contextual criteria that guide the choice of the type of explainability (a simple illustrative sketch follows the list):

  • The recipient of the explanation, i.e. the audience it targets. The appropriate level of explanation differs depending on whether the recipient is, for example, a user or a regulator.
  • The level of importance and impact of the algorithm. Explaining an accident involving an autonomous vehicle does not carry the same importance as explaining an advertising or video recommendation algorithm.
  • The legal and regulatory framework, which differs across geographical areas, as in Europe with the General Data Protection Regulation (GDPR).
  • The operational environment of explainability, such as whether it is mandatory for certain critical applications, the need for certification before deployment, or the need to keep the system simple for its users.
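
As a purely illustrative sketch, in Python and with hypothetical names and categories not drawn from the report, these four criteria could be combined into a simple decision rule for selecting the form an explanation should take:

    # Hypothetical mapping from the four contextual criteria to a form of
    # explanation. Categories and return values are invented for illustration.
    def choose_explanation(recipient: str, impact: str, regulated: bool,
                           certification_required: bool) -> str:
        if certification_required or impact == "high":
            # Safety-critical uses (e.g. autonomous vehicles) call for
            # detailed, auditable explanations.
            return "full audit trail with global model documentation"
        if regulated and recipient == "regulator":
            return "formal report aligned with the applicable framework (e.g. GDPR)"
        if recipient == "user":
            # Low-stakes uses (e.g. recommendations) can settle for lighter forms.
            return "short, plain-language rationale for the individual decision"
        return "summary statistics and example-based explanations"

    print(choose_explanation("user", "low", regulated=True,
                             certification_required=False))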

It is essential to take these four contextual factors into account because, for industries, explainability is above all driven by operational requirements, which differ significantly from legal considerations.
Winston Maxwell, Director, Law and Technology Studies

Furthermore, setting the level of explainability requires a cost-benefit analysis that takes into account the cost of data storage, potential interference with professional secrecy, and the right to the protection of personal data. This is why an explanation is not always required, especially for applications with little impact on the public.
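
A back-of-the-envelope version of that cost-benefit test, with all names and figures invented for illustration, might look as follows: an explanation is produced only when its expected benefit to the public outweighs the costs listed above.

    # Hypothetical cost-benefit check; all parameters are illustrative.
    def explanation_warranted(public_impact: float,
                              storage_cost: float,
                              secrecy_risk: float,
                              privacy_risk: float) -> bool:
        # Produce an explanation only if its benefit covers the combined costs.
        total_cost = storage_cost + secrecy_risk + privacy_risk
        return public_impact >= total_cost

    # A low-impact application (e.g. a video recommendation) may not justify
    # the cost of a full explanation pipeline.
    print(explanation_warranted(public_impact=0.2, storage_cost=0.3,
                                secrecy_risk=0.1, privacy_risk=0.1))  # False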

This new report is the fruit of interdisciplinary work between eight Télécom Paris teacher-researchers from six academic branches of the school: applied mathematics, statistics, computer science, economics, law and sociology. It is part of Télécom Paris’ new « Operational AI Ethics » initiative, which aims to address the ethical issues of AI from an operational and interdisciplinary perspective, establishing Télécom Paris as a major player in this field.

Our research work has been conducted in a multidisciplinary spirit, bringing together data science, applied mathematics, computer science, economics, statistics, law and sociology, in order to reflect in depth on the definition of, techniques for, and needs around explainability, which fit into the broader notions of transparency and accountability.
David Bounie, Full Professor, specialist in digital finance, Head of the Department of Economics and Social Sciences

Following this publication, the Operational AI Ethics initiative has published a second paper, « Identifying the « right » level of explanation in a given situation », which summarises the first and provides further clarification.