Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach
23 April 2020
"We are trying more and more to integrate explainability criteria into machine learning so that the machine learns to reason in a way that is explainable to humans."
– Florence d'Alché-Buc, Researcher in Computer Science and Applied Mathematics, Holder of the Télécom Paris "Data Science and Artificial Intelligence for Digitalised Industry and Services" Chair
On 19 February 2020, the European Commission published its White Paper "On Artificial Intelligence: A European approach to excellence and trust", presenting a human-centred, trustworthy artificial intelligence. For AI to be reliable, emerging algorithms need to be both transparent and explainable. Explainability, however, actually covers a multitude of scenarios. To contribute to the discussion around this white paper, Télécom Paris has just published its report "Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach", which identifies the different explainability scenarios by taking into account both technical approaches and contextual elements.
To produce this report on the challenges of algorithm explainability and accountability, Télécom Paris researchers exchanged views with various stakeholders (political, academic, industrial) in a multidisciplinary approach (mathematics, computer science, social sciences, etc.), in order to synthesize the state of the art in this field and to put forward scientific recommendations for improving the explainability of algorithms.
They identify four contextual criteria that guide the choice of the type of explainability:
- The recipient of the explanation, i.e. the audience it targets. The level of explanation will differ depending on whether it is addressed to a user or a regulator, for example.
- The level of importance and impact of the algorithm. Explaining an autonomous vehicle accident does not have the same level of importance as explaining an advertising or video recommendation algorithm.
- The legal and regulatory framework, which differs across geographical areas, as in Europe with the General Data Protection Regulation (GDPR).
- The operational environment of the explanation, such as whether it is mandatory for certain critical applications, whether certification is required before deployment, or whether it makes the system easier for users to use.
Furthermore, the level of explainability requires a cost-benefit analysis, taking into account the cost of database storage, potential interference with professional secrecy, and the right to protection of personal data. This is why an explanation is not always required, especially for an application that has little impact on the public.
This new report is the result of interdisciplinary work by eight Télécom Paris teacher-researchers from six academic branches of the school: applied mathematics, statistics, computer science, economics, law and sociology. It is part of Télécom Paris's new "Operational AI Ethics" initiative, which aims to address AI ethics issues from an operational and interdisciplinary perspective and positions Télécom Paris as a major actor in this field.