Newsroom

Clarifying the moral foundation of explainable AI

XAI (Explainable Artificial Intelligence) aims to ‘unpack’ the inner workings of AI systems, and this characteristic has been argued to increase the trustworthiness of AI by helping with regulatory audits, identifying errors, and informing users about the systems’ outputs.
By Joshua Brand, PhD researcher at Télécom Paris, member of the Operational AI Ethics team.

XAI, however, also has moral worth as an instrumental means to preserve meaningful human control over AI. It does so by permitting humans to justify a course of action in morally important situations. By allowing justification, XAI helps enable responsibility, which in turn conveys meaningful human control. In short, XAI is a necessary input to meaningful human control.