The data ethics challenges of explainable AI and their knowledge-based solutions
Date: 2020-05
Author: d'Aquin, Mathieu
Recommended Citation
d'Aquin, Mathieu. (2020). The data ethics challenges of explainable AI and their knowledge-based solutions. In I. Tiddi, F. Lécué, & P. Hitzler (Eds.), Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications and Challenges. Amsterdam: IOS Press.
Abstract
Explainable AI has recently gained momentum as an approach to overcoming some of the more obvious ethical implications of the increasingly widespread application of AI (mostly machine learning). It is, however, not always evident whether providing explanations actually succeeds in overcoming those ethical issues, or rather creates a false sense of control and transparency. This and other possible misuses of Explainable AI lead to the need to consider the possibility that providing explanations might itself represent a risk with respect to ethical implications at several levels. In this chapter, we explore through a series of scenarios how explanations might, in certain circumstances, negatively affect specific ethical values, from human agency to fairness. Through those scenarios, we discuss the need to consider ethical implications in the design and deployment of Explainable AI systems, focusing on how knowledge-based approaches can offer elements of solutions to the issues raised. We conclude with the requirements for ethical explanations, and with how hybrid systems, combining machine learning with background knowledge, offer a way towards meeting those requirements.