https://www.youtube.com/watch?v=bYtyz7tGNyQ

FAccT 2022: Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

Sebastian Bordt, Michèle Finck, Eric Raidl, Ulrike von Luxburg: Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts. In Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022.

Existing and planned legislation stipulates various obligations to provide information about machine learning algorithms and their functioning, often interpreted as obligations to “explain”. Many researchers suggest using post-hoc explanation algorithms for this purpose. In this paper, we combine legal, philosophical and technical arguments to show that post-hoc explanation algorithms are unsuitable to achieve the law’s objectives. Indeed, most situations where explanations are requested are adversarial, meaning that the explanation provider and receiver have opposing interests and incentives, so that the provider might manipulate the explanation for her own ends. We show that this fundamental conflict cannot be resolved because of the high degree of ambiguity of post-hoc explanations in realistic application scenarios. As a consequence, post-hoc explanation algorithms are unsuitable to achieve the transparency objectives inherent to the legal norms. Instead, there is a need to more explicitly discuss the objectives underlying “explainability” obligations as these can often be better achieved through other mechanisms. There is an urgent need for a more open and honest discussion regarding the potential and limitations of post-hoc explanations in adversarial contexts, in particular in light of the current negotiations of the European Union’s draft Artificial Intelligence Act.

Source: the “Tübingen Machine Learning” YouTube channel.