How Does Predictive Uncertainty Quantification Correlate with the Plausibility of Counterfactual Explanations
Abstract
Counterfactual explanations aim to explain the predictions of opaque machine learning models and can be applied to algorithmic recourse, which is concerned with helping individuals in the real world overturn undesirable algorithmic decisions. However, not all generated counterfactuals are equally faithful to the model, nor equally plausible. Predictive uncertainty quantification, in turn, measures the degree of certainty a model has in its predictions. Prior work has shown that predictive uncertainty can be used to generate more plausible counterfactual explanations. This work investigates that relationship further by using multiple models that natively support uncertainty quantification and comparing the counterfactual explanations they produce with those produced by their ordinary counterparts. We find that predictive uncertainty tends to enhance the plausibility of counterfactuals on visual datasets, and that predictive uncertainty correlates positively with plausibility. This correlation has important implications for both research and real-world applications, as it suggests that integrating uncertainty quantification into model development can improve the quality and trustworthiness of algorithmic explanations.
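To illustrate how predictive uncertainty can enter counterfactual generation, the following is a minimal, hypothetical sketch in PyTorch. It is not the paper's method: the function name `counterfactual`, the use of predictive entropy as the uncertainty proxy, and the weighting terms `lambda_dist` and `lambda_unc` are all assumptions made for illustration. The idea is simply that the search objective combines a term pushing the prediction towards the desired class, a proximity term, and an uncertainty penalty so the counterfactual lands in a region where the model is confident.

```python
# Hypothetical sketch of uncertainty-aware counterfactual search.
# Assumptions (not from the paper): gradient-based search, predictive
# entropy as the uncertainty measure, and the loss weights below.
import torch
import torch.nn.functional as F


def counterfactual(model, x, target_class, steps=500, lr=0.05,
                   lambda_dist=0.1, lambda_unc=1.0):
    """Search for a counterfactual x_cf of x that the model classifies
    as target_class while staying close to x and keeping predictive
    uncertainty (entropy) low."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])

    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf)
        probs = F.softmax(logits, dim=-1)

        # 1) push the prediction towards the desired (counterfactual) class
        class_loss = F.cross_entropy(logits, target)
        # 2) stay close to the original input (proximity term)
        dist_loss = torch.norm(x_cf - x, p=1)
        # 3) penalise predictive entropy so the counterfactual falls in a
        #    region where the model is confident (low uncertainty)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()

        loss = class_loss + lambda_dist * dist_loss + lambda_unc * entropy
        loss.backward()
        optimizer.step()

    return x_cf.detach()


# Example usage with a toy classifier (assumed setup):
# model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(),
#                             torch.nn.Linear(8, 2))
# x = torch.randn(1, 4)
# x_cf = counterfactual(model, x, target_class=1)
```

Under this framing, increasing `lambda_unc` trades proximity to the original input for lower predictive uncertainty, which is the kind of trade-off whose effect on plausibility the abstract describes.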