Interpretability and performance of surrogate decision trees produced by Viper
Abstract
Machine learning models are being used extensively in many high-impact scenarios. Many of these models are ‘black boxes’ that are almost impossible to interpret, and this lack of interpretability has limited their successful deployment. One approach to increasing interpretability is to use imitation learning to extract a more interpretable surrogate model from a black-box model. Our aim is to evaluate Viper, an imitation learning algorithm, in terms of performance and interpretability. To achieve this, we evaluate surrogate decision tree models produced by Viper on three different environments and attempt to interpret these models. We find that Viper generally produces high-performing, interpretable decision trees, and that both performance and interpretability depend strongly on context and oracle quality. We compare Viper's performance to that of similar imitation learning approaches and find that it performs as well as or better than them, though our comparison is limited by differences in oracle quality.
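To illustrate the extraction approach the abstract refers to, below is a minimal sketch of a Viper-style imitation learning loop (in the spirit of Bastani et al., 2018): a DAgger-like procedure that queries a black-box oracle for action labels, weights states by the Q-value gap between the best and worst action, and fits a decision tree on the aggregated dataset. The `env`, `oracle_action`, and `oracle_q` arguments are assumptions standing in for a gym-like environment and a black-box policy; this is not the exact implementation evaluated in the thesis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def viper_sketch(env, oracle_action, oracle_q,
                 n_iters=10, n_rollouts=25, max_depth=8):
    """Sketch of a Viper-style extraction loop (assumed interfaces).

    env              -- gym-like environment with reset()/step(action)
    oracle_action(s) -- black-box oracle's action for state s
    oracle_q(s)      -- oracle Q-values over actions for state s
    """
    states, actions, weights = [], [], []
    policy = None  # first iteration rolls out the oracle itself

    for _ in range(n_iters):
        for _ in range(n_rollouts):
            state, done = env.reset(), False
            while not done:
                # Label every visited state with the oracle's action.
                states.append(state)
                actions.append(oracle_action(state))
                # Weight states by how costly a wrong action is:
                # the gap between the best and worst Q-value there.
                q = oracle_q(state)
                weights.append(float(np.max(q) - np.min(q)))
                # Roll out with the current student tree
                # (or the oracle on the very first pass).
                act = oracle_action(state) if policy is None \
                      else int(policy.predict([state])[0])
                state, _, done, _ = env.step(act)
        # Fit a fresh decision tree on the aggregated, weighted data.
        policy = DecisionTreeClassifier(max_depth=max_depth)
        policy.fit(np.array(states), np.array(actions),
                   sample_weight=np.array(weights))

    # Viper keeps the tree with the best validation reward;
    # for brevity this sketch returns the final one.
    return policy
```

The resulting tree is the interpretable surrogate: its split conditions on state features can be read directly, which is what makes the performance/interpretability trade-off studied here possible to examine.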