Over the last decade, we have witnessed the widespread adoption of “black box” Machine Learning algorithms: algorithms with excellent predictive performance, but whose outcomes are hard for a human agent to understand. However, there are situations in which it is important to understand why a certain output is produced, and the field of explainability in Machine Learning has flourished in recent years. In this work, we will go through some of these techniques. We will focus on the model-agnostic visualisation techniques introduced by Friedman (2001) and further developed by Goldstein et al. (2015). Starting from the Partial Dependence plotting technique, we then analyse the Individual Conditional Expectation (ICE) plot and its variants. Among them, we propose the so-called “d-log-ICE” and try to identify scenarios in which this technique brings better interpretability. We test our techniques on two models: the first is based on the Boston Housing Dataset, and the second is an internal model at ABN Amro called “FLAG”.
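For orientation, the two central objects can be stated compactly. This is only a sketch using the standard definitions, with notation introduced here rather than taken from the text above: $\hat{f}$ denotes the fitted model, $x_S$ the feature subset of interest, $x_C$ its complement, and $N$ the sample size. Following Friedman (2001), the partial dependence function averages the model over the observed complement values,

\[
  \hat{f}_S(x_S) \;=\; \frac{1}{N} \sum_{i=1}^{N} \hat{f}\bigl(x_S,\, x_C^{(i)}\bigr),
\]

while, following Goldstein et al. (2015), an ICE plot draws one curve per observation instead of averaging:

\[
  \hat{f}_S^{(i)}(x_S) \;=\; \hat{f}\bigl(x_S,\, x_C^{(i)}\bigr), \qquad i = 1, \dots, N.
\]

The ICE curves thus disaggregate the partial dependence average, exposing heterogeneous feature effects that the mean curve alone would hide.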