A relatively novel approach to public participation is Participatory Value Evaluation (PVE), in which a policymaker's dilemma is presented to citizens (Mouter, Shortall, et al., 2021). In a PVE, citizens face a realistic choice task in which the policy dilemma is explained to the participant (Hernandez et al., 2023). Participants have to divide a fixed budget across several options.
Currently, there are three main methods for analysing PVE datasets: descriptive statistics, Latent Class Cluster Analysis (LCCA) and choice modelling. A tool that can identify relations between participant features and participant preferences could provide additional insights to policymakers and researchers. Applied to PVEs, machine learning could be used to predict choice task outcomes from the features of the participant. However, machine learning algorithms are often opaque: it is hard for a human to understand how the results were produced, which makes them difficult to interpret.
The field of Explainable AI has arisen in response to this issue: explainable AI methods aim to give insight into the inner workings of a machine learning algorithm. One such method, which we investigate further, is SHAP (SHapley Additive exPlanations). This thesis focuses on the application of SHAP to PVE analysis, guided by the following research question: "What additional insights does machine learning with SHAP provide for quantitative PVE analysis compared to conventional methods?"
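For intuition, SHAP attributes a model's prediction for one participant to that participant's features by computing Shapley values: each feature's average marginal contribution over all orderings in which features are "revealed". The following is a minimal, self-contained sketch of this idea; the prediction function and feature names are purely illustrative (not the NP RES model), and in practice one would use the optimized estimators in the `shap` library rather than this brute-force enumeration:

```python
from itertools import permutations

# Illustrative stand-in for a trained model's prediction, e.g. the budget
# share a participant allocates to an option given three (hypothetical)
# participant features. Includes an interaction term so attributions are
# not trivially the additive coefficients.
def predict(age, income, urban):
    return 10 + 2 * age + 3 * income + 1 * urban + 0.5 * age * urban

BASELINE = {"age": 0, "income": 0, "urban": 0}  # reference participant

def shapley_values(predict_fn, instance, baseline):
    """Exact Shapley values by averaging each feature's marginal
    contribution over all feature orderings (feasible only for a
    handful of features; SHAP libraries approximate this)."""
    features = list(instance)
    values = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        present = dict(baseline)        # start from the baseline point
        prev = predict_fn(**present)
        for f in order:
            present[f] = instance[f]    # reveal one feature at a time
            cur = predict_fn(**present)
            values[f] += cur - prev     # marginal contribution of f
            prev = cur
    return {f: v / len(orders) for f, v in values.items()}

phi = shapley_values(predict, {"age": 2, "income": 1, "urban": 1}, BASELINE)
```

By construction the attributions sum to the difference between the prediction for this participant and the baseline prediction, which is the additivity property that makes SHAP output directly interpretable per individual.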
To answer the research question, a case study is performed by applying the SHAP method to the PVE of the National Programme Regional Energy Strategy (NP RES). A Random Forest machine learning model provided the best fit to the dataset. When the results of the SHAP analysis of this Random Forest model are compared to the results of the LCCA, SHAP provides more insights (26 versus 11) and is able to reveal patterns at a smaller scale than LCCA. Moreover, many of the insights from the SHAP analysis do not appear in the LCCA results.
Overall, it can be concluded that applying SHAP yields new insights that were not found with the other methods used on the NP RES PVE case. This study has shown that SHAP can be a relevant tool for gathering insights about PVE data and the differences among participants: it can capture the individual effects of demographic variables on the choices participants make, and can therefore lead to more refined policy advice to governments.
PVE experiments are still in their infancy. The ability of SHAP to deliver additional insights into the PVE experiment in this thesis is an incentive to use SHAP in further PVE experiments, including academic PVEs. SHAP predominantly provides insight into the relation between participant characteristics and their valuation of the options in the choice task, allowing the diversity of groups within the participant population to be addressed directly. Addressing this diversity broadens the range of results and does justice to the diversity of our society.