The Effect of Temporal Supervision on the Prediction of Self-reported Emotion from Behavioural Features


Abstract

Continuous affective self-reports are intrusive and expensive to acquire, forcing researchers to use alternative labels to construct their predictive models. The most commonly used labels in the literature are continuous perceived affective labels obtained from external annotators. However, an increasing body of research indicates that the relation between expressed emotion and experienced emotion might not be as apparent as previously assumed. Retrospective self-reports provided by participants do capture experienced emotion, but models trained on these labels suffer from the lack of continuous annotations during training. In this work, we aim to answer whether this lack of temporal information can be remedied by using continuous external annotations as proxies for experienced emotion over time. Furthermore, we investigate whether weakly-supervised models can generate accurate continuous annotations to reduce the annotation burden for large datasets. Our results indicate that external annotation sequences carry little information that is significant for the prediction of self-reports. However, forcing models to reflect changes in external annotations by training them in a multitask fashion improves performance, suggesting that such temporal supervision helps models to distinguish relevant segments in the input data. In addition, we find that weakly-supervised models can, to a certain extent, capture changes over time, but in general yield poor results compared to fully-supervised models.
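The multitask setup referred to above can be illustrated with a minimal sketch (not the authors' implementation): a shared sequence encoder over behavioural features feeds two heads, one regressing the per-timestep external annotation as an auxiliary task and one regressing the session-level retrospective self-report, with their losses combined in a weighted sum. All architecture choices, feature dimensions, and the loss weight below are illustrative assumptions.

```python
# Hypothetical multitask sketch: auxiliary temporal supervision from external
# annotations alongside the primary self-report regression task.
import torch
import torch.nn as nn

class MultitaskAffectModel(nn.Module):
    def __init__(self, n_features: int = 40, hidden: int = 64):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        # Per-timestep head: proxy continuous annotation (e.g. perceived valence).
        self.annotation_head = nn.Linear(hidden, 1)
        # Sequence-level head: retrospective self-report (e.g. experienced valence).
        self.selfreport_head = nn.Linear(hidden, 1)

    def forward(self, x):
        states, last = self.encoder(x)             # states: (B, T, H), last: (1, B, H)
        annot_seq = self.annotation_head(states)   # (B, T, 1) per-timestep predictions
        report = self.selfreport_head(last[-1])    # (B, 1) session-level prediction
        return annot_seq.squeeze(-1), report.squeeze(-1)

def multitask_loss(annot_pred, annot_true, report_pred, report_true, alpha=0.5):
    """Weighted sum of the auxiliary (temporal) and primary (self-report) losses."""
    mse = nn.functional.mse_loss
    return alpha * mse(annot_pred, annot_true) + (1 - alpha) * mse(report_pred, report_true)

# Toy usage with random tensors standing in for behavioural feature sequences.
model = MultitaskAffectModel()
x = torch.randn(8, 120, 40)        # 8 sessions, 120 timesteps, 40 features
annot_true = torch.randn(8, 120)   # continuous external annotations (proxy labels)
report_true = torch.randn(8)       # retrospective self-reports
annot_pred, report_pred = model(x)
loss = multitask_loss(annot_pred, annot_true, report_pred, report_true)
loss.backward()
```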
