Machine Learning (ML) models influence all aspects of our lives. They are also commonly integrated into recommender systems, which facilitate users’ decision-making processes in various scenarios, such as e-commerce, social media, news, and online learning. Training performed on large volumes of data is what ultimately drives such systems to provide meaningful recommendations. However, a lack of standardized practices has been observed when it comes to data collection and annotation methods for ML datasets. This research paper systematically identifies and synthesizes the state of standardization with regard to data collection and annotation reporting in the recommender systems domain, through a systematic literature review of the 100 most-cited recommender systems papers from the most impactful venues within the Computing and Information Technology field. Multiple facets of the employed techniques are examined, such as reported human annotations and annotator diversity, label quality, and the public availability of training datasets. Recurrent use of just a few benchmark datasets, poor documentation practices, and reproducibility issues in experiments are among the most striking findings uncovered by this study. We discuss the necessity of transitioning from pure reliance on algorithmic performance metrics to prioritizing data quality and fit. Finally, concerns are raised regarding biases and socio-psychological factors inherent in the datasets, and further exploration of accounting for these early in the design of ML models is suggested.