Data Model for Computer Vision Explainability, Fairness, and Robustness

Abstract

In recent years, there has been growing interest among researchers in the explainability, fairness, and robustness of Computer Vision models. While studies have explored the usability of these models for end users, little research has examined the challenges and requirements faced by the researchers who investigate these properties. This study addresses that gap through a mixed-method approach, combining 20 semi-structured interviews with researchers and a comprehensive literature analysis.
Through this investigation, we identified a practical need for a data model that encompasses the essential information researchers require to enhance explainability, fairness, and robustness in Computer Vision applications. We developed a data model that has the potential to improve transparency and reproducibility in this field, speed up the research process, and facilitate comprehensive quantitative and qualitative evaluations of proposed methodologies. To refine the data model and demonstrate its practicality, we populated it with four existing datasets. Additionally, we conducted two user studies to validate the model's usability. We found that participants were enthusiastic about using the data model. Potential uses they described included comparing models and datasets, accessing (niche) datasets and models, creating and exploring datasets, and having access to ground-truth explanations. However, participants also raised concerns about the data model, mainly that its usability is restricted to people with database knowledge, and about the richness of the data in the database. Nonetheless, we hope that this research constitutes a first step towards data modelling for researchers in the field of Trustworthy AI.

Files

Thesis.pdf
(pdf | 6.86 MB)