B.J.W. Dudzik
7 records found
Counterfactual explanations can be applied to algorithmic recourse, which is concerned with helping individuals in the real world overturn undesirable algorithmic decisions. They aim to provide explanations for opaque machine learning models. Not all generated points are equally f ...
Counterfactual Explanations (CE) are essential for understanding the predictions of black-box models by suggesting minimal changes to input features that would alter the output. Despite their importance in Explainable AI (XAI), there is a lack of standardized metrics to assess th ...
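For context, the "minimal changes that would alter the output" idea in this record can be made concrete. Below is a minimal sketch of gradient-based counterfactual search in the style of Wachter et al. (2017), which minimizes a prediction loss plus a proximity penalty; the linear classifier, its weights, and all hyperparameters are illustrative assumptions, not the method of any record listed here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier standing in for an opaque model (assumed).
w = np.array([1.5, -2.0, 0.5])
b = -0.25

def predict_proba(x):
    return sigmoid(x @ w + b)

def counterfactual(x, target=1.0, lam=100.0, lr=0.01, steps=500):
    """Search for x' near x with f(x') close to `target` by minimizing
    lam * (f(x') - target)**2 + ||x' - x||**2 via gradient descent."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        grad_pred = 2.0 * lam * (p - target) * p * (1.0 - p) * w  # prediction loss
        grad_dist = 2.0 * (x_cf - x)                              # proximity penalty
        x_cf = x_cf - lr * (grad_pred + grad_dist)
    return x_cf

x = np.array([-1.0, 1.0, 0.0])      # factual input, predicted ~0.02 (negative)
x_cf = counterfactual(x)
print(f"f(x)={predict_proba(x):.3f}  f(x')={predict_proba(x_cf):.3f}  change={x_cf - x}")
```

The trade-off parameter `lam` controls how strongly the search prioritizes flipping the prediction over staying close to the original point; with a small `lam` the proximity penalty can dominate and the prediction never flips.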
Adversarial Training has emerged as the most reliable technique to make neural networks robust to gradient-based adversarial perturbations on input data. Besides improving model robustness, preliminary evidence presents an interesting consequence of adversarial training -- increa ...
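For readers unfamiliar with the technique this record builds on: adversarial training in its most common form replaces (or augments) clean training inputs with gradient-based perturbations. A minimal sketch using FGSM (Goodfellow et al., 2015) on toy data follows; the architecture, perturbation budget, and data are assumptions for illustration, not the setup of the paper above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy binary classification data standing in for a real dataset (assumed).
X = torch.randn(512, 2)
y = (X[:, 0] + X[:, 1] > 0).long()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
eps = 0.1  # L-inf perturbation budget (an assumption)

for epoch in range(50):
    # FGSM: step along the sign of the loss gradient w.r.t. the inputs.
    X_adv = X.clone().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(X_adv), y), X_adv)[0]
    X_adv = (X + eps * grad.sign()).detach()

    # Update the model on the perturbed inputs instead of the clean ones.
    opt.zero_grad()
    loss_fn(model(X_adv), y).backward()
    opt.step()

print("accuracy on perturbed inputs:", (model(X_adv).argmax(1) == y).float().mean().item())
```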
In recent years, the need for explainable artificial intelligence (XAI) has become increasingly important as complex black-box models are used in critical applications. While many methods have been developed to interpret these models, there is also potential in enhancing the mode ...
Counterfactual explanations (CEs) can be used to gain useful insights into the behaviour of opaque classification models, allowing users to make an informed decision when trusting such systems. Assuming the CEs of a model are faithful (they well represent the inner workings of th ...
This research delves into the exploration of translation methods between affect representation schemes within the domain of text content analysis. We assess their performance on various affect analysis tasks while concurrently developing a robust evaluation framework. Furthermore ...
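As an illustration of what one such translation method can look like, the sketch below fits a least-squares linear map from categorical emotion intensities to dimensional valence-arousal ratings. The schemes, the synthetic paired data, and the choice of a linear map are all assumptions for illustration, not the methods evaluated in this record.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical annotations: columns = [joy, sadness, anger, fear] in [0, 1].
categorical = rng.uniform(0.0, 1.0, size=(n, 4))
# Synthetic paired valence-arousal ratings generated from a known map plus
# noise, so the regression below has a real relationship to recover (assumed).
true_map = np.array([[0.8, 0.1], [-0.7, -0.2], [-0.5, 0.7], [-0.6, 0.6]])
dimensional = categorical @ true_map + 0.05 * rng.standard_normal((n, 2))

# Fit (valence, arousal) ~ categorical scores by ordinary least squares.
design = np.hstack([categorical, np.ones((n, 1))])           # bias column
coef, *_ = np.linalg.lstsq(design, dimensional, rcond=None)  # shape (5, 2)

def translate(cat_scores):
    """Map one vector of categorical emotion scores to (valence, arousal)."""
    return np.append(cat_scores, 1.0) @ coef

print(translate(np.array([0.9, 0.1, 0.0, 0.0])))  # a mostly-joyful text
```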
Continuous affective self-reports are intrusive and expensive to acquire, forcing researchers to use alternative labels for the construction of their predictive models. The most predominantly used labels in literature are continuous perceived affective labels obtained using exter ...