Hybrid Intelligence (HI) is the combination of human and machine intelligence, expanding human intellect instead of replacing it. Information in HI scenarios is often inconsistent, e.g. due to shifting preferences, users' motivations, or conflicts arising from merged data. Since it provides an intuitive mechanism for reasoning with conflicting information, together with natural explanations that are understandable to humans, our hypothesis is that Dung's Abstract Argumentation (AA) is a suitable formalism for such hybrid scenarios. This paper investigates the capabilities of Argumentation in representing and reasoning in the presence of inconsistency, and its potential for intuitive explainability that links artificial and human actors. To this end, we conduct a survey among a number of research projects of the Hybrid Intelligence Centre. Within these projects we analyse the applicability of argumentation with respect to various inconsistency types stemming, for instance, from commonsense reasoning, decision making, and negotiation. The results show that 14 out of the 21 projects have to deal with inconsistent information. In half of those scenarios, the knowledge models come with natural preference relations over the information. We show that Argumentation is a suitable framework to model the specific knowledge in 10 out of the 14 projects, thus indicating the potential of Abstract Argumentation for transparently dealing with inconsistencies in Hybrid Intelligence systems.
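As a rough illustration of the kind of reasoning the abstract refers to, the sketch below encodes a toy Dung-style argumentation framework and computes its grounded extension as the least fixed point of the characteristic function. The argument names and the helper `grounded_extension` are illustrative assumptions, not taken from the surveyed projects.

```python
# Minimal sketch of a Dung abstract argumentation framework AF = (args, attacks)
# and its grounded extension. Names and the example scenario are hypothetical.

def grounded_extension(args, attacks):
    """Return the grounded extension of the AF (args, attacks).

    args    -- iterable of argument labels
    attacks -- set of (attacker, target) pairs
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}

    def defended(a, s):
        # a is acceptable w.r.t. s if every attacker of a is itself
        # attacked by some argument already in s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    extension = set()
    while True:
        new = {a for a in args if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# Toy example: 'rain_forecast' attacks 'plan_picnic', which attacks 'stay_home';
# the grounded extension accepts the unattacked argument and what it defends.
args = {"rain_forecast", "plan_picnic", "stay_home"}
attacks = {("rain_forecast", "plan_picnic"), ("plan_picnic", "stay_home")}
print(grounded_extension(args, attacks))  # {'rain_forecast', 'stay_home'}
```

The fixpoint iteration also yields a natural explanation for each accepted argument: it is either unattacked or defended by arguments accepted at an earlier step, which is the kind of human-readable justification the paper argues makes AA attractive for HI settings.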