The aim of this research is to evaluate the believability of Lilobot, a conversational agent designed to act as a virtual child for training helpline workers. Several aspects of believability are explored through a user study involving a questionnaire and interviews with 10 participants. The questionnaire results indicate that improvements to the chatbot's believability are likely necessary.
The interviews reveal that the use of emoticons and acknowledgement of the application's context raise believability, while unresponsiveness and repeated utterances lower it. Although Lilobot did express genuine and plausible emotions, study participants suggested improving the appropriateness of its reactions and expanding its vocabulary.