With the advancement of technology and the integration of Large Language Models, interactive robots are entrusted with more significant tasks that contribute to human well-being. This shift is particularly evident in educational settings, where social robots assume diverse roles such as teacher, tutor, and instructional tool, aiming to enhance learning experiences. Engagement emerges as a central theme in Human-Robot Interaction studies, especially in pedagogical contexts, where it has a significant impact on learning outcomes. However, optimizing engagement rather than maximizing it may yield superior learning results. This thesis investigates the relationship between the employment of engagement strategies and learning outcomes in prompt-driven discussions with robots, focusing on facilitating self-reflection and understanding of personal values among children. Due to limited access to the target population, a total of 55 university students conversed with the robot in two separate sessions, discussing different situations, how they would react in each, and the reasoning behind their behavior, using the Schwartz values as a basis for their choices. The participants were divided into two conditions to examine the effect of techniques such as motivational interviewing, image cues, feedback, and the use of a memory model on the quality of their arguments and their ability to recollect the interaction. Results suggest that the strategies lead to significantly more reflective arguments in the first session, but the conditions equalize in the second interaction. No significant effect of condition was found on participants' ability to identify their value profile, which prompted an examination of the assumptions made by the robot. Participants who agreed with these assumptions were more likely to identify their values correctly, but, in general, the value model was unable to fully capture the intricacies of human motivation. Participants found the robot equally likable across conditions, possibly a result of the novelty effect. Regarding recollection quality, the data show that the effect of the session is too strong to leave room for an effect of condition. Overall, this study contributes valuable insights into the influence of engagement strategies on learning outcomes in educational interactions, offering guidance for designing more effective and engaging interactions in the future.