The use of social robots has increased in recent years. Current technology, however, cannot deploy a single robot across different applications without human assistance, and current solutions are time-consuming, labour-intensive, and hard to generalize. A robot that is aware of its surroundings, in terms of both environment and context, can select the application the situation requires. We propose a multi-modal, knowledge-based hybrid scene classification method that provides the robot with this awareness. By scene we mean the combination of the environment and the context of the surroundings; a study on how to describe a scene was conducted through knowledge-engineering methods comprising an anonymous online questionnaire and observations. The method takes as input features from object detection, audio, and human detection and understanding, and outputs the probabilities of the possible social roles for the robot (Receptionist, Tutor, and Waiter). The classification is based on a hybrid approach, trained and validated on a real-time multi-modal dataset collected by a mobile robot. The training experiment aimed to collect the dataset, to select the features that describe the different roles, and to calculate their weights. The validation experiments aimed to measure the performance and the generalization of the method. Results show that the robot successfully classified the Receptionist role with an accuracy of 83.4%, the Tutor role with 82.7%, and the Waiter role with 55.9%. On average, the method generalizes to 74% of unseen data.
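The abstract describes fusing weighted multi-modal features (objects, audio, human cues) into probabilities over social roles. A minimal sketch of that kind of weighted scoring is shown below; all feature names and weight values are illustrative assumptions, not the weights learned in the paper's training experiment.

```python
# Hypothetical sketch of knowledge-based role scoring: each detected
# feature carries an assumed weight per role; per-role scores are the
# sums of the detected features' weights, normalized into probabilities.
ROLE_WEIGHTS = {
    "Receptionist": {"desk": 0.9, "doorbell_sound": 0.7, "person_standing": 0.5},
    "Tutor":        {"whiteboard": 0.9, "speech": 0.6, "person_sitting": 0.5},
    "Waiter":       {"table": 0.8, "cutlery": 0.7, "speech": 0.3},
}

def role_probabilities(detected_features):
    """Sum the weights of the detected features per role, then normalize."""
    scores = {
        role: sum(w for feat, w in weights.items() if feat in detected_features)
        for role, weights in ROLE_WEIGHTS.items()
    }
    total = sum(scores.values()) or 1.0  # guard against no detections
    return {role: s / total for role, s in scores.items()}

probs = role_probabilities({"desk", "doorbell_sound", "person_standing"})
```

In this sketch, a scene containing a desk, a doorbell sound, and a standing person scores highest for the Receptionist role; the actual method additionally draws on the questionnaire-derived scene descriptions to choose the feature weights.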