New Article Published in International Journal of Social Robotics
11 April 2023
Photo: UHH/Knowledge Technology
The Knowledge Technology group has published a journal article in the International Journal of Social Robotics.
Title: A trained humanoid robot can perform human-like crossmodal social attention and conflict resolution
Authors: Di Fu, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, Erik Strahl, Xun Liu, Stefan Wermter
With an aging population and the increasing digitalisation of daily life, humanoid robots could serve as community resources to accompany the elderly, support remote work, and improve individuals' mental and physical health. To enhance human-robot interaction, it is essential for robots to become more socialised by processing multiple social cues in a complex real-world environment. To this end, our study adopted the neurorobotic paradigm of gaze-triggered audio-visual crossmodal conflict resolution to make an iCub robot express human-like social attention responses. For the human study, a behavioural experiment was conducted with 37 participants. To improve ecological validity, we designed a round-table meeting scenario with three animated avatars. Each avatar wore a medical mask covering the facial cues of the nose, mouth, and jaw. The central avatar was capable of gaze shifting, while the two peripheral avatars were capable of sound generation. The gaze direction and the sound localisation were either congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention, with better human performance in the audio-visual congruent condition than in the incongruent condition. For the robot study, our saliency prediction model was trained to implement social cue detection, audio-visual saliency prediction, and selective attention. After training, the iCub robot was exposed to laboratory conditions similar to those of the participants. While overall human performance remained superior, our trained model replicated human-like attention responses with respect to the congruent and incongruent conditions.
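As a rough illustration of the selective-attention step the abstract describes, one can imagine fusing a visual saliency map (boosted where the central avatar's gaze points) with an auditory saliency map (boosted at the sound source) and attending to the most salient location. The sketch below is a minimal toy version under those assumptions; the function names, weights, and grid layout are illustrative only and do not reflect the authors' actual trained model.

```python
import numpy as np

def combined_saliency(visual, audio, w_visual=0.6, w_audio=0.4):
    """Weighted fusion of a visual and an auditory saliency map
    (hypothetical weights, not from the published model)."""
    return w_visual * visual + w_audio * audio

def attend(saliency_map):
    """Return the (row, col) of the most salient location."""
    return np.unravel_index(np.argmax(saliency_map), saliency_map.shape)

# Toy congruent trial on a 3x3 grid: the gaze cue and the sound
# both point to the left avatar, so their saliency peaks coincide.
visual = np.zeros((3, 3)); visual[1, 0] = 1.0  # gaze boosts the left location
audio = np.zeros((3, 3));  audio[1, 0] = 1.0   # sound also comes from the left
print(attend(combined_saliency(visual, audio)))
```

In an incongruent trial the auditory peak would sit at a different location than the gaze cue, and the attended location would depend on the relative weighting of the two modalities, which is the kind of conflict the paradigm probes.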