TRAIL at RO-MAN 2024
26 August 2024, by Theresa Pekarek Rosin
This August, two of our doctoral candidates, Ferran Gebellí and Tamlin Love, attended the 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), held in Pasadena, California, USA, from 26 to 30 August, to present their published work.
Ferran Gebellí, supervised by Raquel Ros at PAL Robotics, presented the paper "Co-designing Explainable Robots: A Participatory Design Approach for HRI," which proposes a user-centric approach to designing explainable robot systems (preprint). The paper provides methodologies and tools for implementing participatory design processes that lead to understandable and intuitive robot applications. The methodology is applied in an example use case for a geriatric unit at an intermediate care center, where in-situ iterative co-design has users interact with the system and identify the parts that are not well understood and thus require more explainability.
In addition to the full paper, Ferran Gebellí also presented a Late Breaking Report titled "Evaluating the Impact of Explainability on the Users' Mental Models of Robots over Time" in the poster session (preprint). This work explores, from a theoretical perspective, how explanations affect users' understanding of robots. It proposes a framework for analyzing the evolution of users' mental models over time along the dimensions of completeness and correctness, and argues how explainability should aim to shape this understanding.
Tamlin Love, supervised by Antonio Andriella and Guillem Alenyà Ribas at the Institut de Robòtica i Informàtica Industrial, presented his paper “What would I do if...? Promoting Understanding in HRI through Real-time Explanations in the Wild” (preprint), which investigates how the format of automatically generated explanations impacts user understanding “in the wild”. The team placed a robot in a public space and had it interact with passers-by. Those who approached the robot were provided with an explanation, in one of three possible formats, for the robot’s decisions (e.g. deciding to wave at one person over another), and were then asked to predict what the robot would do in a novel situation. The results show that providing causal reasons for a robot’s decisions does indeed aid understanding, but that explanations describing what the robot would do in hypothetical situations (counterfactual statements) do not help at all.
In addition to his full paper, Tamlin Love also presented a short contribution at the WARN workshop (Weighing the benefits of Autonomous Robot persoNalisation), titled “Personalising Explanations and Explaining Personalisation” (preprint), which explores how research in explainability can address drawbacks in robot personalisation, and vice versa.
Congratulations to both doctoral candidates on their achievements!