Research
The TRAIL consortium has identified two complementary research areas to improve the interpretability of robotic systems, thereby making them more transparent and predictable and ultimately leading to more effective interactions and greater trust in human-robot interaction. These research areas are Behaviour Transparency and Decision Transparency.
Behaviour Transparency
To make sense of a robot's current behaviour, the user has to be able to intuitively understand its internal state and planned actions. This communication of internal state information to an external observer can be achieved through a combination of different channels, ranging from interpretable movements and gestures [8], through facial or body expressions and other intuitively interpretable audio-visual signals and emotional cues [9], to more explicit information transfer using speech and natural language processing [10]. Different complementary channels can be meaningful for different forms of behaviour transparency, and a social robot has to be able to decide on the best and most transparent combination of cues for a given behavioural information transfer.
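As a loose illustration of what such a decision could look like computationally, the sketch below ranks a few candidate channels by an assumed per-context transparency score. The channel names, scores, and simple ranking rule are purely hypothetical, standing in for whatever learned policy a real system would use; they are not part of the TRAIL programme.

```python
# Hypothetical cue-selection sketch: the channel names, scores, and ranking
# rule below are illustrative assumptions, not a TRAIL specification.

# Assumed transparency scores of each channel per interaction context.
CUE_SCORES = {
    "quiet_room":    {"speech": 0.9, "gesture": 0.6, "facial_expression": 0.5},
    "noisy_factory": {"speech": 0.2, "gesture": 0.8, "facial_expression": 0.4},
}

def select_cues(context: str, budget: int = 2) -> list[str]:
    """Return the `budget` channels assumed most transparent in this context."""
    scores = CUE_SCORES[context]
    return sorted(scores, key=scores.get, reverse=True)[:budget]

print(select_cues("noisy_factory"))  # -> ['gesture', 'facial_expression']
```

In a real system the fixed score table would be replaced by a model of the user and environment, but the core problem remains the same: choosing the combination of channels that conveys the robot's internal state most transparently in the current situation.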
Decision Transparency
Explaining the reasoning and the underlying knowledge base that led to a decision is a fundamental part of human social understanding. Robots, and especially systems based on Deep Neural Networks, often lack either the introspective ability to analyse the process that led to a decision or the means to make information about that process explicit and transparent. For example, a neural network that classifies objects is often unable to identify the features that led to a particular classification decision, and even when the reason is known, it is often unclear how this information can be communicated effectively to the user.
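To make this concrete, one common introspection technique for image classifiers is a gradient-based saliency map, which highlights the input pixels that most influence the network's decision. The following minimal sketch assumes PyTorch and a pretrained torchvision ResNet-18 as a stand-in for a robot's perception network; it is one illustrative method among many, not the approach prescribed by TRAIL:

```python
import torch
from torchvision import models

# A pretrained image classifier as a stand-in for a robot's perception network.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# A dummy input; in practice this would be a preprocessed camera frame.
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass: class scores and the predicted class.
logits = model(image)
predicted = logits.argmax(dim=1).item()

# Backward pass from the winning class score to the input pixels.
logits[0, predicted].backward()

# Saliency map: per-pixel gradient magnitude, maximised over colour channels.
# Large values mark pixels that most influenced the decision.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
```

Even with such a map in hand, communicating it to a user, for instance by overlaying it on the camera view or verbalising the most salient region, is precisely the second, open part of the problem described above.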
Both concepts, behaviour and decision transparency, are closely linked, since knowledge of how a decision was made is often needed to convey it through behaviour. This means that roboticists who build interpretable systems will need a deep understanding of both concepts. TRAIL will employ a structured doctoral training programme to prepare doctoral candidates (DCs):
- to research the state of the art in machine learning algorithms for different areas of human-robot interaction, so that they can adapt and improve future robot systems for specific tasks, and in particular towards transparency in different application environments;
- to implement tools that enable introspection of complex robot controllers, as well as to select the mechanisms best suited to communicating the gained information to an external user through intuitive behaviour and interaction;
- to conduct meaningful experiments, within a responsible research and innovation framework, to further our understanding of intuitive human-robot interaction and of the effects of different methods on the utility of robotic systems;
- to develop and practice transferable skills for a flexible career in academia, industry and entrepreneurship via individual, group-based and network-wide activities.
Because the field is highly interdisciplinary, DCs will be trained to understand different transparent AI research methodologies, to integrate synergies across disciplines, and to communicate novel transparent AI concepts effectively to a diverse audience, from scientists in other fields to end-users and industrial stakeholders.
As an MSCA doctoral network programme, TRAIL will address both scientific and training objectives to fulfil its overall goal of educating well-trained and well-prepared DC researchers.