About TRAIL
Motivation
After several years of positive projections, some special-purpose domestic robots have finally reached the market, with the IFR Report on Service Robots showing an increase of 25% for domestic and 22% for entertainment robots. This increase is largely due to simple devices like lawnmowers and vacuum cleaners, but several reports predict massive market growth in the coming years for other companion robots, driven by new markets in healthcare, rehabilitation, and logistics and fuelled by improvements in the field of Artificial Intelligence (AI). This outlook is challenged by findings from industry (IBM), which show that, while 82% of enterprises are considering AI for their products, 60% fear liability issues due to a lack of transparency in decision-making (the so-called “black box” problem). If companies want to sell intelligent robots, novel research solutions that make robot behaviour interpretable and transparent to human users will be key to acceptance.
Compared to industrial environments, where a robot has its own workspace, or small domestic devices, which are perceived as non-threatening, larger domestic companion robots need to integrate as smoothly as possible into the social environment so as not to constantly interrupt the day-to-day activities of the user. This remains a major challenge for state-of-the-art robots, which offer increasingly capable and robust hardware in terms of actuators, sensors, and sensor processing, but lack eXplainable Artificial Intelligence (XAI) and, in particular, behaviour that human users would perceive as intuitive. Yet exactly this ability to appear as a transparent interaction partner - and not as a tool the user has to adapt to - will make the difference between a cooperative, transparent robot companion and a mere mechanistic tool for specific tasks.
One of users’ main problems with current robots is that our well-trained abilities to interpret the behaviour of our interaction partners fail when dealing with robots. Current approaches use beeping sounds or verbal cues, which often make the interaction tedious and limit users’ motivation to interact. Most of the time, lay users cannot interpret the current state of a robot even on a simple level, e.g., to know when a robot is listening or how it is processing the last request. Therefore, an increasingly important issue for the acceptance of robots in human homes is not only the pertinence of the robot’s behaviour but also its transparent interpretability, together with that of the underlying decision-making processes. While new high-performing machine learning technologies (mostly based on artificial neural networks) can learn accurately and generalise well, with such “black-box” systems it is hard to earn users’ trust that the system has learned the correct behaviour. Being able to question a decision in order to understand the behaviour of the robot is paramount to earning that trust. To fulfil this need, machine learning approaches have to be extended by systems that can analyse and modulate the behaviour of the learning system and provide transparent, interpretable information to users. This concern is central to the EU’s new Ethics Guidelines for Trustworthy AI, which call for a human-centric approach to AI and for the traceability of AI systems. For the future development of AI, the guidelines state several points as Key Guidance for Realising Trustworthy AI:
- “Facilitate the traceability and auditability of AI systems, particularly in critical contexts or situations.”
- “This means that processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected.”
- “Foster training and education so that all stakeholders are aware of and trained in Trustworthy AI.”
Connections between partners
In our project TRAIL (Transparent Interpretable Robots), we aim to directly address these fundamental design and research training needs. We want to gain a better understanding of the complementarity of (1) the external behaviour level and (2) the internal decision level by training and educating a new generation of Doctoral Candidates (DCs) on transparent, interpretable robots. To train researchers towards the goal of interpretable companion robots, we have put together a highly interdisciplinary and cross-sectorial consortium, consisting of partners with long-standing expertise in cutting-edge deep neural network technology, artificial intelligence, mathematics, human-robot interaction, and psychology. We will start on the decision level, interpreting deep neural learning using methods from mathematics and computer science and analysing what AI knowledge can be extracted in a transparent manner. In parallel, on the behaviour level, the disciplines of human-robot interaction and psychology will be key to understanding how to present the extracted knowledge and behaviour in an intuitive and natural way, so that the robot integrates into human-robot interaction. The importance of this research for industry is also evident in the commitment of PAL Robotics as a beneficiary, and of HONDA, SoftBank Robotics, United Robotics Group, and Seed Robotics as partners, to directly support TRAIL’s efforts and ensure its success.