The SFB/Transregio 169/1 project aims to initiate interdisciplinary research between computer science, neuroscience, and psychology in Hamburg and Beijing, in order to establish a collaborative research centre on the topic of cross-modal learning, spanning human-robot collaboration, artificial intelligence, neuroscience, and psychology. The long-term challenge is to understand the neural, cognitive, and computational mechanisms of cross-modal learning and to use this understanding for (1) better analyzing human performance involving cross-modal correspondences and (2) building effective cross-modal computational systems. The range of duties in sub-project C06 includes the development and evaluation of visual-haptic tele-robotic systems.
Telepresence gives a human operator the illusion of being able to explore and interact with a remote environment through technical means, such as immersive displays and tracking and capturing technology. The remote environment is usually displayed using mixed reality (MR) technology (see figure above). In this context, the term presence describes the illusion of being in a physical environment (place illusion) that behaves plausibly for the human user (plausibility illusion). The great potential of telepresence systems lies in supporting natural interaction with a remote environment through head and body movements, and in supporting co-presence as well as social presence: the illusion of being in an environment together with other people. While our society is determined to provide inclusion for everybody, assistive technologies for humans with limited movement abilities, caused, for instance, by injuries, aging, or disabilities, are so far often missing. Telepresence setups have the potential to provide novel ways of inclusion and to act as an inclusive digital medium for otherwise isolated parts of the population. For example, users may lie in a comfortable pose on a motion chair with novel immersive displays while experiencing a computer-mediated virtual environment (VE).
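To make the tracking-driven view control concrete, the following minimal Python sketch maps a tracked head orientation to pan-tilt commands for a hypothetical remote camera. The names, signatures, and mechanical limits are illustrative assumptions, not part of an actual telepresence system.

```python
import math
from dataclasses import dataclass


@dataclass
class HeadPose:
    """Tracked head orientation in radians (hypothetical tracker output)."""
    yaw: float
    pitch: float


def head_to_pan_tilt(pose: HeadPose,
                     pan_limit: float = math.radians(170),
                     tilt_limit: float = math.radians(60)) -> tuple[float, float]:
    """Map the operator's head orientation to a remote pan-tilt camera.

    A 1:1 mapping, clamped to the camera's (assumed) mechanical limits,
    keeps the visual feedback consistent with the operator's own head
    movements, which supports the place illusion described above.
    """
    pan = max(-pan_limit, min(pan_limit, pose.yaw))
    tilt = max(-tilt_limit, min(tilt_limit, pose.pitch))
    return pan, tilt


# Example: operator looks 30 degrees to the left and slightly down.
print(head_to_pan_tilt(HeadPose(yaw=math.radians(30), pitch=math.radians(-10))))
```

Keeping the mapping close to 1:1 is a deliberate design choice here: any gain or lag between head motion and view update tends to break the consistency between proprioceptive and visual feedback on which the place illusion depends.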
Tele-operation means steering a robot at a distance with devices controlled by a human operator. The remote environment is captured by different sensors attached to the remote robotic multi-sensor system and displayed back to the user, in particular as visual, haptic, and auditory feedback. As input interfaces, grounded or body-based haptic devices, such as gloves or force-feedback systems, have proven to be promising for tele-operation. Typical tele-operation tasks include, but are not limited to, pointing, touching, grasping, and dexterous manipulation. Usually, the remote robotic multi-sensor system and the local haptic input device are composed of different sensors and actuators and provide different degrees of freedom (DoF) (see figure below). This disparity requires a non-trivial mapping and results in at least two challenges. First, for (semi-)immersive interaction it is essential that the operator accepts the virtually displayed robot system as an extension of his/her own body into the remote environment, and also feels a sense of agency (SoA) between his/her actions and the corresponding actions of the remote robot system. Second, multisensory associations and crossmodal correspondences must be specified between (a) the limited sensory feedback and input DoF provided by the haptic device and (b) the multiple sensory properties combined with the numerous DoF of the remote robot system (or vice versa).
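To illustrate the DoF disparity in concrete terms, the following Python sketch couples a 3-DoF haptic stylus to a robot end-effector via simple position scaling and attenuates measured contact forces for display on the device. All function names, constants, and workspace limits are illustrative assumptions; designing the actual visual-haptic mapping is precisely the research question addressed in C06.

```python
import numpy as np

# Illustrative scaling between a desk-sized haptic workspace and a larger
# robot workspace; all values are assumptions, not system parameters.
POSITION_SCALE = 4.0   # 1 cm of stylus motion -> 4 cm of robot motion
FORCE_SCALE = 0.25     # remote contact forces are attenuated for the user
ROBOT_WORKSPACE_MIN = np.array([-0.5, -0.5, 0.0])  # metres
ROBOT_WORKSPACE_MAX = np.array([0.5, 0.5, 0.8])


def haptic_to_robot(stylus_pos: np.ndarray, robot_origin: np.ndarray) -> np.ndarray:
    """Map a 3-DoF stylus position to a robot end-effector target.

    The remaining DoF of the robot (e.g., the elbow configuration of a
    redundant 7-DoF arm) must be resolved separately, for instance by an
    inverse-kinematics solver -- exactly the non-trivial part of the mapping.
    """
    target = robot_origin + POSITION_SCALE * stylus_pos
    return np.clip(target, ROBOT_WORKSPACE_MIN, ROBOT_WORKSPACE_MAX)


def robot_to_haptic_force(contact_force: np.ndarray, max_force: float = 3.0) -> np.ndarray:
    """Scale and limit measured contact forces for display on the haptic device."""
    force = FORCE_SCALE * contact_force
    norm = np.linalg.norm(force)
    if norm > max_force:               # respect the device's force limit
        force *= max_force / norm
    return force


# Example: a small stylus displacement mapped into the robot workspace.
stylus = np.array([0.02, -0.01, 0.05])  # metres
print(haptic_to_robot(stylus, robot_origin=np.array([0.0, 0.0, 0.4])))
```

In practice, such a linear mapping is only a starting point: the workspace scaling trades precision against reach, and both the redundant DoF of the robot and the crossmodal interplay of the combined visual-haptic feedback call for more sophisticated mapping schemes.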