Research Projects
Our objective is to conduct research in artificial intelligence and knowledge technology for intelligent systems. Our new approaches are often motivated by nature, e.g. by the brain, cognition and neuroscience. We want to study and understand nature-inspired, in particular hybrid neural and symbolic, representations in order to build next-generation intelligent systems. Such systems include, for instance, adaptive interactive knowledge discovery systems, learning crossmodal neural agents with vision and language capabilities, and neuroscience-inspired continually learning robots.
Modeling a Robot’s Peripersonal Space and Body Schema for Adaptive Learning and Imitation (MoreSpace)

We expect the resulting framework to improve the capabilities of robotic agents to handle conflicting sensor data and to improve human-robot interaction scenarios involving the different morphologies of the NICO and NICOL robots. Our experiments will take place in a table-top setting and mainly involve object manipulation, including block-stacking and tool-use tasks. Together with our collaborators, we will first conduct robot-robot interaction and later human-robot interaction experiments.
PI: Prof. Dr. Stefan Wermter
Associates: Dr. Matthias Kerzel, Dipl.-Ing. Erik Strahl, Dr. Cornelius Weber, NN
Sicheres Sprachdialogsystem zur multimodalen Integration von Dienstleistungen (SIDIMO) – Secure Speech Dialogue System for the Multimodal Integration of Services

Systems that support elderly people in their home environments offer their target group an extended period of independent and self-determined living. However, existing speech recognition systems for home environments usually transfer recorded data to third-party servers, which raises a series of privacy issues. Additionally, while language as an intuitive means of communication can facilitate interactions of inexperienced users with assistance systems, ASR systems often struggle with elderly voices due to irregularities in speech patterns or indistinct pronunciation.
The goal of SIDIMO is to develop a speech recognition system that 1) adapts existing network architectures for German ASR to run locally, keeping private data in the possession of the user, and 2) continually adapts to the changing speech of elderly users to increase speech recognition accuracy.
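As a minimal sketch of how such local, continual adaptation could look (not the SIDIMO implementation), the example below loads a pretrained German ASR model on the device and fine-tunes it with a CTC loss on user-confirmed recordings. The checkpoint name, the adapt_step helper and the learning rate are illustrative assumptions.

```python
# Minimal sketch, not the SIDIMO implementation: on-device adaptation of a
# pretrained German ASR model to a single user's speech. No data leaves the device.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL_NAME = "facebook/wav2vec2-large-xlsr-53-german"  # placeholder German ASR checkpoint

processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)     # stored and updated locally
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def adapt_step(waveform, sampling_rate, transcript):
    """One continual-adaptation step on a locally recorded, user-confirmed utterance."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(inputs.input_values, labels=labels).loss  # CTC loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```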
Duration: 01.10.2021 - 30.09.2023
PI: Prof. Dr. Stefan Wermter
Associates: Theresa Pekarek Rosin, Dr. Matthias Kerzel
VeriKAS – Verification of learning AI applications
PI: Prof. Dr. Stefan Wermter
Associates: Dr. Sven Magg, Dr. Matthias Kerzel, Wenhao Lu
LemonSpeech: Next Generation Speech Recognition and Speech Control for German

Duration: August 1st, 2021 - July 31st, 2022
Mentor: Prof. Dr. Stefan Wermter
Associates: Dr. Johannes Twiefel, Matthias Möller, Felix Hattemer
Details:
https://lemonspeech.com
Learning Conversational Action Repair for Intelligent Robots (LeCAREbot)
What are the principal mechanisms required to capture the robustness and interactivity of human communication, given the situational, noisy and often ambiguous nature of natural language? And how, and to what extent, can we integrate these mechanisms within an embodied functional model that is computationally and empirically verifiable on a physical robot?
We address these research questions by investigating the linguistic phenomenon of conversational repair (CR) – a method to edit and to re-interpret previously uttered sentences that were not correctly understood by the hearer. As an example, consider the following scenario:
A human operator issues the underspecified command “Pick up the apple”. The operator intends to let the robot pick up the red apple, but the robot assumes that the human refers to the green apple (possibly because it is closer to the robot). As it picks up the green apple, the observed course of actions deviates from the user’s intended course of actions. Shortly after the human recognizes this error, he utters a repair command that lets the robot pick up the red apple.
Our own and other existing computational models for human-robot dialog consider conversational repair in the case of non-understandings, but they do not consider misunderstandings. In the LeCAREbot project, we will investigate how misunderstandings can be addressed in human-robot dialog. To this end, we will develop compositional state representations for mixed verbal-physical interaction states, together with a hybrid neuro-symbolic approach to learn such state abstractions. We will combine the state abstraction with model-based hierarchical reinforcement learning to realize robotic scenarios where conversational repair plays a critical role.
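As a purely illustrative sketch, not the LeCAREbot model, the following code shows how a compositional state that combines the parsed verbal goal with the physical scene could be grounded and then revised by a repair utterance. All class and field names are hypothetical.

```python
# Illustrative only: compositional verbal-physical state with a repair step that
# re-grounds the goal after a misunderstanding (cf. the apple scenario above).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SceneObject:
    name: str        # e.g. "apple"
    colour: str      # e.g. "red" or "green"
    distance: float  # distance to the robot, used to break ties during grounding

@dataclass
class InteractionState:
    command: dict                          # parsed verbal goal, e.g. {"action": "pick_up", "object": "apple"}
    scene: List[SceneObject] = field(default_factory=list)
    grounded_goal: Optional[SceneObject] = None

def ground(state: InteractionState) -> InteractionState:
    """Naive grounding: pick the closest matching object (a possible source of misunderstandings)."""
    candidates = [o for o in state.scene if o.name == state.command["object"]]
    state.grounded_goal = min(candidates, key=lambda o: o.distance)
    return state

def repair(state: InteractionState, correction: dict) -> InteractionState:
    """Conversational repair: re-interpret the earlier command with the corrected attribute."""
    candidates = [o for o in state.scene
                  if o.name == state.command["object"] and o.colour == correction["colour"]]
    state.grounded_goal = candidates[0]
    return state

# "Pick up the apple" is grounded to the closer green apple; the repair
# utterance ("no, the red one") re-grounds the goal to the red apple.
scene = [SceneObject("apple", "green", 0.3), SceneObject("apple", "red", 0.6)]
state = ground(InteractionState({"action": "pick_up", "object": "apple"}, scene))
state = repair(state, {"colour": "red"})
```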
PIs: Dr. Manfred Eppe, Prof. Dr. Stefan Wermter
Details: LeCAREbot project
LingoRob - Learning Language in Developmental Robots

Programme: DAAD, Programm Projektbezogener Personenaustausch (project-related personal exchange) France 2017
Duration: 01.01.2017 - 31.12.2018
PIs: Dr. Xavier Hinaut, Dr. Cornelius Weber, Prof. Dr. Stefan Wermter
Staff: Johannes Twiefel, Dr. Dr. Doreen Jirak
Details: LingoRob Project
Ideomotor Transfer for Active Self-emergence (IDEAS)
Humans possess very sophisticated learning mechanisms that allow them, for example, to learn a sports discipline or task, and to transfer certain movement patterns and skills from this discipline to improve their performance in another discipline or task. This is possible, even if the transfer task is not immediately related to the initially learned task. It is this important transfer learning ability that enables humans to solve problems they have never encountered before, in ways that are beyond the current capabilities of robots and artificial agents. However, the neural foundations of transfer learning and its role in the emergence of a self are still mostly unknown, and there exists no generally accepted functional neural model of transfer learning.
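As a minimal, hedged illustration of the transfer idea (not the IDEAS model), the sketch below reuses a feature extractor assumed to have been trained on a source task and retrains only a small head for a new task. The layer sizes and the hypothetical train_step_b helper are arbitrary choices.

```python
# Minimal transfer-learning sketch: reuse features learned on task A and retrain
# only a small head for task B. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

features = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head_a = nn.Linear(64, 4)         # output head for the source task A
# ... assume `features` and `head_a` have already been trained on task A ...

for p in features.parameters():   # freeze the shared representation
    p.requires_grad = False

head_b = nn.Linear(64, 2)         # new head for the transfer task B
optimizer = torch.optim.Adam(head_b.parameters(), lr=1e-3)

def train_step_b(x, target):
    """One supervised step on task B; only the new head is updated."""
    with torch.no_grad():
        z = features(x)           # reuse the transferred representation
    loss = nn.functional.mse_loss(head_b(z), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```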
Dr. Manfred Eppe and Prof. Stefan Wermter from the Knowledge Technology group have been awarded a grant within the DFG priority programme The Active Self to address this issue.
PIs: Dr. Manfred Eppe, Prof. Dr. Stefan Wermter
Details: IDEAS project
KI-SIGS: AI Space for Intelligent Health Systems - AP390.1: Detection of Whole-body Posture and Movement

Duration: April 1st, 2020 - March 31st, 2023
PI: Prof. Dr. Stefan Wermter
Associates: Dr. Matthias Kerzel, Nicolas Duczek
Details:
KI-SIGS project page
https://ki-sigs.de
The Neural Microcircuits of Problem-solving and Goal-directed Behavior

What are the neuro-computational principles of the advanced problem-solving capabilities that we observe in higher animals like corvids and primates, and how can we use those principles as the basis for a computational model in an acting robot?
Dr. Manfred Eppe and Prof. Stefan Wermter from the Knowledge Technology group have been awarded a grant within the Experiment! programme of the Volkswagen Stiftung (acceptance rate for this programme is below 5%) to address this question within the project "The Neural Microcircuits of Problem-solving and Goal-directed Behavior".
The aim of this high-risk/high-gain project is to design and build a computational model of neural problem-solving.
PIs: Dr. Manfred Eppe, Prof. Dr. Stefan Wermter
Details: Neural Microcircuits of Problem-solving and Goal-directed Behavior project
Topological Deep Multiple Timescale Recurrent Neural Network for Cognitive Learning and Memory Modelling towards Human-level Synthetic Intelligence (NeuroSynthelligence)

PIs: Prof. Chu Kiong Loo, Prof. Dr. Stefan Wermter, Dr. Cornelius Weber
Details: NeuroSynthelligence Project
Neural Network-based Robotic Grasping via Curiosity-driven Reinforcement Learning (NeuralGrasp)

PIs: Prof. Dr. Stefan Wermter, Dr. Cornelius Weber
Associates: Burhan Hafez
Details: NeuralGrasp Project
Bio-inspired Indoor Robot Navigation (BIONAV)

PIs: Prof. Dr. Stefan Wermter, Dr. Cornelius Weber
Associates: Xiaomao Zhou
Details: BIONAV Project
DFG-Transregio "Cross-Modal Learning" (CML) Project A5 - Neurorobotic models for crossmodal joint attention and social interaction

PIs: Prof. Dr. Stefan Wermter, Prof. Dr. Xun Liu
Associates: Dr. Hugo Carneiro, Fares Abawi, Di Fu
Details:
A5 on www.crossmodal-learning.org
www.crossmodal-learning.org
DFG-Transregio "Cross-Modal Learning" (CML) Project C4 - Neurocognitive models of crossmodal language learning

PIs: Prof. Dr. Zhiyuan Liu, Dr. Cornelius Weber, Prof. Dr. Stefan Wermter
Associates: Dr. Jae Hee Lee, Tobias Hinz
Details:
C4 on www.crossmodal-learning.org
www.crossmodal-learning.org
DFG-Transregio "Cross-Modal Learning" (CML) Project Z3 - Integration initiatives for model software and robotic demonstrators

PIs: Prof. Dr. Jianwei Zhang, Prof. Dr. Stefan Wermter, Prof. Dr. Fuchun Sun
Associates: Dr. Matthias Kerzel, Dr. Burhan Hafez
Details:
Z3 on www.crossmodal-learning.org
www.crossmodal-learning.org
SOCRATES - Social Cognitive Robotics in The European Society

PI: Prof. Dr. Stefan Wermter
Project Managers: Dr. Sven Magg, Dr. Cornelius Weber
Associates: Alexander Sutherland, Henrique Siqueira
Details: www.socrates-project.eu and http://www.socrates-project.eu/research/emotion-wp2/
European Training Network: Safety Enables Cooperation in Uncertain Robotic Environments (SECURE)
The majority of existing robots in industry are pre-programmed robots working in safety zones, with visual or auditory warning signals and little of the intelligent safety awareness necessary for dynamic and unpredictable human or domestic environments. In the future, novel cognitive robotic companions will be developed, ranging from service robots to humanoid robots, which should be able to learn from users and adapt to open dynamic contexts. The development of such robot companions will lead to new challenges for human-robot cooperation and safety, going well beyond the current state of the art. Therefore, the SECURE project aims to train a new generation of researchers on safe cognitive robot concepts for human work and living spaces on the most advanced humanoid robot platforms available in Europe. The fellows will be trained through an innovative concept of project-based learning and constructivist learning in supervised peer networks, where they will gain experience from an intersectoral programme involving universities, research institutes, and large and SME companies from the public and private sectors. The training domain will integrate multidisciplinary concepts from the fields of cognitive human-robot interaction, computer science and intelligent robotics, where a new approach of integrating principles of embodiment, situation and interaction will be pursued to address future challenges for safe human-robot environments.
This project is funded by the European Union’s Horizon 2020 research and innovation programme.
Coordinator: Prof. Dr. S. Wermter
Project Manager: Dr. S. Magg
Associates: Chandrakant Bothe, Egor Lakomkin, Mohammad Zamani
Details: European Training Network "SECURE"
Cross-modal Learning (CROSS)

Coordinators: Prof. Dr. S. Wermter, Prof. Dr. J. Zhang
Collaborators: Prof. Dr. B. Röder, Prof. Dr. A. K. Engel
Details: CROSS Project
Echo State Networks for Developing Language Robots (EchoRob)

Leading Investigator: Dr. X. Hinaut, Prof. Dr. S. Wermter
Associates: J. Twiefel, Dr. S. Magg
Details: EchoRob Project
Cooperating Robots (CoRob)
Leading Investigator: Prof. Dr. S. Wermter
Associates: Dr. M. Borghetti, Dr. S. Magg, Dr. C. Weber
Details: CoRob Project
Neural-Network-based Action-Object Semantics for Assistive Robotics

Gestures and Reference Instructions for Deep robot learning (GRID)
Leading Investigator: Prof. Dr. S. Wermter
Associates: P. Barros, Dr. S. Magg, Dr. C. Weber
Details: Grid Project
Teaching With Interactive Reinforcement Learning (TWIRL)
Reinforcement learning (RL) has been a very useful approach, but it often converges slowly because of large-scale exploration. Interactive Reinforcement Learning (IRL), a so far rarely used variant of RL that aims to speed up convergence, supports the learning agent with a human trainer who gives directions on how to tackle the problem.
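The following sketch illustrates the general IRL idea under simple assumptions (tabular Q-learning, advice given as a concrete action); it is not the TWIRL implementation.

```python
# Illustrative sketch of interactive reinforcement learning: standard tabular
# Q-learning in which a human trainer can occasionally override exploration with advice.
import random
from collections import defaultdict

class InteractiveQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.2):
        self.q = defaultdict(float)          # Q-values indexed by (state, action)
        self.actions, self.alpha, self.gamma, self.epsilon = actions, alpha, gamma, epsilon

    def act(self, state, advice=None):
        if advice is not None:               # the trainer's advice replaces exploration
            return advice
        if random.random() < self.epsilon:   # otherwise explore ...
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # ... or exploit

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update; advised and self-chosen actions are treated alike."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```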
Leading Investigator: Prof. Dr. S. Wermter
Associates: F. Cruz, Dr. S. Magg, Dr. C. Weber
Details: TWIRL Project
Cross-modal Interaction in Natural and Artificial Cognitive Systems (CINACS)
Spokesperson CINACS: Prof. Dr. J. Zhang
Leading Investigator: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: J. Bauer, J. D. Chacón, J. Kleesiek
Details: CINACS Project
Cognitive Assistive Systems (CASY)

Spokesperson CASY: Prof. Dr. S. Wermter
Leading Investigator: Prof. Dr. S. Wermter, Prof. Dr. J. Zhang, Prof. Dr. C. Habel, Prof. Dr.-Ing. W. Menzel
Details: CASY Project
Neuro-inspired Human-Robot Interaction

The aim of the research of the Knowledge Technology Group is to contribute to fundamental research by offering functional models for testing neuro-cognitive hypotheses about aspects of human communication, and by providing efficient bio-inspired methods to produce robust controllers for a communicative robot that successfully engages in human-robot interaction.
Leading Investigator: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: S. Heinrich, D. Jirak, S. Magg
Details: HRI Project
Robotics for Development of Cognition (RobotDoC)

Leading Investigator: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: N. Navarro, J. Zhong
Details: RobotDoC Project
Knowledgeable SErvice Robots for Aging (KSERA)

The main aim is to design a pleasant, easy-to-use and proactive socially assistive robot (SAR) that uses context information obtained from sensors in the older person's home to provide useful information and timely support at the right place.
Leading Investigator: Prof. Dr. S. Wermter, Dr. C. Weber
Associates: N. Meins, W. Yan
Details: KSERA Project
What it Means to Communicate (NESTCOM)

Leading Investigator: Prof. Dr. S. Wermter
Associates: Dr. M. Knowles, M. Page
Details: NESTCOM Project
Midbrain Computational and Robotic Auditory Model for focused hearing (MiCRAM)

We collaboratively develop a biologically plausible computational model of auditory processing at the level of the inferior colliculus (IC). This approach will potentially clarify the roles of the multiple spectral and temporal representations present at the level of the IC and investigate how representations of sounds interact with auditory processing at that level to focus attention and select sound sources for robot models of focused hearing.
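As a deliberately simplified toy example, far simpler than the intended IC model, the sketch below derives a spectral (per-band energy) and a temporal (onset) representation from a sound mixture and combines them into attention weights over frequency bands; all parameter choices are illustrative assumptions.

```python
# Toy illustration only: combine spectral and temporal cues into attention
# weights and pick one frequency band as the currently attended "source".
import numpy as np

def spectro_temporal_attention(signal, n_bands=8, frame=512):
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(frames, axis=1))             # time x frequency magnitudes
    bands = np.array_split(spectra, n_bands, axis=1)
    band_energy = np.stack([b.mean(axis=1) for b in bands])   # spectral representation
    onsets = np.maximum(np.diff(band_energy, axis=1), 0.0)    # temporal (onset) representation
    salience = band_energy.mean(axis=1) + onsets.mean(axis=1)
    attention = salience / salience.sum()                     # normalized attention weights
    return attention, int(attention.argmax())                 # weights and the attended band
```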
Leading Investigator: Prof. Dr. S. Wermter, Dr. H. Erwin
Associates: Dr. J. Liu, Dr. M. Elshaw
Details: MiCRAM Project
Biomimetic Multimodal Learning in a Mirror Neuron-based Robot (MirrorBot)

This project develops and studies emerging embodied representations based on mirror neurons. New techniques, including cell assemblies, associative neural networks and Hebbian-type learning, associate visual, auditory and motor concepts. The basis of the research is an examination of the emergence of representations of actions, perceptions, conceptions, and language in a MirrorBot, a biologically inspired neural robot equipped with polymodal associative memory.
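The following minimal sketch, which is not the MirrorBot architecture, illustrates the underlying Hebbian idea: an outer-product learning rule links binary "visual" and "motor" patterns so that presenting one modality recalls its associate in the other.

```python
# Minimal Hebbian association sketch: co-active units across two modalities are linked.
import numpy as np

class HebbianAssociator:
    def __init__(self, n_visual, n_motor, lr=1.0):
        self.W = np.zeros((n_motor, n_visual))
        self.lr = lr

    def learn(self, visual, motor):
        """Hebbian update: strengthen weights between co-active visual and motor units."""
        self.W += self.lr * np.outer(motor, visual)

    def recall_motor(self, visual):
        """Recall the motor pattern associated with a visual input."""
        return (self.W @ visual > 0.5 * visual.sum()).astype(int)

# Example: associate the sight of a graspable object with a "grasp" motor pattern.
see_object = np.array([1, 0, 1, 0])
grasp = np.array([0, 1, 1])
memory = HebbianAssociator(n_visual=4, n_motor=3)
memory.learn(see_object, grasp)
print(memory.recall_motor(see_object))   # -> [0 1 1]
```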
Details: MirrorBot Project