A Neurorobotic Experiment for Crossmodal Conflict Resolution in Complex Environments
Crossmodal conflict resolution is a crucial component of robust perception and behaviour for robots operating in complex, noisy environments.
The audio-visual localization task consisted of subjects selecting which avatar (out of the 4 avatars in the scene) they believed the auditory cue was coming from. The 4 avatars may move their lips and/or arm in temporal correspondence with an auditory cue. The latter consists of a vocalized combination of 3 syllables (all permutations without repetition composed of "ha", …).
- Baseline: Auditory cue and static avatars.
- Moving Lips: Auditory cue and one avatar with moving lips.
- Moving Arm: Auditory cue and one avatar with a moving arm.
- Moving Lips+Arm: Auditory cue and one avatar with moving lips and arm.
- Moving Lips–Arm: Auditory cue and one avatar with moving lips and another avatar with a moving arm.
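The five conditions above can be encoded compactly as data. The sketch below is purely illustrative (the field names, avatar indices, and the `is_congruent` helper are our assumptions, not part of the released corpus); it shows how only the Lips–Arm condition places visual cues on two different avatars, producing the crossmodal conflict under study.

```python
# Hypothetical encoding of the five audio-visual conditions.
# Names, indices, and helper are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Condition:
    name: str
    lips_avatar: Optional[int]  # avatar with moving lips (None = static)
    arm_avatar: Optional[int]   # avatar with a moving arm (None = static)

TARGET = 2  # example index of the avatar the auditory cue belongs to

CONDITIONS = [
    Condition("Baseline",        lips_avatar=None,   arm_avatar=None),
    Condition("Moving Lips",     lips_avatar=TARGET, arm_avatar=None),
    Condition("Moving Arm",      lips_avatar=None,   arm_avatar=TARGET),
    Condition("Moving Lips+Arm", lips_avatar=TARGET, arm_avatar=TARGET),
    # Lips and arm cues on two different avatars (the conflict condition)
    Condition("Moving Lips-Arm", lips_avatar=TARGET,
              arm_avatar=(TARGET + 1) % 4),
]

def is_congruent(c: Condition) -> bool:
    """True when all moving visual cues sit on a single avatar."""
    moving = {a for a in (c.lips_avatar, c.arm_avatar) if a is not None}
    return len(moving) <= 1
```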
We propose a deep learning model processing both spatial and feature-based information in which low-level areas (such as the visual and auditory cortices) are predominantly unisensory, while neurons in higher-order areas encode multisensory representations. The proposed architecture comprises 3 input channels (audio, face, and body motion) and a hidden layer that computes a discrete behavioural response on the basis of the output of these unisensory channels.
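The channel layout described above can be sketched as a small feed-forward network: three unisensory channels whose outputs are concatenated and passed through a shared multisensory hidden layer that yields a response over the 4 avatars. All layer sizes, the plain-NumPy formulation, and the random weights are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of the three-channel architecture: unisensory
# audio/face/body channels feeding a shared multisensory hidden layer.
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    return np.maximum(0.0, x @ w + b)  # fully connected layer with ReLU

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Unisensory channel weights (assumed input/hidden sizes)
W_audio, b_audio = rng.normal(size=(40, 16)) * 0.1, np.zeros(16)
W_face,  b_face  = rng.normal(size=(64, 16)) * 0.1, np.zeros(16)
W_body,  b_body  = rng.normal(size=(64, 16)) * 0.1, np.zeros(16)

# Multisensory hidden layer over the concatenated channel outputs
W_hidden, b_hidden = rng.normal(size=(48, 32)) * 0.1, np.zeros(32)
W_out,    b_out    = rng.normal(size=(32, 4)) * 0.1, np.zeros(4)

def forward(audio, face, body):
    h = np.concatenate([
        dense(audio, W_audio, b_audio),  # unisensory audio features
        dense(face,  W_face,  b_face),   # unisensory face-motion features
        dense(body,  W_body,  b_body),   # unisensory body-motion features
    ])
    h = dense(h, W_hidden, b_hidden)     # multisensory representation
    return softmax(h @ W_out + b_out)    # response over the 4 avatars

p = forward(rng.normal(size=40), rng.normal(size=64), rng.normal(size=64))
```

The behavioural response is read out as the avatar with the highest probability, mirroring the forced-choice localization task.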
The neural networks and training strategies developed within this research are available on our GitHub page:
Our source code and corpus are distributed under the Creative Commons CC BY-NC-SA 3.0 DE license. By using this corpus, you agree to the following terms:
- To cite our reference in any paper that makes use of the database and/or source code. The reference is provided at the end of this page.
- To use the corpus and/or source code for research purposes only.
- To not provide the corpus and/or source code to any third parties.
Parisi, G. I., Barros, P., Fu, D., Magg, S., Wu, H., Liu, X., and Wermter, S. A Neurorobotic Experiment for Crossmodal Conflict Resolution in Complex Environments. Submitted to the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
Pablo Barros - barros"AT"informatik.uni-hamburg.de
German I. Parisi - parisi"AT"informatik.uni-hamburg.de