CROSS is a project that aims to prepare and initiate research between the life sciences (neuroscience, psychology) and computer science in Hamburg and Beijing, in order to establish an interdisciplinary collaborative research centre at the intersection of artificial intelligence, neuroscience and psychology, focused on the topic of cross-modal learning. Our long-term challenge is to understand the neural, cognitive and computational basis of cross-modal learning and to use this understanding for (1) better analyzing human performance and (2) building effective cross-modal computational systems.
The long-term goal of our research is to understand the neural, cognitive and computational mechanisms of cross-modal learning and to use this understanding for (1) enhancing human performance and (2) building artificial cross-modal systems whose behavior and performance are similar to those of animals and humans. The term "cross-modal learning" refers to the fusion of complementary information from multiple sensory modalities in such a way that the learning that occurs within any individual sensory modality can be combined with or enhanced by information from one or more other modalities. Cross-modal learning is crucial for human understanding of the world and for acting effectively in a complex environment. Examples are ubiquitous: learning to grasp and manipulate objects, learning to walk, learning to read and write, learning to understand language and its referents, and so on. In all these examples, visual, auditory, somatosensory or other modalities must be integrated, and learning must be cross-modal. In fact, a broad range of acquired human skills is cross-modal, and many of the most advanced human capabilities, such as those involved in social cognition, require learning from the richest combinations of cross-modal information.
This project pursues four key objectives that specify the goals of the preparatory work needed to establish the planned Hamburg-Beijing collaborative research centre.