Two papers accepted at ECAI 2023
17 July 2023, by Chris Biemann
The '26th European Conference on Artificial Intelligence' (ECAI 2023) accepted the following papers:
- "Using Self-Supervised Dual Constraint Contrastive Learning for Cross-modal Retrieval" - Xintong Wang, Xiaoyu Li, Liang Ding, Sanyuan Zhao, and Chris Biemann
Abstract: In this work, we present a self-supervised dual constraint contrastive method for efficiently fine-tuning vision-language pre-trained (VLP) models, which have achieved great success on various cross-modal tasks, since fully fine-tuning these pre-trained models is computationally expensive and tends to result in catastrophic forgetting, limited by the size and quality of labeled datasets. Our approach freezes the pre-trained VLP model as a fundamental, generalized, and transferable multimodal representation and incorporates lightweight parameters to learn domain- and task-specific features without labeled data. We demonstrate that our self-supervised dual contrastive model outperforms previous fine-tuning methods on the MS COCO and Flickr30K datasets for the cross-modal retrieval task, with an even more pronounced improvement in zero-shot performance. Furthermore, experiments on the MOTIF dataset show that our self-supervised approach remains effective when trained on a small, out-of-domain dataset without overfitting. As a plug-and-play method, our approach is agnostic to the underlying models and can be easily integrated with different VLP models, allowing for the potential incorporation of future advancements in VLP models.
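The core idea of contrastive fine-tuning over a frozen backbone can be illustrated with a generic symmetric contrastive (InfoNCE) objective. This is a minimal sketch, not the paper's dual-constraint loss: the toy embeddings below stand in for frozen VLP encoder outputs passed through a small trainable projection (the "lightweight parameters" mentioned in the abstract), and the temperature value is an illustrative default.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Unit-normalize embeddings so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric image-text contrastive (InfoNCE) loss over a batch.

    Matched image-text pairs sit on the diagonal of the similarity
    matrix; all other entries in the same row or column act as negatives.
    """
    img = l2_normalize(np.asarray(img_emb, dtype=float))
    txt = l2_normalize(np.asarray(txt_emb, dtype=float))
    logits = img @ txt.T / temperature  # (batch, batch) similarity matrix

    def nll_diag(m):
        # Negative log-likelihood of the diagonal under a row-wise softmax.
        m = m - m.max(axis=1, keepdims=True)  # numerical stability
        log_probs = m - np.log(np.exp(m).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (nll_diag(logits) + nll_diag(logits.T))

# Toy stand-ins for projected embeddings of four image-text pairs.
ids = np.eye(4)
aligned = symmetric_info_nce(ids, ids)                      # matched pairs: near-zero loss
mismatched = symmetric_info_nce(ids, np.roll(ids, 1, axis=0))  # shuffled pairs: large loss
```

Minimizing such a loss pulls matched image and text embeddings together while pushing unmatched pairs apart, which is the general mechanism the abstract's fine-tuning builds on.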
- "Dimensions of Similarity: Towards Interpretable Dimension-Based Text Similarity" - Hans Ole Hatzel, Fynn Petersen-Frey, Tim Fischer, and Chris Biemann
Abstract: This paper paves the way for interpretable and configurable semantic similarity search by training state-of-the-art models to identify textual similarity guided by a set of aspects or dimensions. We analyze which interpretable dimensions of similarity the trained models emphasize most. We conceptually introduce configurable similarity search for finding documents that are similar in specific aspects but dissimilar in others. To evaluate the interpretability of these dimensions, we experiment with downstream retrieval tasks using weighted combinations of these dimensions. Configurable similarity search is an invaluable tool for exploring datasets and will certainly be helpful in many applied natural language processing research applications.
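The notion of a weighted combination of similarity dimensions can be sketched as follows. This is a hypothetical illustration, not the paper's model: the dimension labels, score ranges, and closeness formula are assumptions made only for the example.

```python
import numpy as np

def dimension_similarity(query_dims, doc_dims, weights):
    """Score documents by a weighted combination of per-dimension similarities.

    Each query/document is represented by one score per interpretable
    dimension (here "topic" and "style", hypothetical labels in [0, 1]).
    A positive weight rewards similarity in that dimension; a negative
    weight rewards dissimilarity, enabling configurable search.
    """
    query = np.asarray(query_dims, dtype=float)
    docs = np.asarray(doc_dims, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Per-dimension closeness: 1 when the document matches the query
    # exactly in that dimension, smaller as the distance grows.
    closeness = 1.0 - np.abs(docs - query)
    return closeness @ w

# Dimensions: [topic, style] (illustrative). Find documents on the same
# topic (weight +1) but written in a different style (weight -1).
query = [0.9, 0.2]
docs = np.array([
    [0.9, 0.2],  # same topic, same style
    [0.9, 0.8],  # same topic, different style -- should rank first
    [0.1, 0.8],  # different topic, different style
])
scores = dimension_similarity(query, docs, weights=[1.0, -1.0])
best = int(np.argmax(scores))  # index of the top-ranked document
```

Flipping the sign of a weight reconfigures the search without retraining, which is the kind of configurability the abstract describes.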
The papers will soon be available in our "Publications" section.