CROPE: Evaluating In-Context Adaptation of Vision and Language Models to Culture-Specific Concepts
Jul 1, 2025
Malvina Nikandrou

George Pantazopoulos
Nikolas Vitsakis
Ioannis Konstas
Alessandro Suglia
Abstract
As Vision and Language models (VLMs) are reaching users across the globe, assessing their cultural understanding has become a critical challenge. In this paper, we introduce CROPE, a visual question answering benchmark designed to probe the knowledge of culture-specific concepts and evaluate the capacity for cultural adaptation through contextual information. This allows us to distinguish between parametric knowledge acquired during training and contextual knowledge provided during inference via visual and textual descriptions. Our evaluation of several state-of-the-art open VLMs shows large performance disparities between culture-specific and common concepts in the parametric setting. Moreover, experiments with contextual knowledge indicate that models struggle to effectively utilize multimodal information and bind culture-specific concepts to their depictions. Our findings reveal limitations in the cultural understanding and adaptability of current VLMs that need to be addressed toward more culturally inclusive models.
Type
Publication
In the 2025 Annual Conference of the North American Chapter of the Association for Computational Linguistics