CIVET: Systematic Evaluation of Understanding in VLMs / Rizzoli, Massimo; Alghisi, Simone; Khomyn, Olha; Roccabruna, Gabriel; Mousavi, Seyed Mahed; Riccardi, Giuseppe. - (2025), pp. 4462-4480. (Paper presented at the ACL conference held in Suzhou, China, 4-9 November 2025) [10.18653/v1/2025.findings-emnlp.239].
CIVET: Systematic Evaluation of Understanding in VLMs
Rizzoli Massimo (co-first author); Alghisi Simone (co-first author); Khomyn Olha; Roccabruna Gabriel; Mousavi Seyed Mahed; Riccardi Giuseppe
2025
Abstract
While Vision-Language Models (VLMs) have achieved competitive performance in various tasks, their comprehension of the underlying structure and semantics of a scene remains understudied. To investigate the understanding of VLMs, we study their capabilities regarding object properties and relations in a controlled and interpretable manner. To this end, we introduce CIVET, a novel and extensible framework for systematiC evaluatIon Via controllEd sTimuli. CIVET addresses the lack of standardized systematic evaluation for assessing VLMs' understanding, enabling researchers to test hypotheses with statistical rigor. With CIVET, we evaluate five state-of-the-art VLMs on exhaustive sets of stimuli, free from annotation noise, dataset-specific biases, and uncontrolled scene complexity. Our findings reveal that 1) current VLMs can accurately recognize only a limited set of basic object properties; 2) their performance heavily depends on the position of the object in the scene; and 3) they struggle to understand basic relations among objects. Furthermore, a comparative evaluation with human annotators reveals that VLMs still fall short of achieving human-level accuracy.
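
To make the evaluation protocol concrete, below is a minimal, hypothetical sketch of a controlled-stimuli harness in the spirit of the abstract: it enumerates an exhaustive factorial grid of object factors, queries a model about one property, and reports per-position accuracy with confidence intervals. None of this code comes from the CIVET paper; the factor levels, the query_vlm stub, and the Wilson-interval aggregation are all illustrative assumptions.

```python
# Illustrative sketch only: CIVET's actual stimulus generator, prompts, and
# models are not reproduced here. All names below are assumptions.
import itertools
import math
from collections import defaultdict

# Exhaustive factorial design over controlled factors, echoing the
# "exhaustive sets of stimuli" described in the abstract (levels assumed).
SHAPES = ["cube", "sphere", "cylinder"]
COLORS = ["red", "green", "blue"]
POSITIONS = ["top-left", "top-right", "center", "bottom-left", "bottom-right"]

def query_vlm(stimulus: dict, question: str) -> str:
    """Hypothetical stand-in for rendering a stimulus and querying a VLM.
    A real harness would render an image and call a model API; here we
    return the true color so the sketch runs end to end."""
    return stimulus["color"]

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion, so per-condition
    accuracies carry uncertainty estimates rather than bare point values."""
    if n == 0:
        return (0.0, 0.0)
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - half, center + half)

# Evaluate one property (color) across every factor combination, then slice
# accuracy by object position to probe the positional dependence the
# abstract reports.
correct_by_position = defaultdict(lambda: [0, 0])  # position -> [hits, total]
for shape, color, position in itertools.product(SHAPES, COLORS, POSITIONS):
    stimulus = {"shape": shape, "color": color, "position": position}
    answer = query_vlm(stimulus, "What color is the object?")
    hit = int(answer.strip().lower() == color)
    correct_by_position[position][0] += hit
    correct_by_position[position][1] += 1

for position, (k, n) in sorted(correct_by_position.items()):
    lo, hi = wilson_interval(k, n)
    print(f"{position:12s} accuracy={k/n:.2f}  95% CI=({lo:.2f}, {hi:.2f})")
```

Because every factor combination appears exactly once, accuracy differences across positions cannot be explained by uneven sampling of shapes or colors, which is the kind of controlled comparison the framework is described as enabling.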



