Affective Meeting Video Analysis
Sebe, Niculae
2005-01-01
Abstract
In this paper we examine the affective content of meeting videos. First, we asked five subjects to manually label three meeting videos using continuous response measurement (real-time, continuous-scale labeling) along energy and valence, the two dimensions of the human affect space. Then we automatically extracted audio-visual features to characterize the affective content of the videos. We compare the results of the manual labeling with those of low-level automatic audio-visual feature extraction. Our analysis yields promising results, suggesting that affective meeting video analysis can produce observations useful for automatic indexing. © 2005 IEEE.



