Multimedia data are usually represented by multiple features. In this paper, we propose a new algorithm, Multi-feature Learning via Hierarchical Regression, for multimedia semantics understanding, addressing two issues. First, labeling a large amount of training data is labor-intensive, so it is valuable to leverage unlabeled data effectively to facilitate multimedia semantics understanding. Second, since multimedia data can be represented by multiple features, it is advantageous to develop an algorithm that combines the evidence obtained from different features to infer reliable multimedia semantic concept classifiers. We design a hierarchical regression model to exploit the information derived from each type of feature; these per-feature models are then collaboratively fused to obtain a multimedia semantic concept classifier. Both the label information and the data distribution of each feature representing the multimedia data are considered. The algorithm applies to a wide range of multimedia tasks, and experiments are conducted on video data for concept annotation and action recognition. On the TRECVID and CareMedia video datasets, the experimental results show that combining multiple features is beneficial. The proposed algorithm performs remarkably well when only a small amount of labeled training data is available.
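The abstract's core idea, training one regressor per feature type and fusing their evidence, can be illustrated with a minimal sketch. This is not the paper's actual hierarchical method (which also exploits unlabeled data and learns the fusion collaboratively); it is a simplified late-fusion baseline using closed-form ridge regression, with all function names and the toy data invented for illustration:

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def fuse_multi_feature(views, y, lam=1.0):
    """Train one regressor per feature view, then average their scores.

    `views` is a list of (n_samples, d_i) matrices, one per feature type
    (e.g., color, texture, motion). Simple uniform-weight late fusion
    stands in for the paper's collaborative hierarchical fusion.
    """
    weights = [fit_ridge(X, y, lam) for X in views]

    def predict(test_views):
        scores = [X @ w for X, w in zip(test_views, weights)]
        return np.mean(scores, axis=0)

    return predict

# Toy example: two feature views of the same 6 samples, +/-1 concept labels.
rng = np.random.default_rng(0)
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
view1 = y[:, None] + 0.1 * rng.standard_normal((6, 3))
view2 = y[:, None] + 0.1 * rng.standard_normal((6, 4))

predict = fuse_multi_feature([view1, view2], y)
fused = predict([view1, view2])
print(np.sign(fused))
```

The fused score is just the mean of the per-view predictions; the paper instead learns how much to trust each feature type, which matters when some views are noisy or weakly informative.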
| Field | Value |
|---|---|
| Title | Multi-Feature Fusion via Hierarchical Regression for Multimedia Analysis |
| Authors | Y. Yang; J. Song; Z. Huang; Z. Ma; N. Sebe; A. Hauptmann |
| Journal | IEEE TRANSACTIONS ON MULTIMEDIA |
| Year of publication | 2013 |
| Issue | 3 |
| Scopus identifier | 2-s2.0-84897724955 |
| Digital Object Identifier (DOI) | http://dx.doi.org/10.1109/TMM.2012.2234731 |
| Record type | 03.1 Journal article |