Minimizing Dataset Bias: Discriminative Multi-task Sparse Coding through Shared Subspace Learning for Image Classification
Liu, Gaowen;Yan, Yan;Sebe, Niculae
2014-01-01
Abstract
Sparse coding has been shown to find succinct representations of stimuli. Recently, it has been successfully applied to a variety of problems in image processing and analysis. Sparse coding models a data vector as a linear combination of a few elements from a dictionary. However, most existing sparse coding methods are applied to a single task on a single dataset. The learned dictionary is then possibly biased towards that specific dataset and lacks generalization ability. In light of this, in this paper we propose a multi-task sparse coding approach that uncovers a shared subspace among heterogeneous datasets. The proposed multi-task coding strategy leverages the commonality among different datasets. Moreover, our multi-task coding framework is capable of direct classification by incorporating label information. Experimental results show that the dictionary learned by our approach generalizes better and that our model performs better classification compared to th...
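The abstract's core model is that a data vector x is approximated as D*a, where D is a dictionary and the code a has few nonzero entries. The paper's multi-task, shared-subspace formulation is not reproduced here; as a minimal single-task sketch of that underlying sparse coding step, the snippet below solves the standard lasso problem min_a ||x - D*a||^2 + lambda*||a||_1 for a fixed dictionary. The dictionary, signal dimensions, and regularization weight are illustrative assumptions, not values from the paper.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical setup: a fixed dictionary D with 50 atoms in R^20.
# (The paper learns D jointly across datasets; here it is random for illustration.)
n_features, n_atoms = 20, 50
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms, a common convention

# Synthesize a signal from 3 atoms so a genuinely sparse code exists.
true_code = np.zeros(n_atoms)
true_code[[3, 17, 42]] = [1.5, -2.0, 0.7]
x = D @ true_code + 0.01 * rng.standard_normal(n_features)

# Sparse coding step: min_a ||x - D a||^2 + lambda * ||a||_1 (lasso).
coder = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000)
coder.fit(D, x)
code = coder.coef_

print("nonzero coefficients:", np.flatnonzero(code))

In the multi-task setting described in the abstract, one dictionary (or a shared subspace of it) would be fit jointly across several datasets rather than per dataset, which is what counters the dataset bias the paper targets.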



