Human interaction recognition through two-phase sparse coding

Zhang, Bo; Conci, Nicola; De Natale, Francesco
2014-01-01

Abstract

In this paper, we propose a novel method to recognize two-person interactions through a two-phase sparse coding approach. In the first phase, we apply non-negative sparse coding to the spatio-temporal interest points (STIPs) extracted from the videos, and then construct a feature vector for each video by sum-pooling and l2-normalization. In the second phase, we apply the label-consistent K-SVD (LC-KSVD) algorithm to the video feature vectors to train a new dictionary. The algorithm has been validated on the TV Human Interaction dataset, and the experimental results show that the classification performance is considerably improved compared with the standard bag-of-words approach and single-layer non-negative sparse coding. © 2014 SPIE.
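
To make the two-phase pipeline concrete, below is a minimal Python sketch, not the authors' implementation. Assumptions: random non-negative vectors stand in for real STIP descriptors (e.g., HOG/HOF); scikit-learn's DictionaryLearning with positive_code=True substitutes for the paper's non-negative sparse coder; LC-KSVD, which has no standard library implementation, is approximated here by dictionary learning on a label-augmented feature matrix; all sizes and weights are illustrative.

import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)

# --- Phase 1: non-negative sparse coding of STIP descriptors ---
# Random non-negative vectors stand in for real per-video STIP descriptors.
n_videos, pts_per_video, feat_dim, n_atoms1 = 8, 30, 72, 32
stips = [np.abs(rng.standard_normal((pts_per_video, feat_dim)))
         for _ in range(n_videos)]

coder = DictionaryLearning(n_components=n_atoms1, alpha=0.1,
                           fit_algorithm='cd',
                           transform_algorithm='lasso_cd',
                           positive_code=True, random_state=0)
coder.fit(np.vstack(stips))

# Sum-pool the non-negative codes of each video's STIPs, then l2-normalize.
video_feats = np.vstack([coder.transform(s).sum(axis=0) for s in stips])
video_feats = normalize(video_feats, norm='l2')

# --- Phase 2: label-consistent dictionary learning on video features ---
# LC-KSVD jointly learns a dictionary and a linear classifier; the sketch
# approximates the label-consistency idea by appending the scaled one-hot
# label matrix to the features before learning the second dictionary
# (the discriminative sparse-code term of LC-KSVD is omitted).
labels = np.repeat(np.arange(2), n_videos // 2)   # two toy classes
H = np.eye(2)[labels]                             # one-hot label matrix
beta = 1.0                                        # label-consistency weight
augmented = np.hstack([video_feats, np.sqrt(beta) * H])

dl2 = DictionaryLearning(n_components=6, alpha=0.05,
                         fit_algorithm='cd',
                         transform_algorithm='lasso_cd',
                         random_state=0)
codes = dl2.fit_transform(augmented)

# Split the learned atoms into a reconstructive part D and a classifier W;
# a video is labeled by the arg-max of its class scores.
D = dl2.components_[:, :n_atoms1]
W = dl2.components_[:, n_atoms1:] / np.sqrt(beta)
pred = np.argmax(codes @ W, axis=1)
print('train accuracy:', (pred == labels).mean())

In the full LC-KSVD formulation, the label-augmented matrix additionally carries a discriminative sparse-code target, and dictionary, classifier, and codes are updated jointly by K-SVD; the stacked-matrix trick above only mirrors its joint reconstruction-plus-classification objective.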
Published in: Electronic Imaging, 2014
Publisher: SPIE-INT SOC OPTICAL ENGINEERING, 1000 20TH ST, PO BOX 10, BELLINGHAM, WA 98227-0010 USA
ISBN: 9780819499431
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/101190
Citations
  • PubMed Central: n/a
  • Scopus: 0
  • Web of Science: 0
  • OpenAlex: n/a