Enhancing Next Active Object-based Egocentric Action Anticipation with Guided Attention

Thakur, Sanket; Beyan, Cigdem; Morerio, Pietro; Murino, Vittorio; Del Bue, Alessio
2023

Abstract

Short-term action anticipation (STA) in first-person videos is a challenging task that involves understanding next active object interactions and predicting future actions. Existing action anticipation methods have focused primarily on features extracted from video clips, often overlooking the objects and their interactions. To address STA in egocentric videos, we propose a novel approach that applies a guided attention mechanism between object features and the spatiotemporal features extracted from video clips, enhancing the motion and contextual information, and then decodes the object-centric and motion-centric information. Our method, GANO (Guided Attention for Next active Objects), is a multi-modal, end-to-end, single-transformer-based network. Experimental results on the largest egocentric dataset demonstrate that GANO outperforms existing state-of-the-art methods in predicting the next active object label, its bounding box location, the corresponding future action, and the time to contact the object. An ablation study confirms the positive contribution of the guided attention mechanism over other fusion methods. Moreover, the next active object location and class label predictions of GANO can be further improved simply by appending learnable object tokens to the region-of-interest embeddings, as sketched below.
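For readers interested in the mechanism the abstract describes, the following is a minimal sketch of a guided cross-attention fusion between object (region-of-interest) embeddings and spatiotemporal clip features, including learnable object tokens appended to the ROI embeddings. All module names, dimensions, and the token count are illustrative assumptions; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GuidedAttentionFusion(nn.Module):
    """Sketch of a guided cross-attention block: object (ROI) embeddings
    guide spatiotemporal video features. Hypothetical hyperparameters."""

    def __init__(self, dim=256, num_heads=8, num_object_tokens=10):
        super().__init__()
        # Learnable object tokens appended to the ROI embeddings,
        # per the abstract's final remark (count is an assumption).
        self.object_tokens = nn.Parameter(
            torch.randn(1, num_object_tokens, dim) * 0.02
        )
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, video_feats, roi_embeds):
        # video_feats: (B, T, dim) spatiotemporal clip features
        # roi_embeds:  (B, N, dim) per-object region-of-interest embeddings
        obj = torch.cat(
            [roi_embeds, self.object_tokens.expand(roi_embeds.size(0), -1, -1)],
            dim=1,
        )
        # Video features attend to object features, so object context
        # "guides" the motion representation.
        guided, _ = self.cross_attn(
            self.norm_q(video_feats), self.norm_kv(obj), self.norm_kv(obj)
        )
        fused = video_feats + guided    # residual connection
        return fused + self.ffn(fused)  # position-wise feed-forward

# Example usage with illustrative shapes:
# fusion = GuidedAttentionFusion()
# fused = fusion(torch.randn(2, 16, 256), torch.randn(2, 5, 256))  # (2, 16, 256)
```

The fused sequence would then feed the decoding stages that predict the next active object label, its bounding box, the future action, and the time to contact; those heads are omitted here.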
2023
Proceedings of the IEEE International Conference on Image Processing (ICIP 2023)
Malaysia
IEEE
Thakur, Sanket; Beyan, Cigdem; Morerio, Pietro; Murino, Vittorio; Del Bue, Alessio
Enhancing Next Active Object-based Egocentric Action Anticipation with Guided Attention / Thakur, Sanket; Beyan, Cigdem; Morerio, Pietro; Murino, Vittorio; Del Bue, Alessio. - (2023). (Paper presented at IEEE ICIP 2023, held in Malaysia, 8-11 October 2023).

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/387312