Computer Vision-Driven Movement Annotations to Advance fNIRS Pre-Processing Algorithms

Bizzego, Andrea; Carollo, Alessandro; Senay, Burak; Fong, Seraphina; Furlanello, Cesare; Esposito, Gianluca
2024

Abstract

Functional near-infrared spectroscopy (fNIRS) is well suited to studying brain activity in naturalistic settings because of its tolerance for movement. However, residual motion artifacts still compromise fNIRS data quality and can lead to spurious results. Although several motion artifact correction algorithms have been proposed, their development and rigorous evaluation have been hampered by the lack of ground truth information, which is time- and labor-intensive to annotate manually. This work investigates the feasibility and reliability of a deep learning computer vision (CV) approach for the automated detection and annotation of head movements from video recordings. Fifteen participants performed controlled head movements across three main rotational axes (head up/down, head left/right, bend left/right), at two speeds (fast and slow), and in three ways (half, complete, and repeated movements). Sessions were video-recorded, and head movement information was obtained with the CV approach: a one-dimensional UNet (1D-UNet) was implemented to detect head movements from head orientation signals extracted by a pre-trained model (SynergyNet). Movements were manually annotated to provide a ground truth for model evaluation, and performance was quantified with the Jaccard index. The model performed comparably on the training and test sets (J_train = 0.954; J_test = 0.865) and annotated movements consistently across movement axes and speeds. However, performance varied by movement type, with the best results obtained for repeated (J_test = 0.941), followed by complete (J_test = 0.872) and half movements (J_test = 0.826). These findings suggest that the proposed CV approach provides accurate ground truth movement information. Future research can rely on it to evaluate and improve fNIRS motion artifact correction algorithms.
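For illustration, the frame-level agreement reported above can be computed as an intersection-over-union of binary movement masks. The following minimal Python sketch assumes per-frame binary annotations (1 = head movement, 0 = still); the function name and toy data are illustrative and are not taken from the paper.

import numpy as np

def jaccard_index(pred, truth):
    # Jaccard index (intersection over union) between two binary
    # per-frame movement masks: 1 = movement, 0 = still.
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy example: model output vs. manual annotation for a 10-frame clip
pred = np.array([0, 1, 1, 1, 0, 0, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 0, 0, 0, 1, 1, 1, 0])
print(f"J = {jaccard_index(pred, truth):.3f}")  # J = 0.667

A J of 1.0 would mean the detected movement intervals coincide exactly with the manual annotations, which is why values such as J_test = 0.865 indicate close but imperfect overlap.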
Computer Vision-Driven Movement Annotations to Advance fNIRS Pre-Processing Algorithms / Bizzego, Andrea; Carollo, Alessandro; Senay, Burak; Fong, Seraphina; Furlanello, Cesare; Esposito, Gianluca. - In: SENSORS. - ISSN 1424-8220. - 24:21(2024). [10.3390/s24216821]

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/438914