Emotion recognition through AI based approaches using multidomain physiological integrated data / Fruet, Damiano. - (2023 Dec 18).

Emotion recognition through AI based approaches using multidomain physiological integrated data

Fruet, Damiano
2023-12-18

Abstract

This PhD thesis addresses the challenge of emotion assessment through the integration of multimodal physiological and eye-tracking data using artificial intelligence approaches. The study aims to meet the critical needs of emotion assessment, including algorithm development and the creation of a database suited to this specific purpose. To accomplish this, an experimental procedure was designed and implemented to collect a comprehensive dataset. This dataset incorporates a wide range of physiological parameters, including ECG, PPG, EDA, respiratory signals, eye-tracking data, and protocol-related information, and forms a robust foundation for research in both engineering and cognitive science. The experimental protocol was carefully designed to ensure data quality, covering hardware and software configurations, environmental control, stimulus duration, and the minimization of external interference. Notably, a set of codified images was chosen as the elicitation medium for emotional responses, deviating from conventional paradigms and opening new avenues for exploration. In addition to the objective ratings of the stimuli, subjective evaluations of arousal and valence were collected to complement the dataset, further enhancing its usability. In the context of emotion recognition through physiological signals, the methodology employed for emotion classification and data processing plays a pivotal role. This study explored two primary methodologies for classifying emotions: machine learning and deep learning. For each approach, novel algorithms, related mainly to time-domain signal processing, were developed. A novel methodology called inter-subject normalization was introduced, showing its potential to reduce individual variability and enhance the interpretability of physiological data. This normalization approach was applied differently to accommodate the specific requirements of the machine learning and deep learning paradigms. Furthermore, the study compares the results obtained from machine learning and deep learning, revealing the strengths and weaknesses of each approach. Machine learning models offered transparency and interpretability in feature extraction, making them valuable for understanding the relationships between physiological signals and emotions, but they struggled with complex patterns. In contrast, deep learning models excelled at handling intricate patterns, working directly with raw data, and adapting to diverse emotional states, although at the cost of greater computational demands and reduced interpretability. Overall, the classification tasks yielded promising results in terms of accuracy, precision, sensitivity, and specificity for both methodologies. These findings highlight the value of the proposed methodologies for emotion recognition tasks and their potential in various applications, from healthcare to psychology. However, improvements could be made with larger datasets, more complex neural networks, and enhanced interpretability. The choice of method should consider the specific task requirements and the need for precision, sensitivity, and specificity in emotion classification. Deeper analysis of the methods and algorithms, as well as dataset expansion, may be needed to better represent the diversity of emotional responses in the broader population.
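The abstract names inter-subject normalization as a way to reduce individual variability in physiological features, but it does not specify the exact procedure. The Python sketch below is a minimal illustration only, assuming per-subject z-scoring of extracted features; the function name normalize_per_subject and the scheme itself are hypothetical and are not taken from the thesis.

import numpy as np

def normalize_per_subject(features, subject_ids):
    """Hypothetical per-subject z-score normalization (one plausible form
    of inter-subject normalization, not the thesis's actual algorithm).

    features    : (n_samples, n_features) array of signal-derived features
    subject_ids : (n_samples,) array mapping each sample to its subject

    Each subject's samples are standardized against that subject's own mean
    and standard deviation, so between-subject baseline differences are
    reduced before the features are passed to a classifier.
    """
    normalized = np.empty_like(features, dtype=float)
    for subject in np.unique(subject_ids):
        mask = subject_ids == subject
        mu = features[mask].mean(axis=0)
        sigma = features[mask].std(axis=0)
        sigma[sigma == 0] = 1.0  # guard against constant features
        normalized[mask] = (features[mask] - mu) / sigma
    return normalized

# Usage example with synthetic data: two subjects with different baselines
# end up on a common scale (per-feature mean close to zero).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = np.vstack([70 + 5 * rng.standard_normal((20, 3)),   # subject A
                       85 + 8 * rng.standard_normal((20, 3))])  # subject B
    ids = np.array([0] * 20 + [1] * 20)
    print(normalize_per_subject(feats, ids).mean(axis=0))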
18-Dec-2023
XXXV
2023-2024
Industrial Engineering (29/10/12-)
Materials, Mechatronics and Systems Engineering
Nollo, Giandomenico
De Cecco, Mariolino
no
English
Sector ING-INF/06 - Electronic and Informatics Bioengineering

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/403334
