A multi-view learning approach to deception detection / Carissimi, N.; Beyan, C.; Murino, V. - (2018), pp. 599-606. (Paper presented at the 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018, held at the Grand Dynasty Culture Hotel, China, in 2018) [10.1109/FG.2018.00095].

A multi-view learning approach to deception detection

Beyan C.;
2018-01-01

Abstract

Automatic deception detection has recently gained momentum thanks to advances in computer vision, computational linguistics, and machine learning. The majority of work in this area has focused on written deception and the analysis of verbal features. However, according to psychology, people display various nonverbal behavioral cues, in addition to verbal ones, while lying. It is therefore important to utilize additional modalities, such as video and audio, to detect deception accurately. When multi-modal data were used for deception detection, previous studies concatenated all verbal and nonverbal features into a single vector. Such concatenation might not be meaningful, because different feature groups can have different statistical properties, leading to lower classification accuracy. Following this intuition, we apply, for the first time in deception detection, a multi-view learning (MVL) approach in which each view corresponds to a feature group. This yields improved classification results over state-of-the-art methods. Additionally, we show that the optimized parameters of the MVL algorithm give insights into the contribution of each feature group to the final results, revealing the importance of each feature group and eliminating the need for a separate feature-selection step. Finally, we focus on analyzing low-level (rather than hand-crafted) face-based features extracted using various pre-trained Deep Neural Networks (DNNs), showing that the face is the most important nonverbal cue for deception detection.
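The abstract's core argument — that naively concatenating feature groups with different statistical properties can hurt classification, while treating each group as a separate view can help — can be illustrated with a minimal sketch. This is not the paper's MVL algorithm; the synthetic data, the per-view logistic-regression classifiers, and the accuracy-based view weights below are all illustrative assumptions, standing in for the learned view weights the paper inspects for feature-group importance.

```python
# Illustrative sketch only: two synthetic "views" (feature groups) with very
# different scales, compared under naive concatenation vs. a simple per-view
# weighted combination. Not the paper's actual method or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
n = 400
y = rng.randint(0, 2, n)

# Hypothetical feature groups: an informative, unit-scale "face" view and a
# noisy, large-scale "verbal" view (scales chosen only to differ sharply).
view_face = y[:, None] * 1.0 + rng.normal(scale=1.0, size=(n, 5))
view_verbal = y[:, None] * 50.0 + rng.normal(scale=500.0, size=(n, 3))

Xtr_f, Xte_f, Xtr_v, Xte_v, ytr, yte = train_test_split(
    view_face, view_verbal, y, test_size=0.5, random_state=0)

# Baseline: concatenate all feature groups into a single vector.
concat = LogisticRegression(max_iter=1000).fit(np.hstack([Xtr_f, Xtr_v]), ytr)
acc_concat = concat.score(np.hstack([Xte_f, Xte_v]), yte)

# Simple multi-view scheme: one classifier per view, combined with weights
# derived from training accuracy. The normalized weights play the role of the
# per-view importance scores discussed in the abstract.
clf_f = LogisticRegression(max_iter=1000).fit(Xtr_f, ytr)
clf_v = LogisticRegression(max_iter=1000).fit(Xtr_v, ytr)
w = np.array([clf_f.score(Xtr_f, ytr), clf_v.score(Xtr_v, ytr)])
w = w / w.sum()
proba = w[0] * clf_f.predict_proba(Xte_f)[:, 1] + w[1] * clf_v.predict_proba(Xte_v)[:, 1]
acc_mv = float(((proba > 0.5).astype(int) == yte).mean())

print(f"concatenation acc: {acc_concat:.3f}  multi-view acc: {acc_mv:.3f}  view weights: {w.round(2)}")
```

The view weights `w` make the contribution of each feature group explicit, which is the interpretability property the abstract highlights as a side benefit of the MVL formulation.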
2018
Proceedings - 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018
345 E 47TH ST, NEW YORK, NY 10017 USA
Institute of Electrical and Electronics Engineers Inc.
978-1-5386-2335-0
Carissimi, N.; Beyan, C.; Murino, V.
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11572/298049
Warning: the displayed data have not been validated by the university.

Citations
  • PubMed Central: not available
  • Scopus: 12
  • Web of Science: 11