Decoding top-down signals in MEG source-space / Baldauf, Daniel. - (2021). (Talk presented at the NonInvasive Mathematics online INDAM Workshop, held in Genova, Italy, 13-16 April 2021).
Decoding top-down signals in MEG source-space
Baldauf, Daniel
2021-01-01
Abstract
When trying to understand the neural mechanisms of the human brain, one essential component is to decipher the neuronal representation of information: when and where in the brain task-relevant information is encoded. To accomplish this, we apply machine learning and decoding techniques to source-reconstructed MEG signals. Particularly challenging is the decoding of internally generated signals, which orchestrate high-level functions such as memory, attention, and planning, because these internally generated signals are often not well aligned in time across trials. This misalignment of the underlying neural processes poses great technical challenges. Here we present two recent studies. In the first experiment, we aimed to decode how auditory attention is deployed to relevant information in complex naturalistic scenes, in which relevant and irrelevant information cannot simply be dissociated based on low-level features. Rather, in these naturalistic auditory scenes ‘auditory objects’ have to be parsed and selected, and the neural mechanisms of such object-based auditory attention remain elusive. We therefore used a linear classifier to determine, from the cortical distribution of source-reconstructed oscillatory activity, which auditory stream was currently attended. The top-down control of auditory attention could be decoded in the frequency domain, suggesting that alpha oscillations may support the top-down control of object-based auditory attention in complex naturalistic scenes. In a second study, we decoded internal signals related to visual imagery (in the absence of any sensory input) while participants, in a completely self-determined manner, imagined either a familiar face or a familiar place. By applying a novel decoding technique based on spatial covariance matrix representations, we can accurately classify the imagined content and describe its temporal evolution.
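
As a rough, hypothetical illustration of the two decoding approaches mentioned above, the Python sketch below shows (i) a cross-validated linear classifier applied to trial-wise alpha-band source power to recover the attended auditory stream, and (ii) a covariance-based pipeline that classifies imagery content from trial-wise spatial covariance matrices via a Riemannian tangent-space projection. The abstract does not name specific software or data formats; scikit-learn and pyRiemann are our own choices for illustration, and all array names, shapes, and the synthetic placeholder data are assumptions made for the example.

# Hedged sketch, not the authors' code: generic versions of the two decoding
# schemes described in the abstract, run on synthetic placeholder data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from pyriemann.estimation import Covariances          # pip install pyriemann
from pyriemann.tangentspace import TangentSpace

rng = np.random.default_rng(0)
n_trials, n_sources, n_times = 120, 64, 200

# --- (i) attended auditory stream from alpha-band source power ---
# Assumed input: one alpha-power value per cortical source and trial,
# plus a label indicating which stream was attended on that trial.
alpha_power = rng.gamma(shape=2.0, size=(n_trials, n_sources))   # placeholder
attended = rng.integers(0, 2, size=n_trials)                     # 0 = stream A, 1 = stream B

linear_clf = make_pipeline(
    StandardScaler(),                                 # z-score each source
    LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc_attention = cross_val_score(linear_clf, alpha_power, attended, cv=cv)
print(f"attention decoding accuracy: {acc_attention.mean():.2f}")

# --- (ii) imagery content from spatial covariance matrices ---
# Assumed input: source-space time courses per trial; labels code face vs. place.
epochs = rng.standard_normal((n_trials, n_sources, n_times))     # placeholder
imagined = rng.integers(0, 2, size=n_trials)                     # 0 = face, 1 = place

cov_clf = make_pipeline(
    Covariances(estimator="oas"),     # trial-wise spatial covariance (SPD) matrices
    TangentSpace(metric="riemann"),   # map SPD matrices to a Euclidean tangent space
    LogisticRegression(max_iter=1000),
)
acc_imagery = cross_val_score(cov_clf, epochs, imagined, cv=cv)
print(f"imagery decoding accuracy: {acc_imagery.mean():.2f}")

To describe the temporal evolution of the imagined content, as mentioned in the abstract, the covariance pipeline in (ii) could for instance be re-fit on short sliding windows of the trial time courses; the details of the actual analysis are not specified in the abstract.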