Neural Dynamics of Cross-Modal Numerosity Integration in Human Newborns / Buiatti, Marco; Eccher, Elena; Petrizzo, Irene; Pradal, Ugo; Taddei, Fabrizio; Vallortigara, Giorgio; Izard, Veronique; Piazza, Manuela. - (2025). (Paper presented at the SINS2025 conference held in Pisa, Italy, 10-13 September 2025).
Neural Dynamics of Cross-Modal Numerosity Integration in Human Newborns
Buiatti, Marco; Eccher, Elena; Petrizzo, Irene; Vallortigara, Giorgio; Piazza, Manuela
2025-01-01
Abstract
From the very first hours of life, human newborns are immersed in rich multisensory environments and must rapidly learn to integrate information across distinct sensory modalities. Cross-modal integration operates not only at the perceptual level, linking temporally synchronous low-level features such as shape, texture, and sound, but also at higher levels of abstraction, supporting the recognition of complex, modality-independent representations. A striking example is the study of Izard et al. (2009), which demonstrated that newborn infants spontaneously associate slowly moving visual arrays of objects with auditory sequences of syllables on the basis of numerosity. The neural dynamics underlying such early, abstract, and asynchronous cross-modal processing remain unknown. To address this question, we recorded high-density EEG from human newborns (0-3 days old) while they viewed visual arrays of objects and were simultaneously exposed to auditory sequences of spoken syllables that were either numerically congruent or incongruent with the arrays. To track visual processing independently of auditory processing, visual arrays were presented in a frequency-tagging design with low-frequency sinusoidal modulation (0.8 Hz), temporally asynchronous with the auditory stimulation. Our results show that in the incongruent condition, EEG responses over posterior and central regions were robustly entrained by the visual stimulation, with minimal auditory-evoked activity. In contrast, in the congruent condition, while phase-locking to the visual stimulation persisted in occipital areas, the amplitude of visual entrainment was largely suppressed in central regions, where auditory-evoked responses, indexed by increased theta-band event-related spectral perturbation, were significantly enhanced. These findings reveal that when numerical information is congruent across modalities, newborns engage in joint processing of both sensory streams, even when these are asynchronous, indicating a remarkably early and flexible capacity for abstract cross-modal integration.
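
The abstract does not specify the analysis pipeline, but a minimal sketch of the kind of frequency-tagging measure it refers to might look as follows. The sampling rate, recording length, neighbouring-bin SNR definition, and the simulated EEG are all illustrative assumptions, not the authors' method.

    import numpy as np

    # Illustrative sketch (not the study's pipeline): quantify entrainment
    # at the 0.8 Hz visual tagging frequency as the spectral amplitude at
    # the tag relative to neighbouring frequency bins, a common
    # SSVEP-style signal-to-noise measure. The EEG here is simulated.
    fs = 250.0              # sampling rate in Hz (assumed)
    tag_freq = 0.8          # visual frequency-tagging rate from the design
    duration = 60.0         # seconds of continuous EEG (assumed)
    n_samples = int(fs * duration)
    t = np.arange(n_samples) / fs

    # Simulated single channel: weak 0.8 Hz entrained component in noise
    rng = np.random.default_rng(0)
    eeg = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0, 1.0, n_samples)

    # Amplitude spectrum and frequency axis
    spectrum = np.abs(np.fft.rfft(eeg)) / n_samples
    freqs = np.fft.rfftfreq(n_samples, d=1 / fs)

    # SNR at the tag: amplitude at 0.8 Hz divided by the mean amplitude of
    # surrounding bins, excluding the two bins adjacent to the tag
    tag_bin = np.argmin(np.abs(freqs - tag_freq))
    neighbours = np.r_[tag_bin - 12 : tag_bin - 2, tag_bin + 3 : tag_bin + 13]
    snr = spectrum[tag_bin] / spectrum[neighbours].mean()
    print(f"Amplitude at {freqs[tag_bin]:.2f} Hz: {spectrum[tag_bin]:.3f}, SNR: {snr:.1f}")

In practice such a measure would be computed per channel and condition, and the theta-band event-related spectral perturbation mentioned in the abstract would require a separate time-frequency decomposition (e.g., Morlet wavelets) time-locked to the auditory stimuli.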



