
Thompson, J. A. F.; Sheahan, H.; Dumbalska, T.; Sandbrink, J. D.; Piazza, M.; Summerfield, C. Zero-shot counting with a dual-stream neural network model. Neuron 112(24), pp. 4147–4158 (2024). ISSN 1097-4199. Electronic. DOI: 10.1016/j.neuron.2024.10.008

Zero-shot counting with a dual-stream neural network model

Piazza, M. (second-to-last author)
2024-01-01

Abstract

To understand a visual scene, observers need to both recognize objects and encode relational structure. For example, a scene comprising three apples requires the observer to encode concepts of "apple" and "three." In the primate brain, these functions rely on dual (ventral and dorsal) processing streams. Object recognition in primates has been successfully modeled with deep neural networks, but how scene structure (including numerosity) is encoded remains poorly understood. Here, we built a deep learning model, based on the dual-stream architecture of the primate brain, which is able to count items "zero-shot," even if the objects themselves are unfamiliar. Our dual-stream network forms spatial response fields and lognormal number codes that resemble those observed in the macaque posterior parietal cortex. The dual-stream network also makes successful predictions about human counting behavior. Our results provide evidence for an enactive theory of the role of the posterior parietal cortex in visual scene understanding.
Files in this record:

2024_Thompson_Zero-shot counting with a dual-stream neural network model_Neuron2024.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 3.96 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/442834
Citations:
  • PubMed Central: not available
  • Scopus: 1
  • Web of Science: 1
  • OpenAlex: not available