Large language models show human-like content biases in transmission chain experiments

Acerbi, Alberto (first author); Stubbersfield, Joseph M.

2023-01-01

Abstract

As the use of large language models (LLMs) grows, it is important to examine whether they exhibit biases in their output. Research in cultural evolution, using transmission chain experiments, demonstrates that humans have biases to attend to, remember, and transmit some types of content over others. Here, in five preregistered experiments using material from previous studies with human participants, we use the same transmission-chain-like methodology and find that the LLM ChatGPT-3 shows biases analogous to those of humans, favoring content that is gender-stereotype-consistent, social, negative, threat-related, and biologically counterintuitive over other content. The presence of these biases in LLM output suggests that such content is widespread in its training data and could have consequential downstream effects by magnifying preexisting human tendencies toward cognitively appealing, but not necessarily informative or valuable, content.
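
The transmission-chain design iterates one retelling step: a story is given to the model, the model's retelling becomes the input to the next generation, and the content that survives across generations is coded for the biases above. The sketch below shows that loop in minimal form, assuming the OpenAI Python client; the model name, prompt wording, and chain length are illustrative assumptions, not the authors' exact protocol.

```python
# Hypothetical sketch of a transmission-chain-like loop with an LLM.
# The model name, prompt wording, and chain length are illustrative
# assumptions, not the authors' exact experimental protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def transmission_chain(story: str, generations: int = 3,
                       model: str = "gpt-3.5-turbo") -> list[str]:
    """Pass a story through repeated retellings, one fresh API call per step."""
    chain = [story]
    for _ in range(generations):
        response = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": "Please retell the following story to someone "
                           "else, in your own words:\n\n" + chain[-1],
            }],
        )
        # The model's retelling becomes the input to the next generation.
        chain.append(response.choices[0].message.content)
    return chain
```

Because each call is stateless, every generation sees only the previous retelling, mirroring how a human participant in a transmission chain receives only the previous participant's version; comparing which details survive across the chain (e.g., negative vs. neutral, social vs. non-social) is how content biases are measured.
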
Year: 2023
Issue: 44
Authors: Acerbi, Alberto; Stubbersfield, Joseph M.

Citation: Large language models show human-like content biases in transmission chain experiments / Acerbi, Alberto; Stubbersfield, Joseph M. - In: PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA. - ISSN 0027-8424. - 120:44 (2023). [10.1073/pnas.2313790120]
Files in this record:

acerbi_stubbersfield.pdf
  Access: open access
  Type: Non-refereed preprint
  License: Creative Commons
  Size: 4.93 MB
  Format: Adobe PDF

acerbi-stubbersfield-2023-large-language-models-show-human-like-content-biases-in-transmission-chain-experiments.pdf
  Access: open access
  Type: Publisher's layout
  License: Creative Commons
  Size: 773.76 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/395330
Citations
  • PubMed Central: 6
  • Scopus: 24
  • Web of Science: 13
  • OpenAlex: not available