
Multilingual Political Views of Large Language Models: Identification and Steering / Gurgurov, Daniil; Trinley, Katharina; Vykopal, Ivan; Van Genabith, Josef; Ostermann, Simon; Zamparelli, Roberto. - ELETTRONICO. - (2025). ( IJCNLP17 Mumbai, India 20-24 December 2025).

Multilingual Political Views of Large Language Models: Identification and Steering

Roberto Zamparelli
2025-01-01

Abstract

Large language models (LLMs) are increasingly used in everyday tools and applications, raising concerns about their potential influence on political views. While prior research has shown that LLMs often exhibit measurable political biases, frequently skewing toward liberal or progressive positions, key gaps remain. Most existing studies evaluate only a narrow set of models and languages, leaving open questions about the generalizability of political biases across architectures, scales, and multilingual settings. Moreover, few works examine whether these biases can be actively controlled. In this work, we address these gaps through a large-scale study of political orientation in modern open-source instruction-tuned LLMs. We evaluate seven models, including LLaMA-3.1, Qwen-3, and Aya-Expanse, across 14 languages using the Political Compass Test, with 11 semantically equivalent paraphrases per statement to ensure robust measurement. Our results reveal that larger models consistently shift toward libertarian-left positions, with significant variation across languages and model families. To test the manipulability of political stances, we apply a simple center-of-mass activation intervention and show that it reliably steers model responses toward alternative ideological positions across multiple languages. Our code is publicly available at https://github.com/d-gurgurov/Political-Ideologies-LLMs.
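The "center-of-mass activation intervention" mentioned in the abstract can be read as a difference-in-means steering approach: compute the mean (center of mass) of hidden activations for two sets of prompts, take the difference as a steering direction, and add it to hidden states at inference time. The sketch below illustrates that idea under this assumption; all function and variable names are illustrative and are not taken from the paper's released code.

```python
# Minimal sketch of difference-of-means ("center-of-mass") activation
# steering, assuming the intervention works on per-token hidden vectors.
# Names (mean_vector, steering_vector, steer, alpha) are hypothetical.

def mean_vector(vectors):
    """Element-wise mean (center of mass) of a list of activation vectors."""
    n = len(vectors)
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def steering_vector(acts_a, acts_b):
    """Direction from the center of mass of stance B to that of stance A."""
    mu_a = mean_vector(acts_a)
    mu_b = mean_vector(acts_b)
    return [a - b for a, b in zip(mu_a, mu_b)]

def steer(hidden, direction, alpha=1.0):
    """Shift a hidden state along the steering direction at inference time."""
    return [h + alpha * d for h, d in zip(hidden, direction)]

# Toy 3-dimensional example: two activation clusters for opposing stances.
acts_a = [[1.0, 0.0, 0.0], [3.0, 0.0, 0.0]]   # stance A activations
acts_b = [[0.0, 1.0, 0.0], [0.0, 3.0, 0.0]]   # stance B activations
v = steering_vector(acts_a, acts_b)            # [2.0, -2.0, 0.0]
steered = steer([0.5, 0.5, 0.5], v, alpha=0.5)
```

In practice the activations would come from a chosen transformer layer (e.g. via forward hooks), and `alpha` controls the strength of the ideological shift; this sketch only shows the arithmetic of the technique.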
2025
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Mumbai, India
The Asian Federation of Natural Language Processing and The Association for Computational Linguistics
979-8-89176-303-6
Sector ING-INF/05 - Information Processing Systems
Gurgurov, Daniil; Trinley, Katharina; Vykopal, Ivan; Van Genabith, Josef; Ostermann, Simon; Zamparelli, Roberto
Files in this record:

File: 2025.findings-ijcnlp.17.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 20.02 MB
Format: Adobe PDF

File: 2025.findings-ijcnlp.17_compressed.pdf
Access: open access
Type: Publisher's version (publisher's layout)
License: Creative Commons
Size: 849.26 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11572/474390