Please use this identifier to cite or link to this item:
https://doi.org/10.21256/zhaw-30246
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jenny, David F. | - |
dc.contributor.author | Billeter, Yann | - |
dc.contributor.author | Sachan, Mrinmaya | - |
dc.contributor.author | Schölkopf, Bernhard | - |
dc.contributor.author | Jin, Zhijing | - |
dc.date.accessioned | 2024-03-15T15:48:45Z | - |
dc.date.available | 2024-03-15T15:48:45Z | - |
dc.date.issued | 2023-11-15 | - |
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/30246 | - |
dc.description.abstract | The rapid advancement of Large Language Models (LLMs) has sparked intense debate regarding their ability to perceive and interpret complex socio-political landscapes. In this study, we undertake an exploration of decision-making processes and inherent biases within LLMs, exemplified by ChatGPT, specifically contextualizing our analysis within political debates. We aim not to critique or validate LLMs' values, but rather to discern how they interpret and adjudicate "good arguments." By applying Activity Dependency Networks (ADNs), we extract the LLMs' implicit criteria for such assessments and illustrate how normative values influence these perceptions. We discuss the consequences of our findings for human-AI alignment and bias mitigation. | de_CH |
dc.format.extent | 27 | de_CH |
dc.language.iso | en | de_CH |
dc.publisher | arXiv | de_CH |
dc.rights | http://creativecommons.org/licenses/by/4.0/ | de_CH |
dc.subject | Computation and language | de_CH |
dc.subject | Artificial intelligence | de_CH |
dc.subject | Social network | de_CH |
dc.subject | Information network | de_CH |
dc.subject | Large language model | de_CH |
dc.subject.ddc | 006: Special computer methods | de_CH |
dc.title | Navigating the ocean of biases : political bias attribution in language models via causal structures | de_CH |
dc.type | Working paper – expert report – study | de_CH |
dcterms.type | Text | de_CH |
zhaw.departement | School of Engineering | de_CH |
zhaw.organisationalunit | Centre for Artificial Intelligence (CAI) | de_CH |
dc.identifier.doi | 10.48550/arXiv.2311.08605 | de_CH |
dc.identifier.doi | 10.21256/zhaw-30246 | - |
zhaw.funding.eu | No | de_CH |
zhaw.originated.zhaw | Yes | de_CH |
zhaw.webfeed | Intelligent Vision Systems | de_CH |
zhaw.author.additional | No | de_CH |
zhaw.display.portrait | Yes | de_CH |
zhaw.relation.references | https://github.com/david-jenny/LLM-Political-Study | de_CH |
Appears in collections: Publications School of Engineering
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
2023_Jenny-etal_Political-bias-attribution-in-language-models.pdf | | 3.17 MB | Adobe PDF |
Jenny, D. F., Billeter, Y., Sachan, M., Schölkopf, B., & Jin, Z. (2023). Navigating the ocean of biases : political bias attribution in language models via causal structures. arXiv. https://doi.org/10.48550/arXiv.2311.08605
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.