Friday, 9 February 2024, 2pm–4pm (CET)
Uwe Peters (Utrecht)
Hasty (Algorithmic) Generalizations: A Systematic Analysis of Chatbot Science Communication
Large language models (LLMs) such as ChatGPT have extensive potential as science communicators because they can provide laypeople, governments, and policymakers with easily understandable explanations of scientific findings, thus helping to increase science literacy worldwide. However, it remains unclear whether LLM summaries of scientific texts capture the uncertainties, limitations, and nuances of research, or whether they oversimplify, omitting qualifiers or quantifiers present in the original texts. The omission of qualifiers may result in generalizations of scientific findings that are much broader than warranted by the original research, potentially raising significant ethical and epistemic problems (e.g., human users may misinterpret scientific findings). Yet the scope and accuracy of the generalizations that LLMs produce in their science communication have not been systematically explored. Building (inter alia) on recent philosophical work on generics, we therefore statistically compared the generalizations found in 200 human summaries of scientific texts (i.e., abstracts of scientific articles) with the corresponding 200 summaries produced by four leading LLMs (incl. ChatGPT 3.5 and ChatGPT 4). This talk presents the preliminary (disconcerting) results of our analyses and highlights their normative implications.
To receive the link to the webinar, please fill out this form (if possible, using your institutional academic email address):
https://forms.gle/vUwADBM3tHVhZZvQ6
All are welcome!