Collaborative Colloquium Talk Series: Dr. Michal Kosinski


9:30 am-10:30 am


Malachowsky Hall 7200
1889 Museum Road
Gainesville, FL 32611


Biography of Speaker: Prof. Michal Kosinski is a Professor at Stanford University. His research interests encompass both human and artificial cognition. His current work centers on examining the psychological processes in Large Language Models (LLMs) and on leveraging Artificial Intelligence (AI), Machine Learning (ML), and Big Data to model and predict human behavior. Michal has co-authored Modern Psychometrics (a popular textbook) and published over 100 peer-reviewed papers in leading journals, including Nature Scientific Reports, Proceedings of the National Academy of Sciences, Psychological Science, Journal of Personality and Social Psychology, and Machine Learning, which have been cited over 21,000 times. He is among the top 1% of Highly Cited Researchers according to Clarivate. His research inspired a cover of The Economist, a 2014 theatre play (“Privacy”), multiple TED talks, and a video game, and has been discussed in thousands of books, press articles, podcasts, and documentaries. Michal was behind the first press article warning against Cambridge Analytica; his research exposed the privacy risks it exploited and measured the efficiency of its methods. He holds a doctorate in psychology from the University of Cambridge and master’s degrees in psychometrics and social psychology. He has worked as a post-doctoral scholar in Stanford’s Computer Science Department, as Deputy Director of the University of Cambridge Psychometrics Centre, and as a researcher at Microsoft Research (Machine Learning Group).

Title of the Talk: Emergent Cognitive Abilities in Large Language Models: Mirage, Miracle, or Mundane?

Abstract: Large Language Models (LLMs) trained to predict the next word in a sentence surprised their creators by displaying emergent properties ranging from a proclivity for racist and sexist output to an ability to write computer code, translate between languages, and solve mathematical tasks. This talk discusses the results of several studies evaluating LLMs’ performance on tasks typically used to study mental processes in humans. The findings indicate that, with increases in model size and linguistic dexterity, LLMs show a growing capacity to navigate false-belief scenarios, sidestep semantic illusions, and tackle cognitive reflection tasks. The talk will explore several possible interpretations of these findings, including the intriguing possibility that theory of mind and System 2 thinking may have spontaneously emerged as a byproduct of LLMs’ improving language skills.


Hosted by

Department of CISE: Dr. Sonja Schmer-Galunder & Department of Psychology: Dr. Gregory Webster