Scientists Get Serious About Large Language Models Mirroring Human Thinking

Research combining human brain imaging and psychology with hardcore computer science studies of LLMs at work

Here I present a set of recent papers, preprints, and reviews suggesting that, at least for text processing and procedural reasoning, LLMs work much like the human brain, albeit with substantial differences that scientists are now starting to clarify.

Picture generated by DALL-E 3 via ChatGPT, usable for commercial purposes as indicated at https://openai.com/policies/terms-of-use/

Introduction

The emergence of large language models (LLMs) has spurred considerable interest in their potential to mirror the cognitive processes of the human brain. These complex computational systems demonstrate increasingly sophisticated capabilities in language processing, reasoning, and problem-solving, raising the intriguing question of whether they might operate on principles similar to those governing the human mind. I have covered this idea before, particularly in the context of the “Chinese room argument” and in drawing parallels between how LLMs process text and how we humans learn to speak while interacting with the world and developing reasoning abilities from our daily experiences: