LLMs talk like us, but they don’t think like us. In brains, thinking goes beyond language.
Anna Ivanova’s work vividly illustrates this. Her experiments show that certain forms of reasoning don’t need the language network of the brain.
Subjects with aphasia (severe damage to their language faculties) can not only play chess, they can also reason about the plausibility of situations, expressing surprise, for example, at a picture of a swimmer biting a shark.
Perhaps even more surprisingly, reading computer code scarcely engages the language areas of our brains: programming languages are not processed the same way as natural languages.
These examples show that concepts and cogitation don’t reside only in our language system.
This has implications for how we think about thought. We certainly cannot equate it with language, and it may have no neat correspondence with any one thing. Language may be the best surface mapping or reflection of a deeper, multifaceted set of activities.
Anna also argues that AI needs to progress beyond monolithic language models. Architectural modularity either needs to emerge or be put in by design. The latter is certainly happening: RAG is the modularisation of memory, while ChatGPT’s Code Interpreter (a.k.a. Advanced Data Analysis) corresponds to a specialisation for logical and mathematical manipulation. How well these map to what the brain does … we shall see.
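To make the modularity-by-design point concrete, here is a minimal sketch (all names and the routing heuristic are hypothetical, not any particular system’s implementation) of a central component delegating to specialised modules: a toy retrieval store standing in for RAG-style memory, and a toy calculator standing in for code-interpreter-style logical and mathematical manipulation.

```python
# A minimal sketch of modularity by design: a central router delegates to
# specialised modules instead of one monolithic model doing everything.
# All names here are illustrative; a real system would use a language model
# to decide when to call each module.

from dataclasses import dataclass, field


@dataclass
class RetrievalMemory:
    """Toy stand-in for a RAG memory module: keyword overlap instead of embeddings."""
    documents: list[str] = field(default_factory=list)

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        # Rank documents by how many words they share with the query.
        words = set(query.lower().split())
        ranked = sorted(self.documents,
                        key=lambda d: len(words & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]


class Calculator:
    """Toy stand-in for a code-interpreter module: plain arithmetic only."""

    def run(self, expression: str) -> str:
        allowed = set("0123456789+-*/(). ")
        if not set(expression) <= allowed:
            raise ValueError("only plain arithmetic is supported in this sketch")
        return str(eval(expression))  # acceptable here because of the whitelist above


def answer(query: str, memory: RetrievalMemory, calc: Calculator) -> str:
    """Crude router: arithmetic goes to the calculator, everything else to memory."""
    if any(op in query for op in "+-*/"):
        return calc.run(query)
    context = memory.retrieve(query)
    # A genuine RAG pipeline would feed this retrieved context into a language
    # model's prompt; the sketch simply returns it.
    return " | ".join(context)


if __name__ == "__main__":
    memory = RetrievalMemory(documents=[
        "Sharks bite swimmers far more often than swimmers bite sharks.",
        "Aphasia is an impairment of language, not of reasoning.",
    ])
    calc = Calculator()
    print(answer("who bites whom, sharks or swimmers?", memory, calc))
    print(answer("(17 + 3) * 2", memory, calc))
```

The design choice the sketch illustrates is the separation of concerns: memory and calculation live outside the “language” core, which only decides when to call them.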
Anna’s website has links to her publications, as well as to other podcasts and videos discussing her work.