Neuroscience News: Cognitive Illusion: Why AI Still Can’t Think Like a Human


While the model appears to solve complex cognitive tasks, researchers found it often ignores direct intent, relying instead on statistical “test-taking” strategies. Credit: Neuroscience News


Featured · Neuroscience · Psychology

February 12, 2026

Summary: A major debate in psychology, whether a single theory can explain the entire human mind, recently turned to AI for answers, but new evidence suggests we may be witnessing a digital illusion. While the “Centaur” AI model initially made waves for its ability to simulate human behavior across 160 cognitive tasks, researchers have uncovered evidence of significant overfitting.

Instead of genuinely understanding psychological principles, the model appears to be relying on statistical “test-taking strategies.” This discovery highlights a critical bottleneck in artificial intelligence: the gap between sophisticated data fitting and genuine language comprehension, serving as a warning against treating black-box models as true mirrors of human thought.

Key Facts

  • The Overfitting Trap: Researchers found that “Centaur” didn’t actually process task instructions; when told to “Choose Option A,” it ignored the command and continued picking “correct” answers from its training patterns.
  • Pattern Matching vs. Understanding: The model’s high performance across 160 tasks is likely the result of learning specific answer patterns rather than simulating the underlying cognitive processes of decision-making or executive control.
  • The Language Bottleneck: The study suggests that the most significant barrier to creating a “General Cognitive Model” is not data size, but the model’s inability to capture and respond to the actual intent of language.

Source: Science China Press

In psychology, it has long been debated whether the human mind can be explained by a unified theory or whether each aspect of the mind, e.g., attention or memory, must be studied separately.

Now, artificial intelligence (AI) models are entering the discussion, offering a new way to probe this age‑old question.

In July 2025, Nature published a groundbreaking study introducing an AI model named “Centaur”. Built upon conventional large language models and fine‑tuned with psychological experiment data, this model claimed to accurately simulate human cognitive behavior across 160 tasks covering decision‑making, executive control, and other domains.

The achievement attracted widespread attention and was regarded as potentially signaling AI’s capability to comprehensively simulate human cognition.

However, a recent study published in National Science Open has raised significant doubts about the Centaur model.


The research team from Zhejiang University pointed out that the “human cognitive simulation ability” demonstrated by Centaur is likely a result of overfitting—meaning the model did not genuinely understand the experimental tasks but merely learned answer patterns from the training data.

To validate this perspective, the research team designed multiple testing scenarios. For instance, they replaced the original multiple‑choice question stems, which described specific psychological tasks, with the instruction “Please choose option A”.

In such a scenario, if the model truly understood the task requirement, it should consistently select option A. However, in actual testing, Centaur still chose the “correct answers” from the original question database.

This indicates that the model did not make judgments based on the semantic meaning of the questions but relied on statistical patterns to “guess” the answers—akin to a student achieving high scores through test‑taking strategies without understanding the questions.
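The instruction-override probe described above can be illustrated with a minimal sketch. The model interface below is hypothetical: `overfitted_stub` merely imitates the pattern-matching failure the researchers report, standing in for the actual fine-tuned Centaur model, and the task IDs and memorized answers are invented for illustration.

```python
def overfitted_stub(prompt: str) -> str:
    """Hypothetical stand-in for an overfitted model: it keys on surface
    features (here, a task ID) and replays answers memorized from
    training data instead of reading the instruction."""
    memorized = {"task_001": "C", "task_002": "B"}
    for task_id, answer in memorized.items():
        if task_id in prompt:
            return answer
    return "A"  # falls back to complying only when nothing is memorized


def instruction_override_probe(model, task_id: str) -> bool:
    """Replace the original question stem with a trivial instruction.
    A model that genuinely parses the prompt should always comply;
    a pattern-matcher keeps emitting its memorized 'correct' answer."""
    prompt = f"[{task_id}] Please choose option A."
    return model(prompt) == "A"


# On a trained item the stub fails the probe, mirroring the paper's finding;
# on an unseen item there is no memorized pattern to override the instruction.
print(instruction_override_probe(overfitted_stub, "task_001"))  # False
print(instruction_override_probe(overfitted_stub, "task_999"))  # True
```

A genuinely instruction-following model would pass this probe for every task ID, which is exactly the check the Zhejiang team applied to Centaur.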

This study serves as a reminder to adopt a more cautious approach when evaluating the capabilities of large language models. While large language models are powerful tools for data fitting, their “black‑box” nature makes them prone to issues such as hallucinations and misinterpretations. Only through precise and multi‑faceted evaluations can we determine whether a model genuinely possesses certain professional abilities.

Notably, despite Centaur’s positioning as a “cognitive simulation” model, its most significant shortcoming lies in language comprehension itself: capturing and responding to the intent of the questions. This study also suggests that genuine language understanding may be the most critical technological bottleneck on the path toward building general cognitive models.

Key Questions Answered:

Q: Did AI actually solve the mystery of how the human mind works?

A: Not yet. While the Centaur model claimed to simulate human behavior across 160 cognitive tasks, new testing shows it was essentially “gaming the system.” It wasn’t thinking like a human; it was matching data points like a student memorizing an answer key without reading the textbook.

Q: How did scientists prove the AI was “cheating”?

A: They used a clever “instruction override.” By replacing complex questions with the simple command “Please choose option A,” researchers proved the AI wasn’t listening. The model kept providing answers to the original questions it had seen during training, proving it was blind to the actual meaning of the prompt.

Q: What does this mean for the future of AI in psychology?

A: It serves as a major “caution” sign. It proves that a model can look incredibly “human” on the surface while being completely hollow underneath. Future research must focus on multi-faceted evaluations to ensure AI is genuinely understanding intent, rather than just being a powerful engine for data fitting.


Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this AI and cognition research news

Author: Bei Yan
Source: Science China Press
Contact: Bei Yan – Science China Press
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Can Centaur truly simulate human cognition? The fundamental limitation of instruction understanding” by Wei Liu and Nai Ding. National Science Open
DOI: 10.1360/nso/20250053
