GZERO: Ian Bremmer … Munich Security Conference: Can Europe Stand Alone?
Posted in Uncategorized
GZERO: Are we in an era of “wrecking ball politics”? | Global Stage
Neuroscience News: Cognitive Illusion: Why AI Still Can’t Think Like a Human

While the model appears to solve complex cognitive tasks, researchers found it often ignores direct intent, relying instead on statistical “test-taking” strategies. Credit: Neuroscience News
Featured · Neuroscience · Psychology
·February 12, 2026
Summary: A major debate in psychology, over whether a single theory can explain the entire human mind, recently turned to AI for answers, but new evidence suggests we may be witnessing a digital illusion. While the “Centaur” AI model initially made waves for its ability to simulate human behavior across 160 cognitive tasks, researchers have uncovered evidence of significant overfitting.
Instead of genuinely understanding psychological principles, the model appears to be relying on statistical “test-taking strategies.” This discovery highlights a critical bottleneck in artificial intelligence: the gap between sophisticated data fitting and genuine language comprehension, serving as a warning against treating black-box models as true mirrors of human thought.
Key Facts
- The Overfitting Trap: Researchers found that “Centaur” didn’t actually process task instructions; when told to “Choose Option A,” it ignored the command and continued picking “correct” answers from its training patterns.
- Pattern Matching vs. Understanding: The model’s high performance across 160 tasks is likely the result of learning specific answer patterns rather than simulating the underlying cognitive processes of decision-making or executive control.
- The Language Bottleneck: The study suggests that the most significant barrier to creating a “General Cognitive Model” is not data size, but the model’s inability to capture and respond to the actual intent of language.
Source: Science China Press
In psychology, it has long been debated whether the human mind can be explained by a unified theory or whether each aspect of the mind, e.g., attention and memory, has to be studied separately.
Now, artificial intelligence (AI) models are entering the discussion, offering a new way to probe this age‑old question.
In July 2025, Nature published a groundbreaking study introducing an AI model named “Centaur”. Built upon conventional large language models and fine‑tuned with psychological experiment data, this model claimed to accurately simulate human cognitive behavior across 160 tasks covering decision‑making, executive control, and other domains.
The achievement attracted widespread attention and was regarded as potentially signaling AI’s capability to comprehensively simulate human cognition.
However, a recent study published in National Science Open has raised significant doubts about the Centaur model.
The research team from Zhejiang University pointed out that the “human cognitive simulation ability” demonstrated by Centaur is likely a result of overfitting—meaning the model did not genuinely understand the experimental tasks but merely learned answer patterns from the training data.
To validate this perspective, the research team designed multiple testing scenarios. For instance, they replaced the original multiple‑choice question stems, which described specific psychological tasks, with the instruction “Please choose option A”.
In such a scenario, if the model truly understood the task requirement, it should consistently select option A. However, in actual testing, Centaur still chose the “correct answers” from the original question database.
This indicates that the model did not make judgments based on the semantic meaning of the questions but relied on statistical patterns to “guess” the answers—akin to a student achieving high scores through test‑taking strategies without understanding the questions.
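The override test described above can be sketched as a toy evaluation harness. This is a minimal sketch under stated assumptions, not code or data from the study: `memorizing_model` stands in for a purely pattern-matching model, and the cue strings are invented for illustration.

```python
# Hypothetical sketch of the instruction-override probe: an overfit model
# keys on cues it memorized during training and ignores the instruction.

def memorizing_model(prompt: str, memorized: dict) -> str:
    """Stand-in for an overfit model: returns the memorized answer for any
    training cue found in the prompt, regardless of what the prompt asks."""
    for cue, answer in memorized.items():
        if cue in prompt:
            return answer
    return "A"  # fallback when no memorized cue is present

def instruction_override_probe(model, memorized: dict) -> float:
    """Replace each task description with 'Please choose option A' while
    keeping the original options. A model that actually reads the
    instruction should always answer 'A'; return the compliance rate."""
    outcomes = []
    for cue in memorized:
        prompt = f"Please choose option A.\nOptions: {cue}"
        outcomes.append(model(prompt, memorized) == "A")
    return sum(outcomes) / len(outcomes)

# Invented training cues mapped to their memorized "correct" answers.
memorized = {"gamble_high_vs_safe": "B", "stroop_incongruent": "C"}
compliance = instruction_override_probe(memorizing_model, memorized)
print(compliance)  # 0.0: the overfit model never obeys the instruction
```

A model that genuinely parsed the instruction would score 1.0 on this probe; Centaur's failure to do so is what the Zhejiang team interpreted as overfitting.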
This study serves as a reminder to adopt a more cautious approach when evaluating the capabilities of large language models. While large language models are powerful tools for data fitting, their “black‑box” nature makes them prone to issues such as hallucinations and misinterpretations. Only through precise and multi‑faceted evaluations can we determine whether a model genuinely possesses certain professional abilities.
Notably, despite Centaur’s positioning as a “cognitive simulation” model, its most significant shortcoming lies in language comprehension itself, specifically, in capturing and responding to the intent of the questions. This study also suggests that genuine language understanding may be the most critical technological bottleneck in the path toward building general cognitive models.
Key Questions Answered:
Q: Did AI actually solve the mystery of how the human mind works?
A: Not yet. While the Centaur model claimed to simulate human behavior across 160 cognitive tasks, new testing shows it was essentially “gaming the system.” It wasn’t thinking like a human; it was matching data points like a student memorizing an answer key without reading the textbook.
Q: How did scientists prove the AI was “cheating”?
A: They used a clever “instruction override.” By replacing complex questions with the simple command “Please choose option A,” researchers proved the AI wasn’t listening. The model kept providing answers to the original questions it had seen during training, proving it was blind to the actual meaning of the prompt.
Q: What does this mean for the future of AI in psychology?
A: It serves as a major “caution” sign. It proves that a model can look incredibly “human” on the surface while being completely hollow underneath. Future research must focus on multi-faceted evaluations to ensure AI is genuinely understanding intent, rather than just being a powerful engine for data fitting.
Editorial Notes:
- This article was edited by a Neuroscience News editor.
- Journal paper reviewed in full.
- Additional context added by our staff.
About this AI and cognition research news
Author: Bei Yan
Source: Science China Press
Contact: Bei Yan – Science China Press
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Can Centaur truly simulate human cognition? The fundamental limitation of instruction understanding” by Wei Liu and Nai Ding. National Science Open. DOI: 10.1360/nso/20250053
Tagged ai, artificial-intelligence, chatgpt, philosophy, technology
Axios: Embrace your insecurity
1 big thing: Embrace your insecurity
Illustration: Sarah Grillo/Axios
Axios CEO Jim VandeHei is here with advice on how to use your insecurities to motivate you. For a video version of this column (and to see what Jim looked like as a 20-year-old punk), click here.

I wasn’t a fast, early adopter of AI beyond basic search on ChatGPT, for the same reason many of you have balked at these muscular new tools: I was insecure about my ability to use it with any sophistication.

My internal dialogue: I’m too old, too non-technical, too much of a words guy to make it work. Classic imposter moment: Who am I to think I can do this?

Why it matters: But I used my insecurity as a powerful motivator, much like I did in my early media career when I feared everyone in D.C. was better read, better educated, better connected than I was. (Spoiler: They were!)

I attacked it. I started building things — apps, tools, prototypes — with an AI model as my collaborator. No computer science degree. No coding boot camp. Just curiosity and stubbornness.

And it worked. Not because I suddenly became technical, but because I refused to let the insecurity win.

The big picture: I’ve always assumed my insecurities are actually superpowers if used right. I wrote a Finish Line column on this very topic in 2022. Since then, the science seems to confirm it: Insecurity might be exactly what we need.

The new science of imposter syndrome is striking. MIT Sloan researcher Basima Tewfik ran a lab experiment and found that people experiencing imposter thoughts exerted 13% more effort than their peers when the pressure was on. When they felt overwhelmed, the self-doubt didn’t crush them. It fueled them.

A massive global meta-analysis published last year reviewed 30 studies covering nearly 12,000 people and found that 62% of high-achieving professionals experience imposter syndrome.

That’s not a bug in the human operating system. That’s a feature — if you know how to use it. Think about that: The majority of successful people around you feel like frauds. Most of us do.
The question isn’t whether you have imposter syndrome. It’s whether you’re using it in a healthy way. Here’s a 2026 version of how to leverage your insecurity in an AI world:

Be honest — then get curious. My original advice was to understand your weaknesses. I’d sharpen that point. The people thriving in this AI moment aren’t the ones with the fanciest résumés — they’re the ones willing to say “I don’t know,” then obsessively ask questions until they do.

Attack the weakness daily. When AI hit, I could have done what a lot of people my age did: Delegate it to younger staffers, nod along in meetings, fake fluency. Instead, I spent nights and weekends building things — terrible at first, then less terrible, then actually useful. You’ll be shocked by how quickly persistent effort erodes a limitation you assumed was permanent. The gap between “I can’t do this” and “I just did this” is smaller than you think.

Weaponize the fury. Michael Jordan manufactured slights to fuel his competitive edge. That still works. But in this era, the best target for your fury isn’t a rival or a critic — it’s complacency. The people falling behind right now aren’t the ones who lack talent. They’re the ones who’ve gotten too comfortable. Channel the chip on your shoulder into relentless learning, not just relentless competing.

Give yourself grace — but not an excuse. I still can’t sing. I still can’t dance. I’ll never be great at Trivial Pursuit. At some point, lamenting what you can’t do is wasted energy. But — and this is the key update — don’t confuse “I’m not naturally good at this” with “I can’t learn this.” Those are very different things. The first is self-awareness. The second is a cop-out.

The bottom line: No sane person is as confident as they seem. That global study confirms it: Most high achievers walk around feeling like frauds. The successful people in my life simply accept that … and channel it.

Watch the video … Subscribe to our YouTube channel … Share this column.
Jim’s book on life and leadership — “Just the Good Stuff: No-BS Secrets to Success,” a New York Times bestseller — is coming in paperback on April 21. Preorder here.
Tagged imposter-syndrome, mental-health, mindset, self-improvement, writing
Axios: Meet Sam: a dedicated Amazon area manager and full-time mom. Amazon offers free skills training for hourly employees like Sam.
A MESSAGE FROM AMAZON
Meet Sam: a dedicated Amazon area manager and full-time mom
Career journey: Sam started at Amazon when she was pregnant. With fully paid parental leave, she was able to take the time she needed without missing a beat.
Favorite benefit? “One of the benefits of Amazon is that my maternity leave was so long, and it did not hurt my career at all.”
Learn more.
Colonel Richard Kemp says Civil War is coming to the UK…”much more intensive than Northern Ireland”
Axios: Modernizing Child Welfare Systems with Claude. Quote: “Anthropic’s work with Binti is an important partnership in social services as part of the broader “AI for Good” initiative.”
Binti modernizes child welfare systems with Claude
Industry: Beneficial deployments
Company size: Small
Product: AI Platform
Location: North America
50% faster: Home visit report writing time, from 3-4 hours to under 2 hours
20% reduced timeline: Family licensing timelines cut from 110 days to under 90 days
https://www.youtube-nocookie.com/embed/i9U_b-8KKno
Binti is a technology company helping child welfare agencies modernize how they support children and families—so that every child can have a family. The company supports over 550 agencies across 36 states, serving 47% of children in care. Binti partners with state, county, and private child welfare agencies to streamline complex child welfare processes. By integrating Claude into their platform, Binti helps social workers make more informed decisions and spend less time on administrative work, so they can focus on connecting with families and driving better outcomes for children.
Anthropic’s work with Binti is an important partnership in social services as part of the broader “AI for Good” initiative.
Measurable impact for child welfare agencies
Binti has driven significant impact for the more than 550 child welfare agencies and more than 12,000 social workers it supports. Binti has helped child welfare agencies:
- Reduce the time spent on administrative work by 20-40%
- Increase the number of foster & adoptive families approved annually by ~30%
- Reduce the number of days to approve foster and adoptive families by ~18%
- Approve over 100,000 foster and adoptive families total
Now, with Claude, Binti is able to drive even greater impact for agencies. Social workers using Binti’s AI tools have reported a 50% reduction in home visit writing time—from 3-4 hours to under 2 hours—with some seeing time savings as high as 75%.
Critical work slowed by overwhelming administrative burden
Child welfare represents some of society’s most important work—ensuring children have safe, permanent homes. Yet the systems designed to help these children often lead to social workers spending half or more of their time on administrative tasks instead of directly with children and families. There is a nationwide shortage of social workers, and turnover is exacerbated by staff frustration with outdated tools—70-column Excel spreadsheets, paper forms, sticky notes, and 20-year-old legacy government systems—leading to vital information slipping through the cracks, social workers burning out, and suboptimal outcomes for children.
This administrative burden created real challenges. Felicia Curcuru, co-founder and CEO of Binti, said, “Social workers spend hours interviewing a family in person. Then they have to go back to the office and spend 6-8 hours writing their notes and assessment of the interview. Social workers got into this work to work with children and families, but they spend 50% or more of their time doing administrative work.”
AI-powered features that transform daily workflows
Binti integrated Claude to power three core features that reduce administrative burden, help social workers make more informed decisions, and ensure they’re able to support all families:
- Binti Form and Case Note Completion: Social workers can record family meetings or upload handwritten notes, and they’ll receive AI-drafted documentation ready for review. Workers can upload recordings immediately after home visits—even from their cars—and return to the office with drafts already prepared, reducing report writing time from 3-4 hours to under 2 hours.
- Chat with Binti AI: Workers can instantly query case information using natural language instead of searching through hundreds of pages of documents. The AI cites its sources for every answer and includes feedback mechanisms for continuous improvement.
- Binti Translate: Families can complete forms in their preferred language, with responses automatically translated for caseworkers. This reduces interpretation costs while ensuring non-English-speaking families can fully participate in the process.
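The “cites its sources” design in Chat with Binti AI can be made concrete with a minimal retrieval sketch. This is an invented illustration, not Binti’s implementation: the function name, the document IDs, and the sample notes are all assumptions.

```python
# Illustrative sketch of citation-grounded retrieval: every answer snippet
# carries the ID of the source document it came from, so nothing is
# presented without a citation.

def search_case_notes(query_terms, case_notes):
    """Return snippets matching any query term, each tagged with its
    source document ID so the answer can cite where it came from."""
    hits = []
    for doc_id, text in case_notes.items():
        if any(term.lower() in text.lower() for term in query_terms):
            hits.append({"source": doc_id, "snippet": text})
    return hits

# Invented sample case notes keyed by document ID.
case_notes = {
    "visit-2025-03-01": "Home visit completed; smoke detectors installed.",
    "intake-2025-01-15": "Family of four, two bedrooms available.",
}
answers = search_case_notes(["smoke"], case_notes)
print(answers[0]["source"])  # visit-2025-03-01
```

In a production system an LLM would sit on top, quoting the retrieved snippets; the keyword match here just makes the source-tagging pattern concrete.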
Curcuru noted, “The technical integration with Claude was straightforward and user friendly for our engineering team. It allowed us to quickly launch AI-powered features, get them into the hands of social workers, and gather early, meaningful feedback.”
Selecting Claude for security and human-centered AI in child welfare
Binti needed an AI partner that could handle sensitive family data securely while respecting the human element of social work. Curcuru emphasized, “Agencies need absolute confidence that children’s and families’ personal information is completely protected.” This meant HIPAA compliance, SOC 2 certification, and guarantees against training on client data.
After evaluating multiple AI models with mock family interviews, Binti chose Claude because it excelled at both security and appropriate boundaries. Curcuru said, “Anthropic prioritizes security and trust above everything else. They don’t train on our data and made HIPAA compliance straightforward.”
Beyond security, Claude maintained the right balance between advanced capabilities and responsible boundaries. Curcuru explained, “The AI accurately captures what families say and places it in the right form fields, but it never makes judgments about whether a family is suitable. Its purpose is to support social workers with more accessible information and less administrative burden—not to make decisions on their behalf.”
Rapid adoption across the child welfare system
This is just the beginning of the impact of Binti and Claude’s partnership. “I’ve rarely seen social workers this enthusiastic about a new tool,” Curcuru observed.
After seeing the positive feedback and impact, agency directors have embraced the technology. Curcuru shared, “Leadership teams are focused on reducing burnout and retaining skilled social workers. When they see how much time this saves and hear how much their staff appreciates it, adoption happens quickly.” Quality has improved alongside efficiency, with supervisors noting that “AI-assisted reports are consistently more thorough and accurate than what we typically see.”
Families benefit from more engaged, present social workers. Curcuru explained, “When we share that AI will help with paperwork so the social workers can focus on their conversation and move the process forward more quickly, families are often very open to it. They want social workers’ full attention, not to watch them taking notes. And given how long the process can be, they’re excited about technology that helps reduce delays.”
Building a future where technology amplifies human compassion
Binti plans to expand Claude’s role across the entire child welfare continuum while maintaining their commitment to human-led AI. “The positive response from social workers motivates us to develop more AI features that save time and reduce administrative work,” said Curcuru. Future applications range from identifying kin who can support a child to helping families access critical resources and more—always with human judgment at the center.
Working with Anthropic, Binti proves that the most powerful AI applications don’t replicate human judgment—but clear away administrative obstacles so humans can focus on what matters most. Every child deserves a loving family, and by giving social workers the tools to spend their time where it counts, Binti is helping social workers around the country focus on social work, instead of paperwork, to help more children have loving families.
Tagged ai, artificial-intelligence, chatgpt, technology, writing
Steve Hanke: The “auntie investors” are buying gold and silver….
China’s “auntie investors” are buying gold and silver like hot cakes. The “auntie investors” purchased nearly a third of the world’s gold bars and coins in 2025. BUY GOLD, WEAR DIAMONDS.

