I suggest we round up everyone that ever stepped foot on Epstein’s festering island and return them. Then drop a nuclear bomb on the place.
* He told Epstein that he had finally persuaded Gordon Brown to stand down and leave No 10, five hours before it became more widely known
* He forwarded him an email sent to Gordon Brown disclosing that the government was considering selling assets to reduce government debt. Brown later followed through on the advice
* The same email included sensitive discussions on tax policy – tax relief for businesses, potential flagship electoral pledges on tax
* Mandelson also gave Epstein advance notice of a €500 billion bailout of the Eurozone – he emailed him and then spoke to him about it on the eve of the announcement
* Another email he forwarded looked at how to bolster business lending in wake of financial crisis and included excoriating criticism of Mervyn King
Feeling out of step with the world is a common, often distressing, experience where you may feel disconnected from your surroundings, your own life, or the prevailing pace and values of modern society. This feeling can stem from intense stress, trauma, burnout, or a deeper existential, cultural, or personal mismatch.
Here is a breakdown of why this happens and how to manage it, based on psychological and therapeutic perspectives.
What It Feels Like
Depersonalization/Derealization: A sense that you are an outside observer of your own life, watching it like a movie, or feeling that the world is surreal, dreamlike, or hidden behind a pane of glass.
“Going Through the Motions”: A feeling of being on autopilot, lacking emotional depth, or acting like a robot.
Disconnection from Others: Feeling that people around you are operating on a different frequency, making you feel lonely or like an outsider, even in a crowd.
Existential Fatigue: A deep sense of alienation, questioning the purpose of the modern, fast-paced world, or finding it difficult to relate to cultural, political, or social norms.
Potential Causes
Chronic Stress & Burnout: When the body and mind are under constant pressure, they may “shut down” or dissociate to protect themselves from overwhelming emotion.
Anxiety & Depression: These can create a “false” reality, making you feel detached and causing you to withdraw from daily life.
Trauma or Early Life Experiences: A history of childhood emotional neglect can cause a persistent feeling of not belonging or being “on the outside”.
Rapid Cultural Shifts: Feeling that your personal values, pace, or beliefs no longer align with a rapidly changing world.
“Observer” Personality: Sometimes this feeling is a result of high sensitivity or a tendency to overthink and over-analyze the state of the world.
How to Reconnect (Practical Steps)
If this feeling is persistent, it is important to seek support from a mental health professional, such as a therapist specializing in Cognitive Behavioral Therapy (CBT).
Grounding Techniques: Bring yourself back to the present moment through your senses.
5-4-3-2-1 Technique: Identify 5 things you can see, 4 you can touch, 3 you can hear, 2 you can smell, and 1 you can taste.
Physical Activity: Walk, run, or do light exercise to re-engage with your body.
Limit Information Overload: Take a break from social media and news cycles to reduce environmental stress.
Focus on the “Micro”: Instead of trying to fix large, overwhelming world problems, focus on your immediate environment—your home, your friends, and small acts of kindness.
Be Kind to Yourself: Accept that this is a difficult feeling rather than fighting it, which can increase anxiety.
Connect with Others: Reach out to trusted friends or family, even if you feel like withdrawing.
If this feeling of detachment is intense, lasts for weeks, or hinders your daily functioning, consider reaching out to a healthcare provider or using resources like a suicide and crisis hotline.
==============
The Observer Personality:
The “Observer” (often associated with Enneagram Type 5) is an analytical, curious, and reserved individual driven to understand the world by gathering knowledge and observing from a distance. They are independent thinkers who value competence, often becoming experts in their fields. They may, however, struggle with emotional detachment, isolation, and practical, daily tasks.
Key Characteristics of the Observer Personality
Analytical & Curious: Observers possess an intense desire to learn, investigating everything rather than accepting conventional wisdom.
Independent & Detached: They value autonomy and often detach from emotions or social situations to conserve energy.
Innovative & Detail-Oriented: They are often visionary, capable of seeing new perspectives and remembering minute details.
Reserved & Private: Observers may be secretive and, when overwhelmed, can become isolated or aloof.
Core Fear: They fear being useless, helpless, or incapable, which drives their pursuit of knowledge.
Strengths
Deep Insight: Exceptional ability to concentrate on complex ideas.
Innovation: They often think outside the box, leading to unique, creative solutions.
Objectivity: Capable of viewing situations rationally without emotional bias.
Weaknesses & Challenges
Isolation: A tendency to withdraw from social interaction.
Over-analyzing: They may get stuck in the planning phase, struggling to take action.
Detachment: Sometimes struggle to make effective human connections or express emotions.
Famous Examples
Individuals often cited as having an Observer-type personality include Albert Einstein, Stephen Hawking, Bill Gates, Tim Burton, and Vincent van Gogh.
President Donald Trump speaks to the media before boarding Marine One on January 9, 2026. The day before, the President told New York Times reporters, “If it expires, it expires,” referring to New START—the last remaining bilateral nuclear arms control treaty between Washington and Moscow, which expires on February 5. (Photo: White House/Molly Riley)
For decades, nuclear weapons have been treated as the ultimate arbiter of international politics. They were supposed to deter great-power war, impose caution on leaders, and anchor what strategists liked to call strategic stability. Today, that framework is eroding in plain sight. Yet the reaction from policymakers and much of the expert community remains oddly muted.
Put simply, nuclear weapons are no longer functioning as a decisive factor in global security.
For almost four years, Russia—the world’s largest nuclear power—has been subjected to missile strikes carried out with systems supplied by several other nuclear-armed states. The United Kingdom now openly speaks of developing new tactical ballistic missiles for Kyiv and of placing “leading-edge weapons” directly into the hands of Ukrainians. Russia itself employs its nuclear-capable Oreshnik intermediate-range ballistic missiles as if they were any other conventional weapon system for punishing Ukrainian infrastructure. Meanwhile, US President Donald Trump casually commented on New START—the last remaining bilateral nuclear arms control treaty between Washington and Moscow, which expires on February 5—“If it expires, it expires.” And former Russian President Dmitry Medvedev, currently serving as Deputy Chairman of the Russian Security Council, stated, “No START-4 is better than a treaty that only masks mutual distrust and provokes an arms race in other countries,” referring to what may come next after New START expires.
This is not how deterrence was supposed to work.
Shock technology. The traditional logic was straightforward. Nuclear weapons were so destructive that their mere existence would impose discipline and responsibility on those who possess them. Escalation would be tightly managed as a result, red lines respected, and arms control treated as a shared survival mechanism rather than a conditional concession. That logic has not vanished overnight—but it is slowly and decisively losing its force. And I’ve been able to experience this shift firsthand.
In 2010, I was involved in promoting New START in Russia as part of the Russian Center for Policy Research (PIR Center)—a Moscow-based nonprofit organization then carrying out research and policy work in arms control and WMD nonproliferation in collaboration with researchers from other nuclear-weapon states. At the time, nuclear arms control was still widely understood on both sides as the backbone of strategic stability. Even amid deep mistrust, there was a shared principle that the nuclear domain had to remain insulated from day-to-day geopolitical confrontation.
Historically, nuclear weapons were a shock technology that emerged from the horrific atomic bombings of Hiroshima and Nagasaki. They rebuilt international relations not just because of their destructive power, but because they fundamentally changed the risk calculus for those on both sides of these new weapons. However, no disruptive technology retains its dominance forever. Over time, adversaries adapt, political taboos erode, and most importantly, new tools emerge that change the balance again.
All available evidence suggests that we are living through such a transition now with nuclear weapons.
Displaced. One likely successor to nuclear weapons’ sole dominance on the strategic value ladder is AI technology, which could either be used to power new weapons systems or be integrated into existing infrastructure, such as the command, control, and communications systems of nuclear forces. Either AI technology itself will become the primary strategic weapon—or it will enable the rapid creation of alternatives that render nuclear arsenals increasingly irrelevant to real-world outcomes.
AI technology already compresses decision-making timelines and enables continuous competition below the threshold of declared war. It allows countries to exert coercive pressure through cyber operations, information manipulation, autonomous systems, and precision-strike capabilities that do not trigger the same existential fear as nuclear escalation would. But AI-powered weapon systems can nonetheless reshape battlefields and, potentially, geopolitical realities.
In this environment, nuclear weapons begin to look strangely blunt. They are catastrophic, but unusable. They inspire fear, but not necessarily restraint. They no longer prevent adversaries from striking directly at a nuclear-armed state’s territory, infrastructure, or proxies. Instead, they sit in the background while conflicts are fought with tools that are faster, cheaper, and politically easier to employ. But nuclear weapons are still hanging in the air, should a crisis escalate to that level.
On the margins, the displacement of nuclear weapons also helps explain a puzzling dynamic in today’s US-Russian relationship. The traditional Soviet and Russian so-called “lever of strategic stability”—that is, the implicit warning that escalation could lead to nuclear catastrophe—appears to have lost much of its influence in Washington. US policymakers increasingly behave as if nuclear risk can be managed, compartmentalized to a limited exchange, or simply accepted as the price of pursuing other strategic goals.
From a classical deterrence perspective, this would have once been unthinkable.
The real-world risk is not that nuclear weapons will suddenly disappear from global politics. They will not. But they might persist more as symbols while losing their practical role as stabilizers, creating a more dangerous world in which countries are neither safely deterred nor meaningfully disarmed.
At the same time, AI-driven competition between nuclear-armed states risks producing a new kind of instability in which escalation is constant, ambiguous, and difficult to control. Unlike with nuclear arms control, which relied on relatively slow-moving technologies and verifiable limits, risk reduction of AI-powered weapon systems must deal with technologies that evolve rapidly and lack transparency. Despite some efforts toward regulation of AI, the habits and institutions designed to manage nuclear risk remain poorly suited to this new reality.
Nuclear deterrence is not collapsing with a bang. It is fading, quietly and unevenly, as the strategic center of gravity shifts elsewhere. To preserve stability, the decline of nuclear deterrence can no longer be ignored. Otherwise, the next shock—technological or geopolitical—will catch us unprepared.
The human desire to live both longer and better has exploded into a cultural obsession. The global wellness economy—spanning fitness, nutrition, mindfulness, and biohacking—was valued at $5.6 trillion in 2022, with projections to reach $8.5 trillion by 2027. In the US and UK, 70 percent of consumers reported spending on healthy aging and longevity-related products or services, highlighting the widespread commitment to improving healthspan. Meanwhile, elite investors pour resources into anti-aging and life extension biotechnology, generating an unprecedented market expected to reach $64 billion by 2026. From fascination with centenarians to lifestyle trends to tech entrepreneurs aspiring to live forever, interest in aging reflects how deeply we value the quantity and quality of our days.
In the Mair Laboratory of Harvard’s T.H. Chan School of Public Health, Maria Perez-Matos, PhD ’25, set out to understand why certain individuals live longer, healthier lives than others.
“Some people within the same environment, within the same kind of context, or even twins – they live for different times,” Perez-Matos says. “So we wanted to understand, is there something that explains it? If so, how can we say that based on these biomarkers, this or that type of intervention would be better for you?”
For Perez-Matos, the question of aging is not just about adding years to life, but about adding healthy years. She notes that the average American’s quality of life declines sharply over the last decade. Rather than pursuing a goal to live to 200 years old, she hopes to ensure individuals can experience a good quality of life until their final day.
Maria Perez-Matos analyzing lipids by thin-layer chromatography. Each band represents a different class of lipids in C. elegans samples, August 2021.
The Splice of Life
To study longevity, Perez-Matos turned to C. elegans, tiny transparent worms that self-fertilize and reproduce nearly identical versions of themselves. Despite having identical genetics and living in highly controlled conditions, some worms aged faster than others.
“They all eat exactly the same,” Perez-Matos says. “And even with that, there’s heterogeneity.”
Focusing on molecular “splicing factors,” or proteins that regulate how RNA (ribonucleic acid, the messenger molecule that helps turn DNA instructions into proteins) is cut and assembled, Perez-Matos discovered differences that correlated with lifespan. Using fluorescent tags, she watched individual worms’ cells and saw which splicing patterns persisted into midlife. Those maintaining a “young” pattern tended to live longer.
Digging deeper, she examined what distinguished these longer-living worms. By analyzing over 500 lipids, she discovered that the worms that aged more gracefully had consistently higher levels of oleic acid, a fatty acid found in olive oil.
What is amazing about Maria’s work is that she kept both [genetics and environment] the same, yet still some animals aged much more slowly than others. –Professor William Mair
“There’s so much literature about people in the Mediterranean eating a lot of olive oil and aging better, even centenarians,” Perez-Matos says. “But it’s meaningful to see it in the context of a controlled experiment where all the worms ate the same diet, and yet those that age better just have more oleic acid.”
Her findings suggest that aging isn’t determined solely by genes or environment. Subtle molecular differences, like patterns in RNA splicing or fat content, can influence how long and how well an organism lives. Understanding these differences may guide interventions to keep humans healthier for longer.
“Most variation between how different individuals age is due to differences in their genetics, their environment, or a combination of the two,” says William Mair, professor of molecular metabolism at the Harvard Chan School and Perez-Matos’s faculty advisor. “What is amazing about Maria’s work is that she kept both of those things the same, yet still some animals aged much more slowly than others. So, the big question is: Why? Is it luck? Is it random? Maria’s work, alongside another former graduate student, suggests the differences lie in how different individuals process lipids. If this turns out to also be true in humans, it might have real therapeutic value for older adults.”
By investigating these mechanisms, we move one step closer to not only extending but also improving human quality of life.
Bedside Perspective
Perez-Matos traces her fascination with the human body to her childhood in Bucaramanga, Colombia, a medium-sized city where “everybody knew everybody” and she could walk to school. Her parents were both physicians, so she overheard snippets of their conversations about medicine throughout her upbringing. She recalls the excitement she felt when her dad was on call for work, and how she’d beg to go to the hospital with him.
“Take me with you,” she would say. “I want to understand. I want to go!”
Perez-Matos’ interest in science cemented as a middle schooler the first time she opened a textbook and encountered an illustration of a cell with organelles drawn in. Then, she watched an animation of intercellular movement.
“Everything was so beautifully organized,” Perez-Matos says. “How is it possible that we have so many things happening right now [in the body], and things work out? There are so many processes. The human body is just amazing, and that’s something I’m going to carry with me forever.”
Motivated by curiosity, Perez-Matos completed medical school in Colombia before starting as a research fellow at Beth Israel, a Harvard-affiliated hospital. There, she used mouse models to study how liver disease develops, and realized she wanted to pursue research at a deeper level.
“It was super captivating,” she says. “I wanted to dedicate myself to research.”
As a PhD student at Harvard’s Kenneth C. Griffin Graduate School of Arts and Sciences, she found alignment with her lab’s ethos of addressing aging itself as a process in order to provide more effective interventions for multiple age-related diseases.
“There’s a ton of research to try to understand heart disease, osteoporosis—all these things that happen in the aging population,” Perez-Matos says. “But the main risk factor is aging. So, if we target the aging process itself, we would make all of it better.”
Reflecting on her path, she emphasizes the importance of connecting research to real-world impact, noting how easy it can be to get lost in the details of basic science as well as the importance of remembering what matters most.
“Being a physician at the bedside gave me perspective,” she says. “I needed to always ask—how is this going to impact someone’s life and make it better? I always want to carry that idea, even if I’m just playing around with worms now.”
There’s a ton of research to try to understand heart disease, osteoporosis—all these things that happen in the aging population. But the main risk factor is aging. So, if we target the aging process itself, we would make all of it better. –Maria Perez-Matos
Telling the Truth
Perez-Matos recently ran the Brooklyn Marathon after picking up the hobby of running during her PhD.
Perez-Matos’ research shows that even genetically identical worms in controlled environments age differently, pointing to subtle molecular factors that could inform human interventions. Looking forward, she hopes her discoveries will contribute to personalized interventions that ensure the final years are as healthy and vibrant as possible.
“There are always so many more questions that you could ask,” she says. “How does oleic acid regulate aging? How is the level of oleic acid regulated to begin with?”
Today, Perez-Matos has moved into healthcare consulting, but she brings the same rigor, problem-solving, and curiosity to her work. She credits Professor Mair with teaching her the importance of communicating science clearly.
“That’s something the world needs a lot,” she says. “To be able to communicate science and get people excited with the truth. He’s really good at that, and that’s something that I definitely tried to learn from my time with him.”
Having completed her PhD, Perez-Matos embraces the potential to explore the possibilities that lie ahead.
“I feel very free right now,” she says. “I honestly feel I accomplished my childhood dream. Now I get to see what comes next.”
Maria Perez-Matos’ research was supported by the Ruth L. Kirschstein National Research Service Award Individual Predoctoral Fellowship from the National Institutes of Health.
Banner photo courtesy of Harvard T. H. Chan School of Public Health.
How does artificial intelligence think? The big surprise is that it ‘intuits’
Something extraordinary has happened, even if we haven’t fully realized it yet: algorithms are now capable of solving intellectual tasks. These models are not replicas of human intelligence. Their intelligence is limited, different, and — curiously — turns out to work in a way that resembles intuition. This is one of the seven lessons we’ve learned so far about them and about ourselves
Artificial intelligence was born in the 1950s when a group of pioneers wondered if they could make their computers “think.” After 70 years, something tremendous has happened: neural networks are solving cognitive tasks. For 300,000 years, these tasks were the exclusive domain of living beings. Not anymore. It’s not controversial: it’s a fact. And it has happened suddenly. Machine learning with neural networks has solved problems that eluded machines for decades:
ChatGPT, Gemini, or Claude handle language
They have fluent and encyclopedic knowledge
They write code at a superhuman level
They describe images at a human level
They transcribe at a human level
They translate at a human level
Other models generate realistic images, predict hurricanes, win at Go, and drive cars in Phoenix.
AI researcher François Chollet sums it up this way: “In the last decade, deep learning has achieved nothing less than a technological revolution.” Each of these achievements would have been a remarkable breakthrough on its own. Solving them all with a single technique is like discovering a master key that unlocks every door at once.
Why now? Three pieces converged: algorithms, computing power, and massive amounts of data. We can even put faces to them, because behind each element is a person who took a gamble. Academic Geoffrey Hinton kept working on neural networks long after his colleagues had abandoned them. Jensen Huang, Nvidia’s CEO, kept improving parallel-processing chips far beyond what video games — the core of his business — actually needed. And researcher Fei-Fei Li risked her career to build ImageNet, an image collection that seemed absurdly large at the time.
But these three pieces aligned. In 2012, two of Hinton’s students, Ilya Sutskever and Alex Krizhevsky, combined them to achieve spectacular success: they built AlexNet, a neural network capable of “seeing” — recognizing images — far better than anything that had come before.
The rumor spread quickly through the laboratories: this worked. Hinton’s team had found a formula: networks, data, and computing in gigantic quantities.
The impact of this transformation will be profound. As Ethan Mollick, one of the most astute observers of our time, has said, even if AI development were to stop tomorrow, “we would still have a decade of changes across entire industries.”
No one knows how far these machines will go. Between the hype promising superhuman intelligence every year and the denial that ignores the obvious, we are missing something crucial: current AI models are already fascinating. The latest big surprise is that they work in a way that closely resembles intuition. Their development forces us to confront deep questions — about how they work and how we do. And they have already given us some answers.
Lesson 1. Machines can learn
It’s the most overlooked and least controversial lesson: machines learn. James Watt’s centrifugal governor (1788) already adjusted the speed of steam engines without supervision. It was the beginning of a discovery: you don’t need to fully specify the rules of a device for it to work.
Classical programming consists of defining rules and expecting answers: “This is how you add; now add 2 and 2.” But machine learning works the other way around: you give it examples, and the system discovers the rules. Chollet sums it up in Deep Learning with Python: “A machine learning system is trained rather than programmed.” The most powerful example is the large language models like Claude, Gemini, or ChatGPT. They are neural networks — tangles of computational units connected in successive layers, imitating the neurons of the brain — with hundreds of billions of parameters that are adjusted during training. Every success and every mistake tweaks those parameters. This learning process is extremely long, opaque due to its sheer scale, but not mysterious. It’s mathematical. And it has worked.
Hidden here is what the field calls the “bitter lesson.” For decades, experts tried to encode their knowledge into machines. They failed. What succeeded was creating the conditions for knowledge to emerge… and stepping aside.
Lesson 2. AIs have emergent abilities
The bitter lesson hides a profound idea: something complex can emerge from simple processes. It is the principle that organizes life. Evolution didn’t design each organ; it set in motion a process — mutation, recombination, selection — and from there sprang eyes, wings, brains. Now we have replicated that process in machines.
Let’s return to large language models (LLMs). Without going into the divisive topic of defining their capabilities, it’s clear they handle language with flexibility. You can converse with ChatGPT; it detects sarcasm and responds to changing contexts. But no one programmed it with grammar or explained sarcasm to it. How is this possible? Most experts assumed that mastering language fluently would require general intelligence (something human‑like across a wide range of tasks). Yet it turned out that the simple training task of “predict the next word” had emergent power.
The procedure is simple. The first training of an LLM is what we call pretraining: the model is presented with snippets of text from the internet and asked to predict the next token (a word or fragment). When it fails, the parameters responsible for the error are adjusted. This simple process, repeated an astronomical number of times, ends up creating models that predict words very well… and that learn much more along the way.
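The pretraining loop described above can be sketched in miniature. As a stand-in for a neural network, this toy uses simple counts over a ten-word corpus (both the corpus and the count-based model are illustrative assumptions, not how a real LLM is built), but the training signal is the same one the article describes: observe text, learn to predict the next token.

```python
from collections import defaultdict, Counter

# Pretraining in miniature: scan a corpus and, for every word, record what
# tends to come next. A real LLM adjusts billions of parameters instead of
# counts, but the objective is identical: predict the next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1       # each observed pair updates the model

def predict(word):
    """Most likely next token after `word` under the learned counts."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' — it followed 'the' more often than 'mat' or 'fish'
```

What makes actual LLMs interesting is everything this toy cannot do: with enough scale, the same objective forces the model to absorb geography, arithmetic, and “theory of mind,” as the Kilimanjaro and Jen examples below illustrate.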
For Carlos Riquelme, a researcher at Microsoft AI, this was a crucial discovery. He shared his astonishment from 2017, when he was working at Google Brain: “I was amazed by the power of scaling. By scaling a very simple method [predicting the next word] with large amounts of data and powerful models, it became clear that it was possible to largely replicate human linguistic capacity.”
The key is this: to predict words you need to grasp complex concepts. Suppose you have to complete these sentences, which Blaise Agüera y Arcas compiles in a brilliant new book, What Is Intelligence?
“In stacks of pennies, the height of Mount Kilimanjaro is…”
“After her dog died, Jen didn’t leave the house for days, so her friends decided…”
Filling these gaps requires geographical knowledge, mathematics, common sense, and even “theory of mind” to put yourself in Jen and her friends’ shoes. In this way, “what seemed like a narrow linguistic task — predicting the next word — turned out to encompass all tasks,” argues Agüera y Arcas in his book. For example, when presented with the phrase about Kilimanjaro, Google’s latest Gemini 3 model thinks for a minute and then responds (correctly): “The height of the mountain is approximately 3.9 million cents.” For Jen’s friends, it offers different options, from “showing up at her door with ice cream” to “taking turns visiting her.”
Agüera y Arcas provides another example in an email exchange: multiplication. An LLM like Gemini or ChatGPT might have memorized common calculations from the internet, such as “2 × 7.” But they also predict “871 × 133,” which doesn’t appear anywhere. “Successfully performing these operations generally implies having inferred non-trivial algorithms from examples.” It’s the trick of emergence: a simple process produces complex capabilities.
Lesson 3. AI learns with a ‘crappy evolution’
Our AI doesn’t learn like people. A child is born with a lot of innate “machinery,” and then learns with little data and few experiences, with remarkable efficiency. The pre-training of an LLM is very different: it begins with a blank slate and learns very slowly from millions of examples. Take cats: training an AI to identify cats in an image requires thousands of photos, but a two-year-old can recognize them after seeing three.
There’s a better analogy: evolution. The renowned researcher Andrej Karpathy describes LLM training as a kind of “crappy evolution.” In a recent podcast, he spoke about how surprising its development has been: “we can build these ghosts, spirit-like entities, by imitating internet documents. This works. It’s a way to bring you up to something that has a lot of built-in knowledge and intelligence in some way, similar to maybe what evolution has done.”
Why does this analogy work? Because evolution also arises from a vast number of tiny trials and changes (mutations and symbiosis), repeated over millions of years. It’s a slow, blind process that ends up embedding capabilities in living beings: instincts, reflexes, or patterns. It’s chaotic and noisy. That’s why each gene influences many characteristics of an organism; and that’s why medicines have side effects, because they disrupt circuits other than the intended one.
Actually, the surprise of an AI mastering language by predicting words — the one that struck Riquelme — reminds me of the shock that Darwin caused: how to accept that animals, people, and even their poems are the byproduct of a blind process that only seeks to “maximize copies”?
Lesson 4. We have automated cognition
François Chollet is cautious when speaking about artificial intelligence. He prefers to call it “cognitive automation.” True intelligence, in his view, will require something that current models lack: “cognitive autonomy,” the ability to confront the unknown and adapt. Chollet wants to curb the hype, although at the same time he acknowledges a remarkable achievement: we are automating cognitive tasks on an industrial scale. “What’s surprising about deep learning is how much can be achieved with pure memorization,” he says via email. For him, LLMs lack the deliberate and efficient reasoning that humans possess. That’s why, initially, they made crude errors, such as miscounting the r’s in the word “raspberries.” What surprises him is that they can often compensate for that lack of reasoning: “If you have almost infinite experience, intelligence isn’t so critical.”
Other experts see more than just memorization: are we witnessing real intelligence? Andrej Karpathy believes so. On Dwarkesh Patel’s podcast, he explained that pretraining does two things: “Number one, it’s picking up all this knowledge, as I call it. Number two, it’s actually becoming intelligent. By observing the algorithmic patterns in the internet, it boots up all these little circuits and algorithms inside the neural net to do things like in-context learning.”
Jeremy Berman, creator of the leading algorithm in the ARC Prize, became convinced by the new reasoning‑focused models, which emerged about a year ago and include learning stages without examples: “I was surprised that you can train a model on its own attempts, and that allows it to think and learn for itself,” he explains in a message exchange. He is referring to reinforcement learning (RL), described by the creators of DeepSeek R1. “If you present a math problem to an LLM, let them answer 100 times, and train them on their best answers, the LLM learns. This goes beyond the pure memorization of pre-training.” Thanks to this, the latest generation of models can solve long, complex problems that their versions from just a few months ago were unable to handle.
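Berman’s recipe — answer many times, keep the best attempts, train on them — can be sketched as a toy loop. Everything below is illustrative, not any lab’s real training code: `TabularPolicy` is a minimal stand-in for an LLM (just a per-question probability distribution over answers), and “training” is a simple weight nudge toward attempts that scored well.

```python
import random

class TabularPolicy:
    """Toy stand-in for an LLM: a per-question distribution over answers."""
    def __init__(self, answers):
        self.answers = answers
        self.weights = {}  # (question, answer) -> unnormalized weight

    def _dist(self, question):
        w = [self.weights.get((question, a), 1.0) for a in self.answers]
        total = sum(w)
        return [x / total for x in w]

    def sample(self, question):
        return random.choices(self.answers, weights=self._dist(question))[0]

    def reinforce(self, question, answer, lr):
        # Nudge probability mass toward a rewarded answer.
        key = (question, answer)
        self.weights[key] = self.weights.get(key, 1.0) + lr

    def prob(self, question, answer):
        return self._dist(question)[self.answers.index(answer)]

def self_train_on_best_answers(problems, policy, n=50, lr=0.5, epochs=5):
    """Rejection-sampling sketch: answer each problem n times,
    keep the correct attempts, and train on those."""
    for _ in range(epochs):
        for question, correct in problems:
            attempts = [policy.sample(question) for _ in range(n)]
            for answer in attempts:
                if answer == correct:  # the "best answers"
                    policy.reinforce(question, answer, lr)
    return policy

random.seed(0)
policy = TabularPolicy(answers=list(range(10)))
problems = [("2+3", 5), ("4+4", 8)]
before = policy.prob("2+3", 5)   # uniform prior over 10 answers: 0.1
self_train_on_best_answers(problems, policy)
after = policy.prob("2+3", 5)    # mass has shifted toward the right answer
```

The point the sketch makes is Berman’s: the model generates its own training signal, so learning continues beyond whatever was in the pre-training data.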
A man interacts with a robot at the pib@school fair in Hanover, in March 2025. Julian Stratenschulte (DPA / AFP / ContactoPhoto)
Carlos Riquelme points out that there are semantic differences: “Algorithms, mechanisms, and ways of reasoning can be memorized. Some might call that ‘circuits for thinking,’ while others might say that the algorithm was simply memorized, like how we learn to add.” Furthermore, Riquelme emphasizes that real-world learning is more active: once the model generates its responses and receives feedback, “it can end up memorizing something that wasn’t in its initial data.”
Agüera y Arcas believes that AI is real intelligence — without further qualifiers. He thinks models like Gemini, ChatGPT, or Claude display a capacity for generalization that goes beyond what we can reasonably call memorization. And he is surprised that Chollet argues otherwise: “What evidence is he looking for?” he asked. For Agüera y Arcas, nature already shows that intelligence comes in many forms, such as Portia spiders — which plan cunning attacks — or octopuses, which distribute their cognition across their arms.
Lesson 5. It’s more intuitive than rational
Here comes the paradox. In the 20th‑century imagination, robots were supposed to be cold, rational machines: logic, calculation, deduction. But today’s AI works the other way around.
Psychologist Daniel Kahneman, Nobel laureate in Economics, distinguished two systems in human thought. System 1 is fast, automatic, and intuitive; it uses shortcuts and patterns. System 2 is slow, deliberate, and rational; it requires conscious effort. The former dominates our lives. A baby knows how to nurse, we pull our hand away from fire, we hold a glass with just the right force… Things that took robots decades to learn.
What’s surprising is that early LLMs operate much closer to System 1 than to System 2. They mimic the style of Jorge Luis Borges, they write with rhythm. They do things without being able to “explain” how — just like us. They don’t reason step by step; they’ve absorbed patterns at massive scale. And deliberate reasoning — deduction, counting, logic — is precisely where they struggle.
That’s why recent innovations seek to add reasoning. The aforementioned “reasoning” models — from DeepSeek R1 to the current generation — write out their thinking before responding, generating more cautious, step-by-step thought processes. Other advances pursue the same goal: reinforcement training that rewards correct reasoning, launching multiple attempts in parallel and selecting the best one, or connecting models to external mathematical tools that overcome their limitations. It’s an attempt to build an artificial System 2. And it’s working, at least to some extent: the newest models excel at math and spatial tests where the first LLMs failed.
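The “multiple attempts in parallel, select the best one” idea fits in a few lines. This is a hedged sketch under toy assumptions: `noisy_adder` stands in for a model that is usually but not always right, and `verify` plays the role of the external tool that checks each attempt.

```python
import random

def best_of_n(question, generate, verify, n=20):
    """Best-of-N inference: launch n independent attempts and
    return the one the verifier scores highest."""
    attempts = [generate(question) for _ in range(n)]
    return max(attempts, key=lambda answer: verify(question, answer))

def noisy_adder(question):
    # Stand-in for an LLM: usually right, sometimes off by one.
    a, b = map(int, question.split("+"))
    return a + b + random.choice([0, 0, 0, 1, -1])

def verify(question, answer):
    # External check: confirm the sum by subtraction (1 = passes, 0 = fails).
    a, b = map(int, question.split("+"))
    return 1 if answer - a == b else 0

random.seed(1)
result = best_of_n("17+25", noisy_adder, verify)
```

With 20 attempts and a 60% per-attempt success rate, the chance that no attempt passes the check is about 0.4²⁰, which is negligible; that is why selection on top of sampling lifts accuracy well above a single try.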
Lesson 6. Humans are also patterns
If AI captures patterns and uses them to write, translate, and draw, an uncomfortable question arises: how much of us works the same way? Perhaps more than we like to admit. We already know that our brains rely on constant shortcuts. Watching how machine learning performs, it’s hard not to wonder how much of what we traditionally attribute to talent or experience — writing with rhythm, choosing colors, sensing tone — is actually automatic.
The history of science is the history of dismantling our exceptionalism. Galileo showed we are not the center of the universe; Darwin showed we are not special creations; neuroscience showed we are not one but many. Now AI adds another lesson: abilities we once felt were uniquely ours can be captured through large‑scale pattern recognition.
Lesson 7. We are living through a Cambrian explosion of AI
Today’s AI systems have deep limitations. Andrej Karpathy listed some of them in the podcast mentioned earlier: “They don’t have enough intelligence, they’re not multimodal enough, they can’t do computer use […] They don’t have continual learning,” he said. “They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues.”
But a new avenue has opened up with the successful formula of networks, data, and computing. That’s why we are living through a Cambrian period. Like that explosion of life 540 million years ago, when a multitude of animals suddenly appeared, we are now seeing an explosion of novel approaches to artificial intelligence. There are laboratories exploring fascinating directions: Sara Hooker is working on adaptive systems, Fei-Fei Li wants to build models that decipher the physical world, and François Chollet is researching AIs that write and evolve their own logical programs.
How far will these attempts go? Blaise Agüera y Arcas sees no limits: “Our brains achieve incredible feats of reasoning, creativity, and empathy. And those brains are circuitry: they are not something supernatural. And if they are not supernatural, they can be modeled computationally.”
Will we achieve this in practice? Nobody knows. But the question is no longer theoretical. We are watching algorithms learning to read, write, program, and reason — clumsily at times, astonishingly at others. Whatever happens from now on, this has already occurred. And it is extraordinary. Perhaps it will end up being the most important transformation of our lives.