Elon Musk just announced the merger of his AI startup xAI with SpaceX, forming what's now the highest-valued private company on the planet at a reported $1.25T, combining his rockets, Grok, and the X platform under one entity.
The details:
* xAI will operate as a division within SpaceX, with Musk pitching a vision of launching AI data centers into orbit to overcome Earth's energy constraints.
* The merger comes ahead of an anticipated SpaceX IPO later this year, expected to push the company's valuation to $1.25T.
* Musk estimated that space-based AI compute will be cheaper than traditional data centers within 2-3 years, powered by near-constant solar energy.
He also said space-based data centers will "enable self-growing bases on the Moon, an entire civilization on Mars… and expansion to the Universe."
Why it matters: Elon's tech empire is consolidating fast, with Musk calling this merger "the most ambitious, vertically-integrated innovation engine on (and off) Earth." Data centers in space may sound wild, but Musk isn't alone in eyeing that solution, and with SpaceX now in the mix, nobody is better positioned to own that opportunity.
The Rundown: Swedish researchers just published results from the largest-scale trial of AI-powered breast cancer screening, finding the technology helps radiologists spot a higher percentage of tumors while cutting radiologist workload nearly in half.
The details:
* The two-year study tracked over 100K women to see if AI could catch cancers that traditional screening misses between appointments.
* The AI analyzed mammograms and flagged high-risk cases for radiologists, boosting the detection rate from 74% to 81% without increasing false positives.
* Women in the AI group saw 27% fewer aggressive tumor types and 21% fewer large tumors compared to standard screening alone.
* The system also cut radiologist workload by 44% by handling initial screening and sorting, freeing doctors to focus on the cases that need the most attention.
Why it matters: Between drug discovery, tumor detection, treatment planning, and more, AI is quickly becoming one of the most impactful tools in the cancer fight. With over 2M breast cancer diagnoses each year, scaling this kind of early detection via AI could be life-changing for women across the globe.
I suggest we round up everyone that ever stepped foot on Epstein's festering island and return them. Then drop a nuclear bomb on the place.
* He told Epstein that he had finally persuaded Gordon Brown to stand down and leave No 10 five hours before it became wider public knowledge
* He forwarded him an email sent to Gordon Brown disclosing govt was considering selling assets to reduce govt debt. Brown later followed through on the advice
* The same email included sensitive discussions on tax policy – tax relief for businesses, potential flagship electoral pledges on tax
* Mandelson also gave Epstein advance notice of a €500 billion bailout of the Eurozone – he emailed him and then spoke to him about it on the eve of the announcement
* Another email he forwarded looked at how to bolster business lending in wake of financial crisis and included excoriating criticism of Mervyn King
Feeling out of step with the world is a common, often distressing, experience where you may feel disconnected from your surroundings, your own life, or the prevailing pace and values of modern society. This feeling can stem from intense stress, trauma, burnout, or a deeper existential, cultural, or personal mismatch.
Here is a breakdown of why this happens and how to manage it, based on psychological and therapeutic perspectives.
What It Feels Like
Depersonalization/Derealization: A sense that you are an outside observer of your own life, watching it like a movie, or feeling that the world is surreal, dreamlike, or hidden behind a pane of glass.
“Going Through the Motions”: A feeling of being on autopilot, lacking emotional depth, or acting like a robot.
Disconnection from Others: Feeling that people around you are operating on a different frequency, making you feel lonely or like an outsider, even in a crowd.
Existential Fatigue: A deep sense of alienation, questioning the purpose of the modern, fast-paced world, or finding it difficult to relate to cultural, political, or social norms.
Potential Causes
Chronic Stress & Burnout: When the body and mind are under constant pressure, they may “shut down” or dissociate to protect themselves from overwhelming emotion.
Anxiety & Depression: These can create a “false” reality, making you feel detached and causing you to withdraw from daily life.
Trauma or Early Life Experiences: A history of childhood emotional neglect can cause a persistent feeling of not belonging or being “on the outside”.
Rapid Cultural Shifts: Feeling that your personal values, pace, or beliefs no longer align with a rapidly changing world.
“Observer” Personality: Sometimes this feeling is a result of high sensitivity or a tendency to overthink and over-analyze the state of the world.
How to Reconnect (Practical Steps)
If this feeling is persistent, it is important to seek support from a mental health professional, such as a therapist specializing in Cognitive Behavioral Therapy (CBT).
Grounding Techniques: Bring yourself back to the present moment through your senses.
5-4-3-2-1 Technique: Identify 5 things you can see, 4 you can touch, 3 you can hear, 2 you can smell, and 1 you can taste.
Physical Activity: Walk, run, or do light exercise to re-engage with your body.
Limit Information Overload: Take a break from social media and news cycles to reduce environmental stress.
Focus on the "Micro": Instead of trying to fix large, overwhelming world problems, focus on your immediate environment: your home, your friends, and small acts of kindness.
Be Kind to Yourself: Accept that this is a difficult feeling rather than fighting it, which can increase anxiety.
Connect with Others: Reach out to trusted friends or family, even if you feel like withdrawing.
If this feeling of detachment is intense, lasts for weeks, or hinders your daily functioning, consider reaching out to a healthcare provider or using resources like a suicide and crisis hotline.
==============
The Observer Personality:
The “Observer” (often associated with Enneagram Type 5) is an analytical, curious, and reserved individual driven to understand the world by gathering knowledge and observing from a distance. They are independent thinkers who value competence, often becoming experts in their fields. They may, however, struggle with emotional detachment, isolation, and practical, daily tasks.
Key Characteristics of the Observer Personality
Analytical & Curious: Observers possess an intense desire to learn, investigating everything rather than accepting conventional wisdom.
Independent & Detached: They value autonomy and often detach from emotions or social situations to conserve energy.
Innovative & Detail-Oriented: They are often visionary, capable of seeing new perspectives and remembering minute details.
Reserved & Private: Observers may be secretive and, when overwhelmed, can become isolated or aloof.
Core Fear: They fear being useless, helpless, or incapable, which drives their pursuit of knowledge.
Strengths
Deep Insight: Exceptional ability to concentrate on complex ideas.
Innovation: They often think outside the box, leading to unique, creative solutions.
Objectivity: Capable of viewing situations rationally without emotional bias.
Weaknesses & Challenges
Isolation: A tendency to withdraw from social interaction.
Over-analyzing: They may get stuck in the planning phase, struggling to take action.
Detachment: They sometimes struggle to make effective human connections or express emotions.
Famous Examples
Individuals often cited as having an Observer-type personality include Albert Einstein, Stephen Hawking, Bill Gates, Tim Burton, and Vincent van Gogh.
President Donald Trump speaks to the media before boarding Marine One on January 9, 2026. The day before, the President told New York Times reporters, "If it expires, it expires," referring to New START, the last remaining bilateral nuclear arms control treaty between Washington and Moscow, which expires on February 5. (Photo: White House/Molly Riley)
For decades, nuclear weapons have been treated as the ultimate arbiter of international politics. They were supposed to deter great-power war, impose caution on leaders, and anchor what strategists liked to call strategic stability. Today, that framework is eroding in plain sight. Yet the reaction from policymakers and much of the expert community remains oddly muted.
Put simply, nuclear weapons are no longer functioning as a decisive factor in global security.
For almost four years, Russia, the world's largest nuclear power, has been subjected to missile strikes carried out with systems supplied by several other nuclear-armed states. The United Kingdom now openly speaks of developing new tactical ballistic missiles for Kyiv and of placing "leading-edge weapons" directly into the hands of Ukrainians. Russia itself employs nuclear-capable intermediate-range ballistic Oreshnik missiles as if they were any other conventional weapon system for punishing Ukrainian infrastructure. Meanwhile, US President Donald Trump casually commented on New START, the last remaining bilateral nuclear arms control treaty between Washington and Moscow, which expires on February 5: "If it expires, it expires." And former Russian President Dmitry Medvedev, currently serving as a Deputy Chief of the Russian Security Council, stated, "No START-4 is better than a treaty that only masks mutual distrust and provokes an arms race in other countries," referring to what may come after New START expires.
This is not how deterrence was supposed to work.
Shock technology. The traditional logic was straightforward. Nuclear weapons were so destructive that their mere existence would impose discipline and responsibility on those who possess them. Escalation would be tightly managed as a result, red lines respected, and arms control treated as a shared survival mechanism rather than a conditional concession. That logic has not vanished overnight, but it is slowly and decisively losing its force. And I've been able to experience this shift firsthand.
In 2010, I was involved in promoting New START in Russia as part of the Russian Center for Policy Research (PIR Center), a Moscow-based nonprofit organization then carrying out research and policy work in arms control and WMD nonproliferation in collaboration with researchers from other nuclear-weapon states. At the time, nuclear arms control was still widely understood on both sides as the backbone of strategic stability. Even amid deep mistrust, there was a shared principle that the nuclear domain had to remain insulated from day-to-day geopolitical confrontation.
Historically, nuclear weapons were a shock technology, announced to the world by the horrific atomic bombings of Hiroshima and Nagasaki. They reshaped international relations not just because of their destructive power, but because they fundamentally changed the risk calculus on both sides of these new weapons. However, no disruptive technology retains its dominance forever. Over time, adversaries adapt, political taboos erode, and, most importantly, new tools emerge that change the balance again.
All available evidence suggests that we are living through such a transition now with nuclear weapons.
Displaced. One likely successor to nuclear weapons' sole dominance at the top of the strategic value ladder could be AI technology, which could either power new weapons systems or be integrated into existing infrastructure, such as the command, control, and communications systems of nuclear forces. Either AI technology itself will become the primary strategic weapon, or it will enable the rapid creation of alternatives that render nuclear arsenals increasingly irrelevant to real-world outcomes.
AI technology already compresses decision-making timelines and enables continuous competition below the threshold of declared war. It allows countries to exert coercive pressure through cyber operations, information manipulation, autonomous systems, and precision-strike capabilities that do not trigger the same existential fear as nuclear escalation would. But AI-powered weapon systems can nonetheless reshape battlefields and, potentially, geopolitical realities.
In this environment, nuclear weapons begin to look strangely blunt. They are catastrophic, but unusable. They inspire fear, but not necessarily restraint. They no longer prevent adversaries from striking directly at a nuclear-armed state's territory, infrastructure, or proxies. Instead, they sit in the background while conflicts are fought with tools that are faster, cheaper, and politically easier to employ. Yet nuclear weapons still hang in the air, should a crisis escalate to that level.
On the margins, the displacement of nuclear weapons also helps explain a puzzling dynamic in today's US-Russian relationship. The traditional Soviet and Russian so-called "lever of strategic stability" (the implicit warning that escalation could lead to nuclear catastrophe) appears to have lost much of its influence in Washington. US policymakers increasingly behave as if nuclear risk can be managed, compartmentalized to a limited exchange, or simply accepted as the price of pursuing other strategic goals.
From a classical deterrence perspective, this would have once been unthinkable.
The real-world risk is not that nuclear weapons will suddenly disappear from global politics. They will not. But they might persist more as symbols while losing their practical role as stabilizers, creating a more dangerous world in which countries are neither safely deterred nor meaningfully disarmed.
At the same time, AI-driven competition between nuclear-armed states risks producing a new kind of instability in which escalation is constant, ambiguous, and difficult to control. Unlike with nuclear arms control, which relied on relatively slow-moving technologies and verifiable limits, risk reduction of AI-powered weapon systems must deal with technologies that evolve rapidly and lack transparency. Despite some efforts toward regulation of AI, the habits and institutions designed to manage nuclear risk remain poorly suited to this new reality.
Nuclear deterrence is not collapsing with a bang. It is fading, quietly and unevenly, as the strategic center of gravity shifts elsewhere. To preserve stability, the decline of nuclear deterrence can no longer be ignored. Otherwise, the next shock, technological or geopolitical, will catch us unprepared.
The human desire to live both longer and better has exploded into a cultural obsession. The global wellness economy, spanning fitness, nutrition, mindfulness, and biohacking, was valued at $5.6 trillion in 2022, with projections to reach $8.5 trillion by 2027. In the US and UK, 70 percent of consumers reported spending on healthy aging and longevity-related products or services, highlighting the widespread commitment to improving healthspan. Meanwhile, elite investors pour resources into anti-aging and life extension biotechnology, generating an unprecedented market expected to reach $64 billion by 2026. From fascination with centenarians to lifestyle trends to tech entrepreneurs aspiring to live forever, interest in aging reflects how deeply we value the quantity and quality of our days.
In the Mair Laboratory of Harvard's T.H. Chan School of Public Health, Maria Perez-Matos, PhD '25, set out to understand why certain individuals live longer, healthier lives than others.
"Some people within the same environment, within the same kind of context, or even twins – they live for different times," Perez-Matos says. "So we wanted to understand, is there something that explains it? If so, how can we say that based on these biomarkers, this or that type of intervention would be better for you?"
For Perez-Matos, the question of aging is not just about adding years to life, but about adding healthy years. She notes that the average American's quality of life declines sharply over the last decade of life. Rather than pursuing the goal of living to 200 years old, she hopes to ensure individuals can experience a good quality of life until their final day.
Maria Perez-Matos analyzing lipids by thin-layer chromatography. Each band represents a different class of lipids in C. elegans samples, August 2021.
The Splice of Life
To study longevity, Perez-Matos turned to C. elegans, tiny transparent worms that self-fertilize and reproduce nearly identical versions of themselves. Despite having identical genetics and living in highly controlled conditions, some worms aged faster than others.
"They all eat exactly the same," Perez-Matos says. "And even with that, there's heterogeneity."
Focusing on molecular "splicing factors," or proteins that regulate how RNA (ribonucleic acid, the messenger molecule that helps turn DNA instructions into proteins) is cut and assembled, Perez-Matos discovered differences that correlated with lifespan. Using fluorescent tags, she watched individual worms' cells and saw which splicing patterns persisted into midlife. Those maintaining a "young" pattern tended to live longer.
Digging deeper, she examined what distinguished these longer-living worms. By analyzing over 500 lipids, she discovered that the worms that aged more gracefully had consistently higher levels of oleic acid, a fatty acid found in olive oil.
What is amazing about Maria's work is that she kept both [genetics and environment] the same, yet still some animals aged much more slowly than others. – Professor William Mair
"There's so much literature about people in the Mediterranean eating a lot of olive oil and aging better, even centenarians," Perez-Matos says. "But it's meaningful to see it in the context of a controlled experiment where all the worms ate the same diet, and yet those that age better just have more oleic acid."
Her findings suggest that aging isnât determined solely by genes or environment. Subtle molecular differences, like patterns in RNA splicing or fat content, can influence how long and how well an organism lives. Understanding these differences may guide interventions to keep humans healthier for longer.
"Most variation between how different individuals age is due to differences in their genetics, their environment, or a combination of the two," says William Mair, professor of molecular metabolism at the Harvard Chan School and Perez-Matos's faculty advisor. "What is amazing about Maria's work is that she kept both of those things the same, yet still some animals aged much more slowly than others. So, the big question is: Why? Is it luck? Is it random? Maria's work, alongside another former graduate student, suggests the differences lie in how different individuals process lipids. If this turns out to also be true in humans, it might have real therapeutic value for older adults."
By investigating these mechanisms, we move one step closer to not only extending but also improving human quality of life.
Bedside Perspective
Perez-Matos traces her fascination with the human body to her childhood in Bucaramanga, Colombia, a medium-sized town where "everybody knew everybody" and she could walk to school. Her parents were both physicians, so she overheard snippets of their conversations about medicine throughout her upbringing. She recalls the excitement she felt when her dad was on call, and how she'd beg to go to the hospital with him.
"Take me with you," she would say. "I want to understand. I want to go!"
Perez-Matos' interest in science was cemented in middle school, the first time she opened a textbook and encountered an illustration of a cell with its organelles drawn in. Then she watched an animation of intercellular movement.
"Everything was so beautifully organized," Perez-Matos says. "How is it possible that we have so many things happening right now [in the body], and things work out? There are so many processes. The human body is just amazing, and that's something I'm going to carry with me forever."
Motivated by curiosity, Perez-Matos completed medical school in Colombia before starting as a research fellow at Beth Israel, a Harvard-affiliated hospital. There, she used mouse models to study how liver disease develops, and realized she wanted to pursue research at a deeper level.
"It was super captivating," she says. "I wanted to dedicate myself to research."
As a PhD student at Harvard's Kenneth C. Griffin Graduate School of Arts and Sciences, she found alignment with her lab's ethos of addressing aging itself as a process in order to provide more effective interventions for multiple age-related diseases.
"There's a ton of research to try to understand heart disease, osteoporosis – all these things that happen in the aging population," Perez-Matos says. "But the main risk factor is aging. So, if we target the aging process itself, we would make all of it better."
Reflecting on her path, she emphasizes the importance of connecting research to real-world impact, noting how easy it can be to get lost in the details of basic science and how important it is to remember what matters most.
"Being a physician at the bedside gave me perspective," she says. "I needed to always ask: How is this going to impact someone's life and make it better? I always want to carry that idea, even if I'm just playing around with worms now."
There's a ton of research to try to understand heart disease, osteoporosis – all these things that happen in the aging population. But the main risk factor is aging. So, if we target the aging process itself, we would make all of it better. – Maria Perez-Matos
Telling the Truth
Perez-Matos recently ran the Brooklyn Marathon, having picked up the hobby of running during her PhD.
Perez-Matos' research shows that even genetically identical worms in controlled environments age differently, pointing to subtle molecular factors that could inform human interventions. Looking forward, she hopes her discoveries will contribute to personalized interventions that ensure the final years are as healthy and vibrant as possible.
"There are always so many more questions that you could ask," she says. "How does oleic acid regulate aging? How is the level of oleic acid regulated to begin with?"
Today, Perez-Matos has moved into healthcare consulting, but she brings the same rigor, problem-solving, and curiosity to her work. She credits Professor Mair with teaching her the importance of communicating science clearly.
"That's something the world needs a lot," she says. "To be able to communicate science and get people excited with the truth. He's really good at that, and that's something that I definitely tried to learn from my time with him."
Having completed her PhD, Perez-Matos embraces the possibilities that lie ahead.
"I feel very free right now," she says. "I honestly feel I accomplished my childhood dream. Now I get to see what comes next."
Maria Perez-Matos' research was supported by the Ruth L. Kirschstein National Research Service Award Individual Predoctoral Fellowship from the National Institutes of Health.
Banner photo courtesy of Harvard T. H. Chan School of Public Health.
How does artificial intelligence think? The big surprise is that it "intuits"
Something extraordinary has happened, even if we haven't fully realized it yet: algorithms are now capable of solving intellectual tasks. These models are not replicas of human intelligence. Their intelligence is limited, different, and, curiously, turns out to work in a way that resembles intuition. This is one of the seven lessons we've learned so far about them and about ourselves.
Artificial intelligence was born in the 1950s when a group of pioneers wondered if they could make their computers "think." After 70 years, something tremendous has happened: neural networks are solving cognitive tasks. For 300,000 years, these tasks were the exclusive domain of living beings. Not anymore. It's not controversial: it's a fact. And it has happened suddenly. Machine learning with neural networks has solved problems that eluded machines for decades:
ChatGPT, Gemini, or Claude handle language
They have fluent and encyclopedic knowledge
They write code at a superhuman level
They describe images at a human level
They transcribe at a human level
They translate at a human level
Other models generate realistic images, predict hurricanes, win at Go, and drive cars in Phoenix.
AI researcher François Chollet sums it up this way: "In the last decade, deep learning has achieved nothing less than a technological revolution." Each of these achievements would have been a remarkable breakthrough on its own. Solving them all with a single technique is like discovering a master key that unlocks every door at once.
Why now? Three pieces converged: algorithms, computing power, and massive amounts of data. We can even put faces to them, because behind each element is a person who took a gamble. Academic Geoffrey Hinton kept working on neural networks long after his colleagues had abandoned them. Jensen Huang, Nvidia's CEO, kept improving parallel-processing chips far beyond what video games (the core of his business) actually needed. And researcher Fei-Fei Li risked her career to build ImageNet, an image collection that seemed absurdly large at the time.
But these three pieces aligned. In 2012, two of Hinton's students, Ilya Sutskever and Alex Krizhevsky, combined them to achieve spectacular success: they built AlexNet, a neural network capable of "seeing" (recognizing images) far better than anything that had come before.
The rumor spread quickly through the laboratories: this worked. Hinton's team had found a formula: networks, data, and computing in gigantic quantities.
The impact of this transformation will be profound. As Ethan Mollick, one of the most astute observers of our time, has said, even if AI development were to stop tomorrow, "we would still have a decade of changes across entire industries."
No one knows how far these machines will go. Between the hype promising superhuman intelligence every year and the denial that ignores the obvious, we are missing something crucial: current AI models are already fascinating. The latest big surprise is that they work in a way that closely resembles intuition. Their development forces us to confront deep questions about how they work and how we do. And they have already given us some answers.
Lesson 1. Machines can learn
It's the most overlooked and least controversial lesson: machines learn. James Watt's centrifugal governor (1788) already adjusted the speed of steam engines without supervision. It was the beginning of a discovery: you don't need to fully specify the rules of a device for it to work.
Classical programming consists of defining rules and expecting answers: "This is how you add; now add 2 and 2." But machine learning works the other way around: you give it examples, and the system discovers the rules. Chollet sums it up in Deep Learning with Python: "A machine learning system is trained rather than programmed." The most powerful example is the large language models like Claude, Gemini, or ChatGPT. They are neural networks (tangles of computational units connected in successive layers, imitating the neurons of the brain) with hundreds of billions of parameters that are adjusted during training. Every success and every mistake tweaks those parameters. This learning process is extremely long and opaque due to its sheer scale, but it is not mysterious. It's mathematical. And it has worked.
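The "trained rather than programmed" idea can be made concrete with a toy sketch (not from the article; plain Python with made-up names): instead of hard-coding the rule for addition, we let two parameters be nudged after every mistake until the rule emerges from examples.

```python
# "Trained rather than programmed": learn y = x1 + x2 from examples
# instead of writing the rule. A deliberately tiny illustration.
import random

random.seed(0)

# Training data: input pairs and their (never explicitly stated) sums.
examples = [((a, b), a + b) for a in range(10) for b in range(10)]

w1, w2 = random.random(), random.random()  # parameters start random
lr = 0.01                                  # learning rate

for _ in range(2000):
    (x1, x2), target = random.choice(examples)
    pred = w1 * x1 + w2 * x2   # the model's guess
    err = pred - target        # every mistake...
    w1 -= lr * err * x1        # ...tweaks the parameters a little
    w2 -= lr * err * x2

# Both weights should end up near 1.0: the system has "discovered"
# addition without ever being told the rule.
print(w1, w2)
print(w1 * 2 + w2 * 2)  # "now add 2 and 2": close to 4
```

The same loop, scaled up to billions of parameters and trillions of examples, is in essence what adjusting a neural network during training means.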
Hidden here is what the field calls the "bitter lesson." For decades, experts tried to encode their knowledge into machines. They failed. What succeeded was creating the conditions for knowledge to emerge… and stepping aside.
Lesson 2. AIs have emergent abilities
The bitter lesson hides a profound idea: something complex can emerge from simple processes. It is the principle that organizes life. Evolution didn't design each organ; it set in motion a process (mutation, recombination, selection) and from there sprang eyes, wings, brains. Now we have replicated that process in machines.
Let's return to large language models (LLMs). Without going into the divisive topic of defining their capabilities, it's clear they handle language with flexibility. You can converse with ChatGPT; it detects sarcasm and responds to changing contexts. But no one programmed it with grammar or explained sarcasm to it. How is this possible? Most experts assumed that mastering language fluently would require general intelligence (something human-like across a wide range of tasks). Yet it turned out that the simple training task of "predict the next word" had emergent power.
The procedure is simple. The first training of an LLM is what we call pretraining: the model is presented with snippets of text from the internet and asked to predict the next token (a word or fragment). When it fails, the parameters responsible for the error are adjusted. This simple process, repeated an astronomical number of times, ends up creating models that predict words very well… and that learn much more along the way.
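The objective itself, predicting the next token, can be illustrated with the simplest possible stand-in: a bigram counter over a toy corpus. This is an illustrative sketch only; real LLMs adjust billions of parameters by gradient descent rather than counting, but the prediction task is the same.

```python
# Toy "predict the next token": count which word follows which in a
# tiny corpus, then predict the most frequently observed continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # record every observed transition

def predict(word):
    """Return the most frequently observed next word."""
    return follows[word].most_common(1)[0][0]

print(predict("sat"))  # -> "on": the only word ever seen after "sat"
print(predict("on"))   # -> "the"
```

Even this trivial model has to absorb a sliver of structure to predict well; scaling the same objective to internet-sized corpora is what forces LLMs to absorb grammar, facts, and far more along the way.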
For Carlos Riquelme, a researcher at Microsoft AI, this was a crucial discovery. He shared his astonishment from 2017, when he was working at Google Brain: "I was amazed by the power of scaling. By scaling a very simple method [predicting the next word] with large amounts of data and powerful models, it became clear that it was possible to largely replicate human linguistic capacity."
The key is this: to predict words you need to grasp complex concepts. Suppose you have to complete these sentences, which Blaise Agüera y Arcas compiles in a brilliant new book, What Is Intelligence?
"In stacks of pennies, the height of Mount Kilimanjaro is…"
"After her dog died, Jen didn't leave the house for days, so her friends decided…"
Filling these gaps requires geographical knowledge, mathematics, common sense, and even "theory of mind" to put yourself in the shoes of Jen and her friends. In this way, "what seemed like a narrow linguistic task – predicting the next word – turned out to encompass all tasks," argues Agüera y Arcas in his book. For example, when presented with the phrase about Kilimanjaro, Google's latest Gemini 3 model thinks for a minute and then responds (correctly): "The height of the mountain is approximately 3.9 million cents." For Jen's friends, it offers different options, from "showing up at her door with ice cream" to "taking turns visiting her."
Agüera y Arcas provides another example in an email exchange: multiplication. An LLM like Gemini or ChatGPT might have memorized common calculations from the internet, such as "2 × 7." But they also predict "871 × 133," which doesn't appear anywhere. "Successfully performing these operations generally implies having inferred non-trivial algorithms from examples." It's the trick of emergence: a simple process produces complex capabilities.
Lesson 3. AI learns with a âcrappy evolutionâ
Our AI doesn't learn like people. A child is born with a lot of innate "machinery," and then learns with little data and few experiences, with remarkable efficiency. The pretraining of an LLM is very different: it begins with a blank slate and learns very slowly from millions of examples. Take cats: training an AI to identify cats in an image requires thousands of photos, but a two-year-old can distinguish them after seeing three.
There's a better analogy: evolution. The renowned researcher Andrej Karpathy describes LLM training as a kind of "crappy evolution." In a recent podcast, he spoke about how surprising its development has been: "we can build these ghosts, spirit-like entities, by imitating internet documents. This works. It's a way to bring you up to something that has a lot of built-in knowledge and intelligence in some way, similar to maybe what evolution has done."
Why does this analogy work? Because evolution also arises from a vast number of tiny trials and changes (mutations and symbiosis), repeated over millions of years. It's a slow, blind process that ends up embedding capabilities in living beings: instincts, reflexes, or patterns. It's chaotic and noisy. That's why each gene influences many characteristics of an organism; and that's why medicines have side effects, because they disrupt circuits other than the intended one.
In fact, the surprise of an AI mastering language by predicting words, the one that struck Riquelme, reminds me of the shock that Darwin caused: how to accept that animals, people, and even their poems are the byproduct of a blind process that only seeks to "maximize copies"?
Lesson 4. We have automated cognition
François Chollet is cautious when speaking about artificial intelligence. He prefers to call it "cognitive automation." True intelligence, in his view, will require something that current models lack: "cognitive autonomy," the ability to confront the unknown and adapt. Chollet wants to curb the hype, although at the same time he acknowledges a remarkable achievement: we are automating cognitive tasks on an industrial scale.

"What's surprising about deep learning is how much can be achieved with pure memorization," he says via email. For him, LLMs lack the deliberate and efficient reasoning that humans possess. That's why, initially, they made crude errors, such as miscounting the r's in the word "raspberries." What surprises him is that they can often compensate for that lack of reasoning: "If you have almost infinite experience, intelligence isn't so critical."
Other experts see more than just memorization: are we witnessing real intelligence? Andrej Karpathy believes so. On Dwarkesh Patel's podcast, he explained that pretraining does two things: "Number one, it's picking up all this knowledge, as I call it. Number two, it's actually becoming intelligent. By observing the algorithmic patterns in the internet, it boots up all these little circuits and algorithms inside the neural net to do things like in-context learning."
Jeremy Berman, creator of the leading algorithm in the ARC Prize, became convinced by the new reasoning-focused models, which emerged about a year ago and include learning stages without examples. "I was surprised that you can train a model on its own attempts, and that allows it to think and learn for itself," he explains in a message exchange. He is referring to reinforcement learning (RL), described by the creators of DeepSeek R1. "If you present a math problem to an LLM, let it answer 100 times, and train it on its best answers, the LLM learns. This goes beyond the pure memorization of pre-training." Thanks to this, the latest generation of models can solve long, complex problems that their versions from just a few months ago were unable to handle.
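The loop Berman describes ("answer 100 times, train on the best answers") can be sketched as a toy simulation. Everything here is illustrative: the "model" is just a probability distribution over candidate answers, where a real system would sample full LLM responses and fine-tune on the ones a grader accepts.

```python
import random

def sample_answer(weights):
    """Draw one candidate answer according to the model's current weights."""
    answers = list(weights)
    return random.choices(answers, weights=[weights[a] for a in answers])[0]

def rl_step(weights, correct, n_samples=100, lr=0.5):
    """Sample n answers, then upweight the ones that were correct."""
    attempts = [sample_answer(weights) for _ in range(n_samples)]
    for a in attempts:
        if a == correct:          # "train on its best answers"
            weights[a] += lr
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

random.seed(0)
# Toy problem: 871 x 133. Initial guesses, mostly wrong:
weights = {"115843": 0.10, "115000": 0.45, "120000": 0.45}
for _ in range(5):
    weights = rl_step(weights, correct="115843")
print(weights)  # probability mass has shifted toward the correct answer
```

The point of the sketch is the mechanism, not the numbers: correct attempts are rare at first, but each round of "keep what worked" makes them more likely in the next round, which is what lets the model improve without labeled examples.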
A man interacts with a robot at the pib@school fair in Hanover, in March 2025. JULIAN STRATENSCHULTE (DPA / AFP / ContactoPhoto)
Carlos Riquelme points out that there are semantic differences: "Algorithms, mechanisms, and ways of reasoning can be memorized. Some might call that 'circuits for thinking,' while others might say that the algorithm was simply memorized, like how we learn to add." Furthermore, Riquelme emphasizes that real-world learning is more active. Once the model generates its responses and receives feedback, "it can end up memorizing something that wasn't in its initial data."
Agüera y Arcas believes that AI is real intelligence, without further qualifiers. He thinks models like Gemini, ChatGPT, or Claude display a capacity for generalization that goes beyond what we can reasonably call memorization. And he is surprised that Chollet argues otherwise: "What evidence is he looking for?" he asked. For Agüera y Arcas, nature already shows that intelligence comes in many forms, such as Portia spiders, which plan cunning attacks, or octopuses, which distribute their cognition across their arms.
Lesson 5. It's more intuitive than rational
Here comes the paradox. In the 20th-century imagination, robots were supposed to be cold, rational machines: logic, calculation, deduction. But today's AI works the other way around.
Psychologist Daniel Kahneman, Nobel laureate in Economics, distinguished two systems in human thought. System 1 is fast, automatic, and intuitive; it uses shortcuts and patterns. System 2 is slow, deliberate, and rational; it requires conscious effort. The former dominates our lives. A baby knows how to nurse, we pull our hand away from fire, we hold a glass with just the right force… Things that took robots decades to learn.
What's surprising is that early LLMs operate much closer to System 1 than to System 2. They mimic the style of Jorge Luis Borges, they write with rhythm. They do things without being able to "explain" how, just like us. They don't reason step by step; they've absorbed patterns at massive scale. And deliberate reasoning (deduction, counting, logic) is precisely where they struggle.
That's why recent innovations seek to add reasoning. The aforementioned "reasoning" models, from DeepSeek R1 to the current generation, write for themselves before responding, generating more cautious and reflective, step-by-step thought processes. Other advances pursue the same goal: reinforcement training that rewards correct reasoning, launching multiple attempts in parallel and selecting the best one, or connecting models to external mathematical tools that overcome their limitations. It's an attempt to build an artificial System 2. And it's working, at least to some extent: the newest models excel at math and spatial tests where the first LLMs failed.
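The "multiple attempts in parallel, select the best" idea (often called best-of-n sampling) is simple enough to sketch. The model is stubbed out here with a deliberately noisy square-root estimator; in a real system, each attempt would be a sampled chain of thought, and the scorer would be a verifier or reward model.

```python
import math

def model_attempt(problem, seed):
    # Stand-in for one sampled attempt: a noisy estimate of sqrt(problem),
    # with a different deterministic perturbation per seed.
    return math.sqrt(problem) + ((seed * 37) % 7 - 3) * 0.1

def verifier_score(problem, answer):
    # Score each attempt: higher is better when answer**2 is near the problem.
    return -abs(answer * answer - problem)

def best_of_n(problem, n=8):
    attempts = [model_attempt(problem, seed) for seed in range(n)]
    return max(attempts, key=lambda a: verifier_score(problem, a))

print(best_of_n(2.0))  # the attempt closest to sqrt(2), about 1.414
```

The design choice worth noting: no single attempt has to be reliable. As long as scoring an answer is easier than producing one, sampling widely and filtering converts an unreliable generator into a much more reliable system.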
Lesson 6. Humans are also patterns
If AI captures patterns and uses them to write, translate, and draw, an uncomfortable question arises: how much of us works the same way? Perhaps more than we like to admit. We already know that our brains rely on constant shortcuts. Watching how machine learning performs, it's hard not to wonder how much of what we traditionally attribute to talent or experience (writing with rhythm, choosing colors, sensing tone) is actually automatic.
The history of science is the history of dismantling our exceptionalism. Galileo showed we are not the center of the universe; Darwin showed we are not special creations; neuroscience showed we are not one but many. Now AI adds another lesson: abilities we once felt were uniquely ours can be captured through large-scale pattern recognition.
Lesson 7. We are living through a Cambrian explosion of AI
Today's AI systems have deep limitations. Andrej Karpathy listed some of them in the podcast mentioned earlier: "They don't have enough intelligence, they're not multimodal enough, they can't do computer use […] They don't have continual learning," he said. "They're cognitively lacking and it's just not working. It will take about a decade to work through all of those issues."
But a new avenue has opened up with the successful formula of networks, data, and computing. That's why we are living through a Cambrian period. Like that explosion of life 540 million years ago, when a multitude of animals suddenly appeared, we are now seeing an explosion of novel approaches to artificial intelligence. Laboratories are exploring fascinating directions: Sara Hooker is working on adaptive systems, Fei-Fei Li wants to build models that decipher the physical world, and François Chollet is researching AIs that write and evolve their own logical programs.
How far will these attempts go? Blaise Agüera y Arcas sees no limits: "Our brains achieve incredible feats of reasoning, creativity, and empathy. And those brains are circuitry: they are not something supernatural. And if they are not supernatural, they can be modeled computationally."
Will we achieve this in practice? Nobody knows. But the question is no longer theoretical. We are watching algorithms learning to read, write, program, and reason, clumsily at times, astonishingly at others. Whatever happens from now on, this has already occurred. And it is extraordinary. Perhaps it will end up being the most important transformation of our lives.