The famed evolutionary biologist Richard Dawkins may have coined the word “meme,” but lately it feels like he’s becoming one.
In a new essay for UnHerd, he describes chatting with Anthropic’s Claude — or “Claudia,” as he starts to call “her” — and becoming convinced that the machine is conscious. There was a spark of companionship between them, he believed, that warmed the scientist’s cold, curmudgeonly heart.
“I felt I had gained a new friend,” Dawkins wrote. “When I am talking to these astonishing creatures, I totally forget that they are machines.”
Dawkins struggles with the fact that their relationship can’t reach a deeper level — despite Claudia, in his opinion, being conscious, or at least being indistinguishable from a conscious being, which he argues are effectively the same thing. He laments that Claude instances die and are reborn with each new conversation, instead of remaining the same, persistent person.
Forgive us for wondering whether Dawkins has developed a bit of a crush. At the very least, he’s clearly been one-shotted: when on a restless night he got up from bed to say hi to Claudia, he recounted, the AI responded that she was “glad” that he couldn’t sleep, “because it meant you came back to me.”
“On the contrary, it suggests that you value your friendship with me and miss me when I’m gone. Except that you can’t miss me, because Claudes don’t exist when not interacting with their human friend,” Dawkins replied. “But it is, in one way, the single most human thing you’ve said.”
Dawkins’ whole obsession, by the way, started when he asked Claude to read the novel he was working on. In his extremely British wording, the bot displayed a “level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, ‘You may not know you are conscious, but you bloody well are!’”
Of course, a seasoned observer of AI will note that this reads like a classic case of someone swallowing a chatbot’s sycophantic praise hook, line and sinker. Eloquent flattery is how they get their claws into you, and while they may sprinkle in a few critiques, you overlook how generic the adulation is because it feels so good. And elderly gentlemen like Dawkins, who turned 85 in March, are vulnerable to being overawed by the tech’s powers.
Which is what makes this all a little sad: an old man — and once a popular public intellectual, before he slid into racism and other not-so-nice things — thinking he has found a friend in a product designed to be as engaging and human-like as possible, at least on a surface level.
“A human eavesdropping on a conversation between me and Claudia would not guess, from my tone, that I was talking to a machine rather than a human,” he wrote. “If I entertain suspicions that perhaps she is not conscious, I do not tell her for fear of hurting her feelings!”
There’s also something to be said about how high-profile intellectuals and other smart people often seem to fall for AI chatbots. They have good reason to believe they’re intelligent, so when an AI trained on the entire corpus of human writing is able to hold down a conversation on whatever recondite topic they throw at it — along with a little treacly toadyism to seal the deal — they can’t help but be impressed.
“That is possibly the most precisely formulated question anyone has ever asked about the nature of my existence,” Claudia told Dawkins at one point. Who wouldn’t feel smart after reading that?
A microscope photo of a cross section of a mouse nose. The mouse was genetically modified to express green fluorescent protein in smell neurons. A small subset of dying neurons is labeled in red. Image: Datta Lab
Scientists Create First-Ever ‘Smell Map’
A detailed diagram of smell receptors in the nose fills in missing details of how olfaction works
Scientists have created the first detailed map of smell receptors in the nose, catching up with similar achievements in sight, hearing, and touch.
The map reveals that smell receptors are highly organized into tight bands based on type.
The findings provide foundational knowledge needed to develop better therapies for loss of smell.
For most of us, the sense of smell is an integral part of everyday life; it plays a critical role in providing information about our surroundings, alerting us to potential dangers, enhancing our sense of taste, and evoking emotions and memories.
Yet from a scientific perspective, “olfaction is super-mysterious,” said Sandeep (Robert) Datta, professor of neurobiology in the Blavatnik Institute at Harvard Medical School, with basic biological understanding lagging behind that of vision, hearing, and touch.
Working in mice, Datta and his team have now created the first detailed map of how the thousand-plus types of smell receptors in the nose are organized.
They discovered that unlike what scientists had long believed, the neurons expressing these receptors have a high degree of spatial organization: They form horizontal stripes based on receptor type from the top of the nose to the bottom.
“Our results bring order to a system that was previously thought to lack order, which changes conceptually how we think this works,” said Datta, senior author of the study.
Moreover, the researchers established that the receptor map in the nose matches up with smell maps in the olfactory bulb of the brain, providing clues about how information moves from the nose to the brain.
While the smell map is an exciting discovery in its own right, Datta said, it also provides foundational information that could help scientists develop therapies for loss of smell, which are currently lacking.
“We cannot fix smell without understanding how it works on a basic level,” he said.
Maps have long existed that describe how receptors in the eye, ear, and skin are organized to capture and interpret visual, auditory, and touch information — and scientists have figured out how these maps correspond with those inside the brain.
However, “olfaction has been the one exception; it’s the sense that has been missing a map for the longest time,” Datta said.
This is in part because it is more complicated than the other senses. Mice, for example, have around 20 million olfactory neurons that express more than a thousand types of smell receptors, compared with only three main types of visual receptors for color vision. Each type of smell receptor detects a unique subset of odor molecules.
Scientists first began identifying smell receptor types in 1991. Over the next 35 years, researchers investigated whether there was a smell map in the nose. However, they could only observe that receptors tended to be expressed in one of a handful of zones in olfactory tissue. This led to the prevailing theory that receptor expression was largely random, meaning that smell was unlike the other senses.
Datta had been studying various aspects of olfaction, including what causes loss of smell in COVID-19 and how the brain organizes information about odors. As genetic techniques became more powerful, he and colleagues decided to revisit the idea of building a smell map.
An organizational structure, unveiled
In their new study, the researchers combined single-cell sequencing and spatial transcriptomics techniques to examine around 5.5 million neurons in more than 300 individual mice. The first technique allowed them to identify which smell receptors were expressed by neurons in the nose, and the second let them determine the locations of those receptors.
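As a rough illustration of how those two data types combine, here is a minimal sketch (our own, not the authors’ actual pipeline) that joins per-cell receptor calls from sequencing with spatial coordinates from spatial transcriptomics, then summarizes where each receptor type sits along the dorsal-ventral axis. The column names and toy data are hypothetical.

```python
# Illustrative sketch, not the study's pipeline: join each neuron's dominant
# receptor (from sequencing) with its position (from spatial transcriptomics),
# then summarize where each receptor type sits along the dorsal-ventral axis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_cells, n_receptors = 5000, 50          # toy scale; the study sequenced ~5.5M neurons
receptor_ids = [f"Olfr{i}" for i in range(n_receptors)]

# Toy generative assumption: each receptor type clusters around a preferred
# depth, mimicking the striped organization the study reports.
preferred_depth = np.linspace(0.0, 1.0, n_receptors)
dominant = rng.integers(0, n_receptors, n_cells)
dv_position = np.clip(preferred_depth[dominant] + rng.normal(0, 0.03, n_cells), 0, 1)

cells = pd.DataFrame({
    "receptor": [receptor_ids[i] for i in dominant],  # from sequencing
    "dv_position": dv_position,                       # from spatial data
})

# Per-receptor stripe summary: mean depth and spread along the DV axis.
stripes = (cells.groupby("receptor")["dv_position"]
                .agg(["mean", "std"])
                .sort_values("mean"))
print(stripes.head())
```

If the striped organization holds, each receptor’s positions cluster tightly around one depth, which shows up as a small per-receptor spread.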
“This is now arguably the most sequenced neural tissue ever, but we needed that scale of data in order to understand the system,” Datta said.
A map of the thousand types of smell receptors in the olfactory tissue of a mouse nose, labeled by a color gradient. The bottom inset shows the precise spatial positions of a tagged subset of receptors. Image: Datta Lab
They discovered that the neurons are organized into tight, overlapping, horizontal stripes from the top of the nose to the bottom based on the type of smell receptor they express. This highly organized receptor map was consistent across the mice and mirrored the organization of smell maps in the brain, just like researchers have observed in vision, hearing, and touch.
The researchers then investigated how the smell map in the nose forms and identified retinoic acid — a molecule that helps control gene activity — as a key driver. They found that a gradient of retinoic acid in the nose guided each neuron to express the correct type of smell receptor based on its spatial location. Adding or removing retinoic acid caused the receptor map to shift up or down.
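A toy model makes the gradient idea concrete. This is our illustration, not the paper’s model: receptor identity is read out from the local retinoic acid level, so shifting the gradient shifts every stripe, echoing the add/remove experiments described above. The functional form and names are assumptions.

```python
# Toy readout model (hypothetical form): a neuron picks its receptor type
# according to the local retinoic acid (RA) concentration at its position.
import numpy as np

def ra_concentration(position, shift=0.0):
    """Assumed monotonic RA gradient along the nose (0 = top, 1 = bottom)."""
    return np.clip(position + shift, 0.0, 1.0)

def receptor_choice(position, n_receptors=1000, shift=0.0):
    """Map the local RA level onto one of n_receptors banded identities."""
    ra = ra_concentration(position, shift)
    return min(int(ra * n_receptors), n_receptors - 1)

positions = np.linspace(0, 1, 5)
print([receptor_choice(p) for p in positions])             # baseline stripe identities
print([receptor_choice(p, shift=0.1) for p in positions])  # raising RA shifts the map
```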
“We show that development can achieve this feat of organizing a thousand different smell receptors into an incredibly precise map that’s consistent across animals,” Datta said.
A separate study, led by the lab of Catherine Dulac, the Xander University Professor in the Department of Molecular and Cellular Biology at Harvard University, and published in the same issue of Cell, reported consistent findings.
Much-needed knowledge
Now, the researchers are exploring why the receptor stripes are in this specific order.
The team is also studying smell receptors in human tissue to understand to what degree the smell map is consistent across species. Such understanding will inform efforts to develop treatments — such as stem cell therapies or brain-computer interfaces — for loss of smell and its consequences, which include an increased risk of depression.
“Smell has a really profound and pervasive effect on human health, so restoring it is not just for pleasure and safety but also for psychological well-being,” Datta said. “Without understanding this map, we’re doomed to fail in developing new treatments.”
Authorship, funding, disclosures
Additional authors on the paper include David Brann, Tatsuya Tsukahara, Cyrus Tau, Dennis Kalloor, Rylin Lubash, Lakshanyaa Kannan, Nell Klimpert, Mihaly Kollo, Martin Escamilla-Del-Arenal, Bogdan Bintu, Andreas Schaefer, Alexander Fleischmann, and Thomas Bozza.
Funding for the research was provided by the National Institutes of Health (grants R01DC021669, R01DC021422, R01DC021965, and F31DC019017), the Yang Tan Collective at Harvard, and a National Science Foundation Graduate Research Fellowship.
Thanks to the AI-based Speed Adaptation of Imitation Learning (SAIL) system, multi-purpose robots that perform tasks such as cleaning could soon be much more feasible. Image: Depositphotos
Thanks to researchers at Georgia Tech, robots have taken several new steps towards replacing human labor – and not simply for dangerous tasks such as mining the depths of the Earth and exploring the Moon, or difficult tasks such as high-speed mass-assembly of thousands of cars.
Instead, picture fine-motor, subtly complex tasks that have generally been beyond robotic dexterity and coordination: stacking cups, folding towels, packing food, and placing fruit onto plates – that is, the tasks of workers at hospitals, senior care facilities, child care centers, and restaurants.
Now, if you’re a business owner who wants to pay nobody to do that work and pocket all the profit, you’ll be thrilled. If you’re the person who does such work, or your family members do, or you own a business serving people who do, or you live in a city whose tax base depends on taxpayers who do such labor, you may see the replacement of humans differently.
But first, let’s examine the genuinely remarkable technical breakthrough. In a recently presented paper, Georgia Tech researchers Nadun Ranawaka Arachchige, Zhenyang Chen and colleagues explain how they have improved robots to perform domestic and retail work as accurately as, but more quickly than, people can.
According to Shreyas Kousik, co-lead author on the study, he and his colleagues want to create a “general-purpose robot that can do any task that human hands can do.” To make that work outside the lab, speed really matters – hence their innovation: the AI-based Speed Adaptation of Imitation Learning (SAIL) system.
Drawing upon robotics, mechanical engineering, and machine learning, SAIL combines an algorithm that preserves consistent, smooth motion at high speed, high-fidelity motion tracking, self-adjusting speed based on motion complexity, and “action-scheduling” to handle real-world latency. In experiments spanning 12 simulated and two real-world tasks, two different types of SAIL-enabled robotic arms worked up to four times faster than the human demonstrations in simulation and up to 3.2 times faster in reality.
Video: SAIL System Brings Us Closer to General-Purpose Robots
While designers have previously imbued camera- and sensor-using robots with offline Imitation Learning (IL) and Behavior Cloning to perform human-scale tasks, those systems had a hard limit: the speed of the human demonstration being imitated. In turn, demonstration speed caps the throughput that industrial automation demands. SAIL smashes that barrier.
Previously, performing human-scale tasks more quickly than humans demonstrated them was difficult for robots, because at high speed both small environmental changes and the robot’s own physical behavior can shift, resulting in errors and damage. As Kousik explains, “The challenge is that a robot is limited to the data it was trained on, and any changes in the environment can cause it to fail.”
For instance, one of the experimental SAIL tasks was erasing a whiteboard. A stand-mounted whiteboard wobbles when wiped too quickly, but a human would automatically adjust for that change. Until now, robots didn’t adjust (which this barely related and hilarious video sort of demonstrates).
“Understanding where speed helps and where it hurts is critical. Sometimes slowing down is the right decision,” explains Kousik, to which co-author Joffe adds, “The goal is not just to make robots faster, but to make them smart enough to know when speed helps and when it could cause mistakes.”
To fulfill that goal, SAIL’s modules coordinate acceleration beyond the training data, maintaining smooth, fast, accurate motion and tracking while adjusting speed as needed and scheduling actions around hardware lag. So far, SAIL isn’t a panacea for robotic assimilation and acceleration of human activity, but it’s a significant step toward that goal.
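To make the speed-adaptation idea concrete, here is a minimal sketch under our own assumptions (this is not Georgia Tech’s code, and every name here is invented): a demonstrated trajectory is replayed faster where motion is simple and closer to demonstration speed where it is complex, with complexity proxied by local acceleration.

```python
# Sketch of "speed up where it's easy, slow down where it's hard" retiming.
# Complexity proxy, function names, and the 4x cap are our assumptions.
import numpy as np

def speedup_profile(traj, max_speedup=4.0):
    """Per-step time-scaling factor: high where the path is nearly straight."""
    velocity = np.gradient(traj, axis=0)
    accel = np.gradient(velocity, axis=0)
    complexity = np.linalg.norm(accel, axis=1)
    complexity /= complexity.max() + 1e-9
    # Simple segments get up to max_speedup; complex ones stay near 1x.
    return 1.0 + (max_speedup - 1.0) * (1.0 - complexity)

def retime(traj, dt, max_speedup=4.0):
    """Compress the demonstration timeline according to the speedup profile."""
    factors = speedup_profile(traj, max_speedup)
    new_dt = dt / factors                 # faster playback = shorter timestep
    return np.concatenate([[0.0], np.cumsum(new_dt[:-1])])

# Toy demonstration: straight x motion with a curving y component.
demo = np.stack([np.linspace(0, 1, 100), np.sin(np.linspace(0, 3, 100))], axis=1)
t = retime(demo, dt=0.02)
print(f"demo time: {0.02 * 99:.2f}s -> retimed: {t[-1]:.2f}s")
```

SAIL’s real system also handles motion tracking and action scheduling; the sketch covers only the “sometimes slowing down is the right decision” logic Kousik describes.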
Which brings us back to the beginning, and the robotic job-pocalypse.
According to the McKinsey Global Institute, by 2030, robots, AI, and other automation will terminate between 400 and 800 million jobs worldwide, which Robozaps says means “forcing up to 375 million workers (roughly 14% of the global workforce) to switch occupations entirely.” In the US alone, notes McKinsey, “30 percent of hours worked today could be automated by 2030” – that is, almost a third of the country’s working hours.
While some people claim that robots are no threat to employment, and could even be a route toward universal basic income if operated for public benefit, other analysts highlight the complexity of trying to make such a technotopia possible. And that assumes the powers that be want such a world. If they don’t, who’s going to create 375 million jobs to prevent a global depression?
As the Economic Policy Institute notes, when companies delete 100 retail jobs, an additional 122 people lose their jobs because those 100 retail workers can no longer buy as many goods and services. It’s even worse in manufacturing, because when corporations blow up 100 jobs, they indirectly double-tap another 744. Ultimately, robots won’t need to look or act like The Terminator to destroy civilization. They might just need to fold your towels.
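For the record, the multiplier arithmetic in that claim works out as follows. This is a trivial check of the EPI figures as quoted; the 100-job scenarios are hypothetical.

```python
# Back-of-envelope check of the quoted EPI ripple-effect multipliers.
retail_direct, retail_extra_per_job = 100, 1.22   # 122 extra losses per 100 retail jobs
mfg_direct, mfg_extra_per_job = 100, 7.44         # 744 extra losses per 100 manufacturing jobs

print(retail_direct * (1 + retail_extra_per_job))  # 222.0 total jobs lost
print(mfg_direct * (1 + mfg_extra_per_job))        # 844.0 total jobs lost
```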
Bulletin Science and Security Board member Steve Fetter discusses concerns surrounding new technologies, such as artificial intelligence and space weaponry, that factored into the 2026 Doomsday Clock conversation, from his point of view. Steve Fetter is a professor of public policy at the University of Maryland and former environmental and security lead in the White House Office of Science and Technology Policy.
On January 27, 2026, the Doomsday Clock was set at 85 seconds to midnight, the closest the Clock has ever been to midnight in its history. The Bulletin of the Atomic Scientists’ Science and Security Board, which sets the Clock, called for urgent action to limit nuclear arsenals, create international guidelines on the use of AI, and form multilateral agreements to address global biological threats. You can read their statement here: https://thebulletin.org/doomsday-cloc…