Futurism: Oxford Researcher Warns That AI Is Heading for a Hindenburg-Style Disaster

“It was a dead technology from that point on.”

By Frank Landymore

Published Feb 18, 2026 12:12 PM EST

A city skyline featuring a mix of modern and older buildings under a partly cloudy sky. The shadow of an airship has been overlaid onto them.
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

Is the AI bubble going to burst? Will it cause the economy to go up in flames? Both analogies may be apt if you’re to believe one leading expert’s warning that the industry may be heading for a Hindenburg-style disaster.

“The Hindenburg disaster destroyed global interest in airships; it was a dead technology from that point on, and a similar moment is a real risk for AI,” Michael Wooldridge, a professor of AI at Oxford University, told The Guardian.

It may be hard to believe now, but before the German airship crashed in 1937, ponderously large dirigibles once seemed to represent the future of globe-spanning transportation, in an era when commercial airplanes, if you’ll permit the pun, hadn’t really taken off yet. And the Hindenburg, the largest airship in the world at the time, was the industry’s crowning achievement — as well as a propaganda vehicle for Nazi Germany.

At over 800 feet long, it wasn’t far off the length of the Titanic — another colossus whose name became synonymous with disaster — and regularly ferried dozens of passengers on transatlantic trips. All those ambitions were vaporized, however, when the ship suddenly burst into flames as it attempted a landing in New Jersey. The horrific fireball was attributed to a critical flaw: the vast volume of flammable hydrogen the ship was filled with, ignited by an unfortunate spark.

The inferno was filmed, photographed, and broadcast around the world in a media frenzy that sealed the airship industry’s fate. Could AI, with more than a trillion dollars of investment behind it, head the same way? It’s not unthinkable.

“It’s the classic technology scenario,” Wooldridge told the newspaper. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”

Perhaps AI could be responsible for a catastrophic spectacle, such as a deadly software update for self-driving cars or a bad AI-driven decision that collapses a major company, Wooldridge suggests. But his main concern is the glaring safety flaws still present in AI chatbots, despite their wide deployment. On top of having pitifully weak guardrails and being wildly unpredictable, AI chatbots are designed to affect human-like personas and, to keep users engaged, to be sycophantic.

Together, these traits can encourage a user’s negative thoughts and lead them down mental health spirals fraught with delusions and even full-blown breaks with reality. These episodes of so-called AI psychosis have resulted in stalking, suicide, and murder. AI’s ticking time bomb isn’t a payload of combustible hydrogen, but millions of potentially psychosis-inducing conversations. OpenAI alone has admitted that more than half a million ChatGPT users every week have conversations showing signs of psychosis.

“Companies want to present AIs in a very human-like way, but I think that is a very dangerous path to take,” Wooldridge told The Guardian. “We need to understand that these are just glorified spreadsheets, they are tools and nothing more than that.”

If AI has a place in our future, it should be as cold, impartial assistants — not cloying friends that pretend to have all the answers. A shining example, according to Wooldridge, comes from an early episode of “Star Trek,” in which the Enterprise’s computer says it has “insufficient data” to answer a question (and in a voice that is robotic, not personable).

“That’s not what we get. We get an overconfident AI that says: yes, here’s the answer,” he told The Guardian. “Maybe we need AIs to talk to us in the voice of the ‘Star Trek’ computer. You would never believe it was a human being.”

More on AI: It Turns Out That Constantly Telling Workers They’re About to Be Replaced by AI Has Grim Psychological Effects

Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.
