15-20: The number of days before Cuba’s oil reserves run dry, according to the data firm Kpler, should it continue at current levels of demand and domestic production. This comes as Mexico, the largest supplier of crude to Cuba, has halted some oil exports to the island. Cuba’s economy is already deteriorating, but with Venezuela’s oil supplies now shut off, it may get even worse.
Founded in 1945 by Albert Einstein, J. Robert Oppenheimer, and University of Chicago scientists who helped develop the first atomic weapons in the Manhattan Project, the Bulletin of the Atomic Scientists created the Doomsday Clock two years later, using the imagery of apocalypse (midnight) and the contemporary idiom of nuclear explosion (countdown to zero) to convey threats to humanity and the planet. The Doomsday Clock is set every year by the Bulletin’s Science and Security Board in consultation with its Board of Sponsors, which includes eight Nobel laureates. The Clock has become a universally recognized indicator of the world’s vulnerability to global catastrophe caused by man-made technologies.
The Bulletin's Science and Security Board (SASB) is composed of a select group of globally recognized leaders with a specific focus on nuclear risk, climate change, and disruptive technologies.
A year ago, we warned that the world was perilously close to global disaster and that any delay in reversing course increased the probability of catastrophe. Rather than heed this warning, Russia, China, the United States, and other major countries have instead become increasingly aggressive, adversarial, and nationalistic. Hard-won global understandings are collapsing, accelerating a winner-takes-all great power competition and undermining the international cooperation critical to reducing the risks of nuclear war, climate change, the misuse of biotechnology, the potential threat of artificial intelligence, and other apocalyptic dangers. Far too many leaders have grown complacent and indifferent, in many cases adopting rhetoric and policies that accelerate rather than mitigate these existential risks. Because of this failure of leadership, the Bulletin of the Atomic Scientists Science and Security Board today sets the Doomsday Clock at 85 seconds to midnight, the closest it has ever been to catastrophe.
Last year started with a glimmer of hope in regard to nuclear risks, as incoming US President Donald Trump made efforts to halt the Russia-Ukraine war and even suggested that major powers pursue “denuclearization.” Over the course of 2025, however, negative trends—old and new—intensified, with three regional conflicts involving nuclear powers all threatening to escalate. The Russia–Ukraine war has featured novel and potentially destabilizing military tactics and Russian allusions to nuclear weapons use. Conflict between India and Pakistan erupted in May, leading to cross-border drone and missile attacks amid nuclear brinkmanship. In June, Israel and the United States launched aerial attacks on Iranian nuclear facilities suspected of supporting the country’s nuclear weapons ambitions. It remains unclear whether the attacks constrained those efforts—or if they instead persuaded the country to pursue nuclear weapons covertly.
Meanwhile, competition among major powers has become a full-blown arms race, as evidenced by increasing numbers of nuclear warheads and platforms in China, and the modernization of nuclear delivery systems in the United States, Russia, and China. The United States plans to deploy a new, multilayered missile defense system, Golden Dome, that will include space-based interceptors, increasing the probability of conflict in space and likely fueling a new space-based arms race. As these worrying trends continued, countries with nuclear weapons failed to talk about strategic stability or arms control, much less nuclear disarmament, and questions about US extended deterrence commitments to traditional allies in Europe and Asia led some countries without nuclear weapons to consider acquiring them. As we publish this statement, the last major agreement limiting the numbers of strategic nuclear weapons deployed by the United States and Russia, New START, is set to expire, ending nearly 60 years of efforts to constrain nuclear competition between the world’s two largest nuclear countries. In addition, the US administration may be considering the resumption of explosive nuclear testing, further accelerating a renewed nuclear arms race.
An array of adverse trends also dominated the climate change outlook in the past year. The level of atmospheric carbon dioxide—the greenhouse gas most responsible for human-caused climate change—reached a new high, rising to 150 percent of preindustrial levels. Global average temperature in 2024 was the warmest in the 175-year record, and temperatures in 2025 were similar. With the addition of freshwater from melting glaciers and thermal expansion, global average sea level reached a record high. Energized by warm temperatures, the hydrologic cycle became more erratic, with deluges and droughts hopscotching around the globe. Large swaths of Peru, the Amazon, southern Africa, and northwest Africa experienced droughts. For the third time in the last four years Europe experienced more than 60,000 heat-related deaths. Floods in the Congo River Basin displaced 350,000 people, and record rainfall in southeast Brazil displaced over half a million.
The national and international responses to the climate emergency went from wholly insufficient to profoundly destructive. None of the three most recent UN climate summits emphasized phasing out fossil fuels or monitoring carbon dioxide emissions. In the United States, the Trump administration has essentially declared war on renewable energy and sensible climate policies, relentlessly gutting national efforts to combat climate change.
During the past year, developments in four areas of the life sciences have increased potentially catastrophic risks. In December 2024, scientists from nine countries announced the recognition of a potentially existential threat to all life on Earth: the laboratory synthesis of so-called “mirror life.” Those scientists urged that mirror bacteria and other mirror cells—composed of chemically-synthesized molecules that are mirror-images of those found on Earth, much as a left hand mirrors a right hand—not be created, because a self-replicating mirror cell could plausibly evade normal controls on growth, spread throughout all ecosystems, and eventually cause the widespread death of humans, other animals, and plants, potentially disrupting all life on Earth. So far, however, the international community has not arrived at a plan to address this risk.
At the same time, the accelerating evolution of artificial intelligence poses a different sort of biological threat: the potential for the AI-aided design of new pathogens to which humans have no effective defenses. Also, concerns about state-sponsored biological weapons programs have deepened due to the weakening during this past year of international norms and mechanisms for productive engagement. Perhaps of most immediate concern is the rapid degradation of US public health infrastructure and expertise. This dangerously reduces the ability of the United States and other nations to respond to pandemics and other biological threats.
The increasing sophistication of large language models and their applications in critical processes—coupled with lingering concerns about their accuracy and tendency to “hallucinate”—have generated significant public debate over the past year about the potential risks of artificial intelligence. The United States, Russia, and China are incorporating AI across their defense sectors, despite the potential dangers of such moves. In the United States, the Trump administration has revoked a previous executive order on AI safety, reflecting a dangerous prioritization of innovation over safety. And the AI revolution has the potential to accelerate the existing chaos and dysfunction in the world’s information ecosystem, supercharging mis- and disinformation campaigns and undermining the fact-based public discussions required to address urgent major threats like nuclear war, pandemics, and climate change.
These dangerous trends are accompanied by another development that undermines efforts to deal with major global threats: the rise of nationalistic autocracy in countries around the world, including in a number of countries that possess nuclear weapons. Leaders of the United States, Russia, and China greatly vary in their autocratic leanings, but they all have approaches to international relations that favor grandiosity and competition over diplomacy and cooperation. The rise of autocracies is not in itself an existential threat, but an us-versus-them, zero-sum approach increases the risk of global catastrophe. The current autocratic trend impedes international cooperation, reduces accountability, and acts as a threat accelerant, making dangerous nuclear, climatic, and technological threats all the harder to reverse.
Even as the hands of the Doomsday Clock move closer to midnight, there are many actions that could pull humanity back from the brink:
The United States and Russia can resume dialogue about limiting their nuclear arsenals. All nuclear-armed states can avoid destabilizing investments in missile defense and observe the existing moratorium on explosive nuclear testing.
Through both multilateral agreements and national regulations, the international community can take all feasible steps to prevent the creation of mirror life and cooperate on meaningful measures to reduce the prospect that AI be used to create biological threats.
The United States Congress can repudiate President Trump’s war on renewable energy, instead providing incentives and investments that will enable rapid reduction in fossil fuel use.
The United States, Russia, and China can engage in bilateral and multilateral dialogue on meaningful guidelines regarding the incorporation of artificial intelligence in their militaries, particularly in nuclear command and control systems.
Our current trajectory is unsustainable. National leaders—particularly those in the United States, Russia, and China—must take the lead in finding a path away from the brink. Citizens must insist they do so.
Learn more about how each of the Bulletin's areas of concern contributed to the setting of the Doomsday Clock this year:
Nuclear Risk
The lack of arms control talks and a general dearth of leadership on nuclear issues has worsened the nuclear outlook.
Climate Change
Reducing the threat of climate catastrophe requires actions both to reduce the primary cause—the burning of fossil fuels—and to deal with the damage climate change is already causing.
Biological Threats
Four developments—research into self-replicating “mirror life”; AI tools that can design biological threats; state-sponsored biological weapons programs; and the dismantling of US public health efforts—have increased the possibility of bio-catastrophe.
Disruptive Technologies
The increasing sophistication and uncertain accuracy of AI models have generated significant concern about their application in critical processes, particularly in military programs.
About the Bulletin of the Atomic Scientists
At our core, the Bulletin of the Atomic Scientists is a media organization, publishing a free-access website and a bimonthly magazine. But we are much more. The Bulletin’s website, iconic Doomsday Clock, and regular events equip the public, policy makers, and scientists with the information needed to reduce man-made threats to our existence. The Bulletin focuses on three main areas: nuclear risk, climate change, and disruptive technologies, including developments in biotechnology. What connects these topics is a driving belief that because humans created them, we can control them.
The Bulletin is an independent, nonprofit 501(c)(3) organization. We gather the most informed and influential voices tracking man-made threats and bring their innovative thinking to a global audience. We apply intellectual rigor to the conversation and do not shrink from alarming truths.
The Bulletin has many audiences: the general public, which will ultimately benefit or suffer from scientific breakthroughs; policy makers, whose duty is to harness those breakthroughs for good; and the scientists themselves, who produce those technological advances and thus bear a special responsibility. Our community is international, with more than half of our website visitors coming from outside the United States. It is also young. Half are under the age of 35.
The same technology that’s ruining essay writing as a pedagogical tool can help students learn how to reason via conversations.
By: David Weinberger
Recent research from MIT found that students using AI to help them structure and outline essays showed less executive brain activity than unaided students. Even more troubling, the effect carried over to when they later wrote an essay on their own. That’s quite concerning, of course.
But instead of taking this as proof that AI fatally weakens our intellectual capabilities, we might also see it as an opportunity to reclaim a powerful and needful skill: helping students learn to think through thoughtful, open dialogue. After all, most students are rarely going to write essays and reports after they graduate. But we hope they will all be able to engage in constructive conversations that broaden and deepen their thinking.
AI can be a powerful, even game-changing tool in this pursuit. As many of us have learned, large language models such as ChatGPT, Gemini, and Claude are not simply vending machines for answers. You can “talk” with them, drill down on a topic, challenge them, or expand a “conversation” in unexpected directions.
As a small example, here’s the beginning of a conversation I had with Claude, based on my prompt asking it to assume I’m a college freshman and to challenge my ideas. At the end, I had Claude compress it for this brief article:
STUDENT: We did the Ship of Theseus in class today. You replace all the planks, and then everyone argues about whether it’s the same ship. I honestly don’t see why that’s supposed to be a big deal. It just depends on how you’re using the word “same.”
LLM: That’s reasonable — maybe it is just semantic. Though, are you the same person you were when you were five? Your cells have mostly been replaced.
STUDENT: I’m the same person in a different body.
LLM: So, what stays the same — the person part? Your memories and personality have changed pretty dramatically, too.
STUDENT: Yeah, but they’re not planks you swap out, like with the ship. My new memories build on my old ones.
LLM: Oh, that’s good. So, it’s about developmental continuity, not just continuity? That makes me reconsider the ship — planks don’t grow together. But what about a river? New water constantly flows through, yet it’s the same river.
STUDENT: The river’s defined by its shape, not its water.
LLM: Right — so for rivers it’s structure, for persons it’s developmental continuity. What about nations? Is America the same country as it was in 1800?
STUDENT: Probably developmental continuity, but that’s my point: Sameness is just definitional. Who cares?
LLM: But you rejected some definitions as wrong. Maybe we’re not choosing randomly — maybe what we mean by sameness tracks with what actually matters? Like, it matters whether someone who committed a crime decades ago is “the same person” we punish today.
This is not an AI writing an essay for the student. It’s a quick example of a student thinking through an idea in conversation — probing, being challenged, defending their point of view, and perhaps heading toward a reevaluation of their argument. Engaging this way is a fundamentally intellectual activity, hard to teach because it requires the one-on-one attention that few teachers can afford under today’s stresses. But AI can offer students individualized attention at scale, helping them become critical and constructive thinkers by engaging them in respectful, probing dialogues that aim to discover and test ideas, as we did in the ancient roots of Western thought.
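For teachers or students who want to run this kind of exercise programmatically rather than in a chat window, the exchange above maps directly onto the alternating message lists that chat-style LLM APIs expect. Here is a minimal sketch; the system prompt and helper names are illustrative assumptions, not the author's actual setup, and the history it builds is what you would send to the model each turn:

```python
# Sketch: a Socratic-tutor conversation kept as an alternating message list,
# the shape most chat LLM APIs (OpenAI, Anthropic, etc.) expect.
# The system prompt below is a guess at the author's setup, not his wording.

SYSTEM_PROMPT = (
    "Assume I am a college freshman. Do not lecture or hand me answers. "
    "Challenge my ideas with short, probing questions, one at a time."
)

def start_dialogue():
    """Return a fresh, empty conversation history."""
    return []

def add_turn(history, role, text):
    """Append a turn, enforcing the user/assistant alternation chat APIs require."""
    if history and history[-1]["role"] == role:
        raise ValueError(f"two consecutive '{role}' turns")
    history.append({"role": role, "content": text})
    return history

# Rebuild the opening of the Ship of Theseus exchange from the article.
history = start_dialogue()
add_turn(history, "user",
         "We did the Ship of Theseus in class today. Isn't 'sameness' "
         "just a matter of how you use the word?")
add_turn(history, "assistant",
         "Maybe. But are you the same person you were at five? "
         "Your cells have mostly been replaced.")
add_turn(history, "user", "I'm the same person in a different body.")

# `history`, sent along with SYSTEM_PROMPT, is the payload for the next
# model call; append the model's reply as an "assistant" turn and repeat.
```

The alternation check matters in practice: most chat APIs reject histories with two consecutive turns from the same role, and the constraint also nudges the student to actually respond to each challenge before piling on new claims.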
When I was a philosophy graduate student, we got one full hour of teacher training. Despite the session’s ludicrous brevity, one line has stuck with me for many decades: Don’t teach the course you would have wanted as an undergraduate. (Thank you, Professor Frank Cunningham.) In other words, we in the room were the one percent who went on to study philosophy as graduate students, and as teachers, we needed to serve the needs and interests of the 99 percent who would not.
Similarly, writing essays won’t be a part of most students’ work once they graduate. This isn’t to say that students shouldn’t write any essays while in school. After all, writing essays also teaches students how to read them: how to follow an argument’s flow, listen with sympathy, evaluate evidence, and challenge ideas. Clearly, those skills are crucial for learning how to think, but they are not the only skills needed for post-collegiate success. Real-time conversations require listening, finding points of common interest, flexing and diverging as appropriate, and following a complex set of social rules for respectful interaction. (Rupert Wegerif has written brilliantly about this as a pedagogy.)
In fact, this is precisely why Socrates was so skeptical of the written word: In the “Phaedrus,” he lists three drawbacks of writing: It weakens our memory; it gives the same fixed text to everyone rather than adapting to the individual reader; and you can’t ask a written document a question. You don’t have to deny the importance of literacy to see the merit of his critiques: Certainly, outsourcing memory to documents has diminished our memory. (Remember when the local bards recited thousands of lines from “The Iliad” as we listened, enthralled? Me neither.)
But learning to converse about ideas with AI can overcome Socrates’ second and third criticisms. First, we’re on the verge of language models responding in ways closely tailored to the particularities of each learner’s interests, knowledge level, and conversational style. Second, unlike a static text, you can ask questions of an LLM, and it will answer, often uncannily well (and sometimes wildly incorrectly, which students must be taught how to detect).
There’s also been an epochal change since Socrates’ time. He believed the reasoning exercised in a good conversation could be applied to any topic; that’s an implication of his claim that he knew nothing yet could engage in worthwhile conversations with generals about courage, politicians about virtue, religious experts about piety, and so on. Today, topical knowledge matters more than ever, which can lead to the fragmentation of understanding into separate fields, each requiring deep expertise. But AI enables us to pursue an idea from history to physics to literature to pop culture to feminist ethics, and beyond, all in one sitting. That sort of freedom unleashes curiosity, which is itself a pedagogical good. So is letting students experience the interconnectedness of all knowledge — a reflection of the world’s own callous disregard of disciplinary boundaries.
None of this means AI is a substitute for human teachers or that its answers should be trusted uncritically. On the contrary, AI’s errors and biases make it useful for teaching critical skills: Students must learn how to push back, question sources, and test claims. And teaching them to do this in a conversation with no social stakes can make students more fearless in their critiques.
In human-to-human dialogues, there are stakes — for one thing, developing social and community bonds — so AI dialogues should not be presented as a replacement for human-to-human conversations or human mentorship. That distinction should be made explicit in the class itself.
The teacher remains essential to this project. When students turn in transcripts of an AI conversation instead of an essay, the teacher would grade it, commenting on the dialogue’s flow and process, as well as its substance. What did the student probe and why? Where were they too trusting? What assumptions went unchallenged? What seems dubious in what the AI asserts so confidently? The teacher can also lead classroom discussions of the process by which we can discover and test ideas conversationally.
Crucially, students should be allowed to discuss one another’s AI dialogues, identify their strengths and weaknesses, and continue the collaborative thought process in fully human spaces. This would turn the exercise into an occasion for human-to-human interaction, which is ultimately its aim.
Used this way, AI is not a plagiarism machine but a tool for developing ideas by learning to explore them in conversation. At a moment when thoughtful dialogue too often gives way to dismissal or disdain, learning to develop and critique ideas via conversation is a crucial skill to cultivate. Now more than ever.
David Weinberger, PhD, is an author, philosopher, and researcher at Harvard’s metaLAB, with a 20-year affiliation with Harvard’s Berkman Klein Center. He is the editor of MIT Press’s open-access book series, “Strong Ideas,” and has published widely in articles and multiple books. His forthcoming book, “Beautiful Particulars,” will be published in October by MIT Press.
“I had a teacher that didn’t like me and I didn’t like him. At the end of the year he decided to fail me. The ironic thing is that the topic was chemistry. I have the distinction of being the only chemistry laureate who failed the topic in high school!” – Tomas Lindahl
🚨ALERT: In explosive testimony before the Senate Foreign Relations Committee, Secretary of State Marco Rubio dropped a bombshell that has Cuba on high alert. "I think we would love to see the regime there change ... it would be of great benefit to the United States," Rubio stated.
“£72,000” — @vicderbyshire speaks to Dr Arthur Joustra, a trainee paediatrician, about how his student loan debt has risen since he graduated in 2022, despite making payments. #Newsnight
Xi Jinping has spent three years gutting his own military leadership. Five of the seven members of the Central Military Commission – China’s supreme military authority – have been purged since 2023, all of whom were handpicked by Xi himself back in 2022. But if anyone seemed safe from the carnage, it was Zhang Youxia.
Zhang wasn’t just China’s most senior uniformed military officer. He was a fellow “princeling” whose father fought alongside Xi’s in the revolution, a combat-tested general who distinguished himself in the 1979 war with Vietnam, and someone who had backed Xi since he first rose to power in 2012.
And yet, on Jan. 24, Beijing announced that Zhang and another CMC member, Liu Zhenli, were under investigation for “suspected serious discipline and law violations.” In practical terms, that means detention and dismissal. Official PLA media accused them of causing “damage to combat capability construction,” a phrase suggesting problems that go well beyond ordinary corruption. Wild rumors have swirled about what really happened. Some online chatter ties Zhang to a foiled coup attempt against Xi; a Wall Street Journal report suggests he leaked nuclear secrets to the United States. Color me skeptical of both.
What’s more likely, and plenty alarming in its own right, is that this reflects Xi’s deepening paranoia and mistrust. Zhang had accumulated significant power simply by surviving the earlier purges, and his combat pedigree, reputation for competence, and princeling status made him a potential rival in Xi’s eyes. By removing him, Xi blocks the formation of an alternative power center that might one day defy his rule and sends an unmistakable message to every party leader, military commander, and provincial official in China: whatever authority you hold is delegated, not owned. Loyalty guarantees nothing; no one is ever truly safe. Once that lesson sinks in, officials become consumed not with whether they’ve done anything wrong but with whether they might be next. Taking initiative becomes riskier than doing nothing. The ambitious learn to keep their heads down; the cautious get promoted. Honest information stops flowing upward. Problems don’t get flagged until they’re crises.
This is Maximum Xi – the logical endpoint of a system where all power flows to a single leader with no checks, no dissent, and no trusted counsel. We’ve witnessed this dynamic since Xi emerged from China’s 20th Party Congress in 2022 with a grip on power unrivaled since Mao Zedong. Back then, the concern was that stacking the Politburo Standing Committee with loyalists would leave Xi virtually unfettered and unconstrained in his ability to make big mistakes. Arbitrary decisions, policy volatility, elevated uncertainty – these are the consequences of concentrating so much authority in one man’s hands. Three years later, we’re watching that play out in the most dramatic arena possible – the military command structure itself.
The only people left on the CMC now are Xi and his anti-corruption enforcer, Zhang Shengmin. The chairman commands an army of two million with what is essentially a committee of one. Some will read this as proof of Xi’s iron grip, and there’s certainly truth to that. But control and power aren’t the same thing. Xi exercises more direct control over the People’s Liberation Army than at any point in his tenure, but a hollowed-out CMC means a less effective military in the near term – even if the purges are intended to produce a stronger, more disciplined fighting force over time.
The CMC’s five vacancies will likely remain open until the 21st Party Congress in October 2027, with Xi elevating trusted service chiefs or theater commanders to acting roles in the meantime. He’ll screen these candidates primarily for loyalty and disciplinary records rather than competence or warfighting ability, heightening operational risks. Zhang was one of the few active Chinese generals with actual combat experience, so his loss will be felt particularly hard. His removal creates a vacuum of reliable counsel at the very top of the command structure. Orders will be more difficult to transmit clearly down the chain of command. Readiness for complex joint operations – the kind required for, say, an amphibious invasion of Taiwan – will suffer.
The silver lining is that this turbulence pushes back the timeline for an already-unlikely invasion of Taiwan. Beijing has always viewed this option as a last resort: the risk of US military intervention remains too high, the economic costs would be staggering, and Chinese leaders still believe unification can be achieved without a direct conflict. Xi has reason to feel comfortable about the near-term trajectory: President Donald Trump is much more personally invested in bilateral stability than in Taiwan, President William Lai is in political trouble at home, and the opposition KMT is making favorable noises about Beijing ahead of Taiwan’s 2028 elections. And China is playing a long game, betting that as it catches up militarily and becomes economically and technologically self-reliant, unification can be achieved at a lower cost in the future. Why risk everything on an invasion today when time is on your side?
Against that backdrop, losing two senior generals with real warfighting experience pushes the invasion timeline back further still. The purges themselves clearly signal Xi’s lack of confidence in his military right now. Leaders don’t gut their command structures when they’re feeling bold, suggesting a reduced appetite for large-scale kinetic moves, at least until he fills the CMC vacancies with vetted loyalists. And even then, Xi will be asking generals he barely knows, men who watched him throw all their former bosses in jail, to advise him on the biggest military decision China has faced since splitting with the Soviet Union – hardly a recipe for candid counsel.
This doesn’t mean Taiwan can breathe easy. Xi remains set on unification by any means necessary. Beijing will continue favoring gray-zone coercion to chip away at the status quo and deter Taipei (and Washington) from moving toward independence. China will likely conduct at least two major military exercises in 2026 and work to isolate Taiwan internationally.
Perhaps most importantly, Maximum Xi cuts both ways. A degraded command structure makes the PLA less capable today, but it also means Xi is less likely to hear that from anyone around him. If he ever convinces himself the PLA is ready to pull off an invasion when it isn’t – or feels cornered enough to act – there will be fewer voices in the room to stop him. When no one dares tell the emperor he has no clothes, the risk of miscalculation rises. And if Xi eventually succeeds at forging a more disciplined, loyal military, Taiwan will face a far more dangerous threat down the road.
The more power Xi accumulates, the more brittle the system becomes. The fewer people who can tell him he’s wrong, the more likely he is to be wrong. And the tighter his grip on the military, the less capable that military may be when it actually matters … or so we should hope.
A commentary published in the Journal of Neurotrauma calls for traumatic brain injury to be recognized as a chronic condition, as diabetes, asthma, depression, and heart failure already are.
To provide comprehensive care for traumatic brain injury throughout individuals’ lifespans, the authors propose adapting coordinated care models that they and others have developed, tested, and applied to various populations, including older adults, individuals living with depression, and post-intensive care unit survivors. These models would improve communication and integration between brain injury specialists, including physical medicine and rehabilitation clinicians, and primary care physicians, fostering better long-term care for traumatic brain injury survivors and more support for both patients and their families.
The incidence of traumatic brain injury (frequently referred to as TBI) is rising in the U.S., with an estimated 2.8 million Americans sustaining a traumatic brain injury each year. The health care system and insurance coverage predominantly focus on the acute phase of the condition, typically the first year after the injury is sustained. But for a significant number of those living with a traumatic brain injury, the effects persist for a lifetime. Post-acute care options, however, are often bleak.
Traumatic brain injury is most commonly caused by falls (especially in older adults), blunt trauma (particularly sports-related injuries), firearms, motor vehicles, or assaults. Damage can be confined to one area of the brain or occur over a more widespread area. Many factors, including the size, severity, and location of the brain injury as well as the age of the individual and any prior brain injuries, affect recovery.
According to the Centers for Disease Control and Prevention, populations more likely to be affected by traumatic brain injury include:
racial and ethnic minorities
service members and veterans
people who experience homelessness
people who are in correctional and detention facilities
survivors of intimate partner violence
people living in rural areas
The commentary authors observe that many clinicians believe that residual impairments due to traumatic brain injury are static once initial recovery has plateaued and do not expect significant changes over the remainder of a person’s life. In contrast, the commentary authors write that the long-term course of traumatic brain injury involves waxing and waning and thus could be better characterized as dynamic rather than static. Accordingly, they call for traumatic brain injury to be considered and managed as a chronic condition.
“Acknowledging traumatic brain injury as a chronic condition and providing coordinated care will make a difference to patients, their families and to how the health care system operates in several ways. It recognizes that for many patients it’s not over after 12 months and, importantly, provides these patients with a place of care where they can be monitored and helped over many years,” said commentary senior author Kurt Kroenke, M.D., of the Regenstrief Institute and the Indiana University School of Medicine.
“Recognizing TBI as a chronic condition and using coordinated care models support the important role of self-management for both the patient and their family. These two steps facilitate collaboration between the limited number of brain care specialists and the primary care clinicians who typically oversee care throughout the TBI survivor’s lifetime.
“Collaborative care models, which we at Regenstrief Institute and others have developed, provide patients living with chronic conditions with the support and coordinated care they need. Medicare recently recognized TBI as one of 18 chronic conditions; hopefully, others will follow.”
Dr. Kroenke notes that the chronic condition designation by Medicare opens the door to change in how traumatic brain injury is managed long-term and may guide health care systems to better integrate brain injury specialists—who are in short supply—with primary care providers through collaborative care models. It may also encourage health insurers to provide coverage for the many years of care needed by some survivors.
“We can improve life after brain injury with a more proactive approach and a longer-term view of brain injury as a chronic and dynamic condition. This approach anticipates changes over time and incorporates strategies to optimize healthy living with coordinated care that is individualized for the lifetime needs of those living with brain injury,” said commentary co-author Flora M. Hammond, M.D., Covalt Professor of Physical Medicine and Rehabilitation and Chair of the Department of Physical Medicine and Rehabilitation at the IU School of Medicine.
“Recognition of TBI as a chronic condition would not only focus more resources on problems associated with living with brain injury but would also enhance both the public’s and professionals’ awareness of how to optimize the health and well-being of persons living with the effects of TBI,” the commentary concludes.
Publication details
John D. Corrigan et al, Recognition of Traumatic Brain Injury as a Chronic Condition: A Commentary, Journal of Neurotrauma (2024). DOI: 10.1089/neu.2024.0356
Tesla CEO and billionaire Elon Musk has long garnered a reputation for being massively wrong in his promises and predictions about the future.
In 2024, for instance, he said that AI would become “smarter than the smartest human” by 2025. He said his company SpaceX’s Starship rocket, which is still exploding during test flights, will land on Mars this year. Like clockwork, he’s predicted that self-driving cars will become a reality “next year” every year for well over a decade now. He promised robotaxis without human safety monitors by mid-2025, which the company still has yet to accomplish.
We could go on and on. In short, it’d be far easier to count the occasions on which he’s been right than those on which he’s been wrong.
Now, as NBC News reports, users on online prediction markets like Kalshi and Polymarket are making big bucks off of Musk’s astonishing track record for being wrong about the future. Case in point: Polymarket user David Bensoussan made a ten percent return after betting $10,000 that Musk wouldn’t follow through on his threat of forming a new political party following his falling out with President Donald Trump.
He also successfully bet against Musk’s prediction that Tesla would launch an “unsupervised” version of its erroneously named “Full Self-Driving” software by the end of 2025.
To Bensoussan, it’s a matter of principle.
“He does have a solid fan base, and so if I can help separate them from some of their money, I’m always happy to do that,” he told NBC. “He has a habit of exaggerating timelines, and of saying he’s going to do these amazing things and attaching more immediacy than what his intent may necessarily be.”
In total, Bensoussan’s ploy to bet against Musk’s wildly optimistic predictions has paid off, garnering him over $36,000.
Online prediction markets have surged in popularity in recent years. While benefitting from a near-total regulatory vacuum, sites like Polymarket and Kalshi have also become embroiled in major insider trading controversies. Earlier this year, for instance, an anonymous bettor netted a cool $410,000 after accurately predicting the Trump administration’s attacks on Venezuela mere hours before Trump issued orders authorizing the strikes.
Even Musk himself has frequently voiced his support for prediction markets. His AI company, xAI, has deep integrations with both Kalshi and Polymarket.
But as it turns out, actively betting against his many attempts to reassure investors that a major technological revolution is just around the corner (a gambit that has largely worked out for him in the past) could be pretty lucrative.
As for current predictions, bettors on the online platforms aren’t exactly eager to stand behind his latest prophecies. For instance, only 13 percent of Kalshi users believe Musk will make good on his recent threat to buy the airline Ryanair by the end of this year. The airline’s CEO, Michael O’Leary, had questioned the economics of adding SpaceX Starlink internet to airliners, resulting in petty name-calling.
When Tesla began a limited robotaxi rollout, markets predictably rose, with odds tripling to about 40 percent. However, as the rules stipulated, a robotaxi service would only count if it was open to the general public, something that only materialized well after July 2025, leaving plenty of users “burned.”
I’m a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.