My current research is really about understanding long-term effects of stress and really entertaining the question of why we are so transformed when we undergo traumatic experiences and why the effects can linger for so long. We are looking at a variety of hormonal and molecular mechanisms to try to help us understand that.
I’m also interested in treatment of PTSD. And really what to do about the fact that so many people have effects of traumatic experiences that trouble them. They feel haunted by their traumatic experiences or they feel really stuck because of things that have happened to them in the past.
So what’s the best way to get unstuck, and what’s the best way to move forward following trauma exposure? Those are the things that my colleagues and I are studying. I’ve dedicated my career to studying the effects of trauma because trauma exposure seems to be everywhere. And increasingly,
the more we seem to be learning about the effects of trauma, in some ways, the more stuck we’re getting as a society. We see it as sort of an insurmountable burden or barrier when, in fact, there’s always been stress and there’s always been trauma.
And truly, there must also be a way to go forward in the face of trauma and use the lessons of trauma to achieve resilience and post-traumatic growth. I think what’s really helpful is to make the distinction between the experience of stress and the experience of trauma. We experience both on a fairly regular basis, according to statistics. Many people see them as a kind of continuum, with stress being a less serious version of trauma, and trauma at the other end of the spectrum.
And that’s not entirely wrong. A stressful event is something that is challenging to you in the moment. It could be trouble at work, trouble in interpersonal relationships. It could be an illness, or coping with any number of things. When most people talk about a traumatic experience,
they’re talking more in the order of life threat, interpersonal violence, childhood abuse, combat, being in a natural disaster. So clearly there is a range of challenging events with trauma being at the other end of the spectrum. But the differences go even deeper than that because when we talk about a stressful situation,
A PTSD researcher explains MDMA-assisted therapy
This isn’t a trip, it’s the most challenging therapy session of your life
Trauma doesn’t end when the danger does, and for decades, science couldn’t explain why. Rachel Yehuda, a leading PTSD researcher, has spent her career inside that question, uncovering the way that trauma can leave impressions on our genes, sometimes passing biological echoes of those events to the next generation.
Now, she’s focused on MDMA therapy, which could actually break the chain.
Timestamps
00:20 Chapter 1: Why trauma sticks
03:05 Stress vs. trauma: what’s the difference?
05:55 Why most people don’t develop PTSD
08:37 Chapter 2: How MDMA-assisted therapy can break the loop
09:19 How trauma warps self-perception
12:40 MDMA-assisted therapy explained
16:38 How societal narratives shape recovery (or worsen it)
23:04 The reality of psychedelic therapy (not a quick fix)
28:55 Chapter 3: Healing can echo across generations
30:48 Epigenetics explained
40:00 Can healing be passed on too?
43:43 PTSD beyond fear: guilt, shame, and trauma
47:01 What real healing looks like
Anthropic’s new AI model, Mythos, is causing a stir among cybersecurity experts and policymakers. The company says its new model is so skilled at finding and exploiting software vulnerabilities that it’s too dangerous to release. Instead, it is limiting access to a small group of major technology companies whose software is the foundation for many other digital services, hoping to give defenders time to strengthen their systems.
Anthropic is not the only AI lab producing models with these kinds of capabilities, or considering similar release strategies to try to ensure cyber defenders have access to these systems before hackers do. OpenAI is reportedly preparing a new model—internally known as “Spud”—that could match Mythos in cybersecurity capabilities. According to a report from Axios, the company is also working on an advanced cybersecurity-focused system that it plans to release in a phased rollout to a small group of partners, again to try to give defenders a head start.
Some analysts have dismissed these cautious, limited releases as more about marketing and creating hype around new models, rather than purely safety-driven decisions. But most agree that AI-driven cyber capabilities have reached a dangerous tipping point. Even without the powerful new model, they say, existing, publicly available AI models can already carry out sophisticated cyberattacks—sometimes in minutes.
Researchers are concerned about both the scale and accessibility of AI‑enabled attacks. Tasks that once required advanced expertise—like scanning code for vulnerabilities or running attacks that require chaining multiple exploits together—are increasingly being automated or semiautomated by AI systems. Attackers, even those lacking high-level technical skills, can now launch highly automated, coordinated attacks across thousands of systems at once.
In practical terms, that raises questions both for enterprises and policymakers about how to protect critical infrastructure in a world where these advanced AI capabilities will soon be in the hands of bad actors and hostile nation states. Unless government and industry harden defenses, the world could see a wave of devastating cyberattacks taking down banking systems, power grids, hospitals, or water systems. It is exactly such a nightmare scenario that Anthropic says it is hoping to head off by limiting Mythos’s release.
What is not clear, however, some researchers say, is how much the new models increase the chances of this kind of cyber-Armageddon. But the reason for their skepticism is not reassuring: they say that much of what Mythos can do may already be possible with smaller, cheaper, openly available models.
Recent research from AI security firm AISLE suggests that several of the vulnerabilities Anthropic highlighted in its announcement—including decades-old bugs—could have been detected by openly available models that anyone can download and run for free.
There are a couple of caveats: Rather than simply pointing an AI model at an entire software application or a complete codebase and asking the AI model to find a way to hack it—as Anthropic appears to have done with Mythos—the AISLE researchers already knew which segments of code contained the bugs and fed the models these code chunks. Smaller models generally have narrower context windows, meaning they can’t take in an entire large codebase at once. But it is possible to imagine a pipeline in which a large codebase is broken into smaller pieces, each of which is fed in turn to a small AI model, allowing it to examine each segment for possible exploits, experts said.
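To make that pipeline concrete, here is a minimal sketch of the chunk-and-scan approach the experts describe, assuming an Ollama-style local inference server. The endpoint, model name, chunk size, and file filter are illustrative placeholders, not details from AISLE’s actual setup.

```python
# Minimal sketch: split a codebase into pieces small enough for a small
# model's context window, then ask the model to review each piece for
# vulnerabilities. Endpoint and model name are assumptions (Ollama-style
# local server); any openly available model with a completion API would
# slot in the same way.
import os
import requests

ENDPOINT = "http://localhost:11434/api/generate"  # assumed local inference server
MODEL = "an-open-model"                            # placeholder model name
CHUNK_LINES = 200                                  # sized to fit a small context window

def iter_chunks(root):
    """Yield (path, start_line, code) chunks from source files under root."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".c", ".py", ".js")):  # illustrative filter
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                lines = f.readlines()
            for start in range(0, len(lines), CHUNK_LINES):
                yield path, start + 1, "".join(lines[start:start + CHUNK_LINES])

def review_chunk(path, start, code):
    """Ask the model to review one chunk for vulnerabilities."""
    prompt = (
        f"Review this code from {path} (starting at line {start}) "
        f"for security vulnerabilities. Be specific about any you find.\n\n{code}"
    )
    resp = requests.post(
        ENDPOINT,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    for path, start, code in iter_chunks("./my_codebase"):
        print(f"--- {path}:{start} ---")
        print(review_chunk(path, start, code))
```

The point of the sketch is that a narrow context window constrains what a small model can see at once, not what a looping script can feed it over time.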
According to Spencer Whitman, chief product officer at AI security firm Gray Swan, the hard part of what researchers achieved with Mythos was autonomously finding the vulnerabilities within large codebases and then testing those exploits. “Finding vulnerabilities is hard because it requires locating weak points buried within millions of lines of code and verifying that these targets result in a real exploit,” he told Fortune. “Mythos claims it autonomously completed both steps.
“The fact that some of these vulnerabilities sat undetected in codebases for decades underscores just how hard the first step actually is—and why automating it is significant,” he added.
Smaller models may be able to achieve comparable results to Mythos, according to Charlie Eriksen, a security researcher at Aikido Security, but they require more technical skill, careful prompting, and better-designed tooling to get there. Models like Mythos, however, may make it considerably easier for even those with less technical skill to carry out sophisticated and devastating cyberattacks.
“This technology is moving so fast that it’s naive to assume others aren’t able to easily replicate similar results, if not already, at least very soon,” he said. “Anybody with a computer can develop very powerful offensive cyber capabilities in a short amount of time, without needing a lot of expertise in cybersecurity.”
A concentration of power
Anthropic’s decision to limit Mythos’s release is also putting unusual power in the hands of a single company. Even though Anthropic says it is consulting with the U.S. government on Mythos’s capabilities and the vulnerabilities it is uncovering (and there are calls for it to work with other allied governments, too), the company is effectively deciding who gets access to one of the most advanced cyber capabilities ever developed.
Some security experts and software developers—especially those committed to open-source software, that is, publicly accessible and often usable for free—argue the world would be safer if Mythos were released so that every defender, not just Anthropic’s chosen partners, could use it to find and patch vulnerabilities.
“Whatever the right judgment call is, the most striking aspect of this situation is how reliant we are on the judgment of a handful of private actors who aren’t accountable to the public,” said Jonathan Iwry, a fellow at the Wharton Accountable AI Lab.
Anthropic did loop in the government early. According to reporting from Axios, the company actively warned U.S. government officials about a new, powerful model that significantly increased the risk of cyberattacks at least a month ago. Anthropic, in a blog post announcing Project Glasswing, later said that briefing the government on what the model could do, where the risks were, and how it was managing them was a “priority from the start.”
Despite these efforts, there’s also a growing “governance gap,” according to Hamza Chaudhry, AI and national security lead at the Future of Life Institute. These systems are being integrated into offensive cyber operations faster than policymakers can build the frameworks to govern how these capabilities are used or secured. In the past, even cyber capabilities developed by and for the use of government, particularly hacking tools developed by the U.S. National Security Agency, have ended up in the hands of bad actors.
For example, in 2016, a hacking group called the Shadow Brokers published a cache of hacking tools and exploits used against major software systems—including Microsoft Windows—that were widely believed to have been developed by the NSA. Some of the leaked NSA exploit code was later used in WannaCry, while NotPetya also relied on the NSA-linked EternalBlue exploit, helping make both attacks among the most damaging in recent history.
The cyber abilities of AI models such as Mythos pose completely new governance challenges, too. With previous hacking tools, a human had to deliberately choose to deploy those exploits. But, according to Anthropic, in safety tests, Mythos would sometimes use its hacking abilities to accomplish some other goal in ways that surprised its creators.
The safety issue is often not the AI model’s coding skills, per se, but its autonomous capabilities, Chaudhry said. As AI systems become more agentic, they are able to set sub-goals, adapt their approach, and continue operating without direct human instruction at every step. The concern is that an AI system might pursue an objective in ways that extend beyond what its operator explicitly intended.
“The agent … pursues its objective function through whatever pathways its intelligence and autonomy identify as optimal,” he said. “An adversary state or non-state actor deploying an autonomous AI agent … is no longer directing actions so much as initiating a process whose specific trajectory they cannot fully predict.”
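To illustrate the structural point Chaudhry is making, here is a deliberately simplified sketch of a generic agent loop. The function names, canned plan, and demo tools are invented for illustration and do not reflect any vendor’s actual agent framework; what it shows is that the operator supplies only the top-level objective, while the model chooses each sub-goal and action, so the trajectory is not scripted in advance.

```python
# Abstract sketch of an agent loop: the operator sets an objective once;
# the model (stubbed here) decides every intermediate action and adapts
# to each result. All names and the canned plan are illustrative.
from typing import Callable

def propose_next_action(objective: str, history: list[str]) -> str:
    """Stand-in for an LLM call that picks the next action given the
    objective and everything tried so far."""
    # Canned two-step plan just so the loop runs end to end.
    plan = ["search:latest CVE advisories", "summarize:findings", "DONE"]
    return plan[min(len(history), len(plan) - 1)]

def run_agent(objective: str, tools: dict[str, Callable[[str], str]],
              max_steps: int = 20) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        action = propose_next_action(objective, history)  # model decides, not operator
        if action == "DONE":
            break
        name, _, arg = action.partition(":")
        result = tools.get(name, lambda a: f"[no such tool: {name}]")(arg)
        history.append(f"{action} -> {result}")  # agent adapts to each result
    return history

if __name__ == "__main__":
    demo_tools = {
        "search": lambda q: f"3 results for '{q}'",
        "summarize": lambda t: "summary of findings",
    }
    for step in run_agent("monitor our attack surface", demo_tools):
        print(step)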
What enterprises should do
Whether companies have access to Mythos or not, experts say those not currently using AI to secure their systems may already be falling behind. Even with Anthropic limiting widespread access to its new models, AI-driven offensive capabilities are out there in less powerful forms, for those who know how to use them.
Most security teams operate on the assumption that time is somewhat on their side—that there’s at least a gap between a vulnerability existing and an attacker finding it, and another gap between finding it and being able to use it. For most of recent history, that was roughly true. But advanced AI models are collapsing both gaps at once, according to Emanuel Salmona, cofounder and CEO of Nagomi Security.
“Mythos found critical vulnerabilities across every major operating system and browser—some of them decades old—in weeks,” he said. “When that capability is broadly available, and Anthropic’s own people are saying six to 18 months, the organizations that were already behind [on security] don’t just fall further back. The model they built their programs around stops working entirely.”
Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She’s based in Fortune’s London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08
Possessing or publishing pornography depicting incest or adults pretending to be children to be criminalised.
Tech execs could also be held liable to imprisonment or fines if their platforms fail to remove people’s nonconsensual intimate images.
Part of this government’s commitment to tackle abusive online content and halve violence against women and girls.
The public will be better protected from vile online pornography under new Government amendments tabled to the Crime and Policing Bill today (Friday, April 10).
Building on the Government’s work to tackle violence against women and girls, pornography depicting illegal sexual conduct between family members and adults roleplaying as children will be criminalised.
The first of these vital measures will ban anyone from possessing or publishing harmful pornography that shows incest between family members, and sex between step or foster relations where one person is pretending to be under 18.
A further amendment will criminalise the publication and possession of pornography where an adult is roleplaying as a child.
Delivered with the support of many brave campaigners and advocates for change, both amendments recognise that this revolting online content risks normalising child sexual abuse. Each new offence comes with a maximum penalty of five years’ imprisonment for publication.
These crucial changes are the latest measures from the Government to halve violence against women and girls in a decade and protect children from the scourge of sexual abuse.
Minister for Victims and Tackling Violence Against Women and Girls Alex Davies-Jones said:
I’ve sadly heard far too many devastating stories from victims and I know we need change now. Tackling violence against women and girls within a decade will take every single one of us.
We have been clear that vile online pornography has real-life consequences for all of us and I’d like to thank every brave campaigner who has worked with us to deliver this vital step. Today we are sending a powerful message: we will stamp out misogynistic and harmful content online and create a safer world.
Today, the Government also tabled an amendment meaning tech execs could be held personally liable if platforms fail to comply with Ofcom’s enforcement decisions to remove people’s intimate images that have been shared without consent.
This would mean senior execs who commit the offence without a reasonable excuse could be liable to imprisonment or a fine, or both.
Online Safety Minister Kanishka Narayan said:
Too many women have endured the distress of having intimate images shared online without their consent.
Today we are tabling measures to hold tech bosses responsible for preventing this abuse. If they fail to remove these images they could face jail or substantial fines. This is not an optional requirement, it’s a duty that every tech leader must take seriously.
Baroness Bertin, Lead reviewer of the Independent Pornography Review, said:
I greatly welcome the Government’s plans to fully address harmful pornographic content such as incest, step-incest and the mimicking of child sexual abuse. This content that is freely and widely available online is deeply harmful, normalising child sexual abuse and abusive relationships within families.
My review on pornography is clear that the production and dissemination of this content should be made illegal, which is why I laid amendments to the Crime and Policing Bill to criminalise this type of content.
My review is also clear that the people who appear in pornography are at a high risk of exploitation, coercion and violence, which is why the onus should be on pornography platforms to ensure that everyone who appears in pornographic content is over the age of 18 and has given their consent, and that they have the right to withdraw that consent.
Given the ongoing impact of pornography on violence against women and children, I and others across the political divide have worked hard to ensure that this issue has remained in the spotlight.
Today the Government has answered our calls for change, and I am delighted that once again the UK is leading the way on regulating this high harm industry. I commend the Government for their work in this area and look forward to continuing to work with them as we make the online and offline world safer for everyone.
Today’s amendment follows the Government’s ban on depictions of strangulation in pornography and its ban on harmful and degrading nudification apps.
The move follows a series of key changes designed to protect both child and adult victims from violence and abuse.
Repealing the presumption of parental involvement
Restricting parental responsibility for people who have been convicted of a serious sexual offence against any child, and in cases where a child is born of rape.
Launching a new and improved Victims Code, which will include the very first child-friendly version.
Investing over £1 billion as part of its mission to fight violence against women and girls, with £550 million for victims’ services.
Further information:
Further details on the amendments will be set out in due course.
Publication of pornography depicting adults roleplaying as children or depicting incest will carry a maximum sentence of five years.
Possession of pornography depicting adults roleplaying as children will carry a maximum sentence of three years. Possession of pornography depicting incest will carry a maximum sentence of two years.