New AI models empower hackers
Illustration: Brendan Lynch/Axios
Top AI and government officials tell Axios CEO Jim VandeHei that Anthropic, OpenAI and other tech giants will soon release new models that are scary good at hacking sophisticated systems at scale.

The one to watch: Anthropic is privately warning top government officials that its not-yet-released model — currently branded "Mythos" — makes large-scale cyberattacks much more likely in 2026. The model allows agents to work on their own, with wild sophistication and precision, to penetrate corporate, government and municipal systems. It's a hacker's dream weapon.

Jim reveals in his new weekly newsletter for CEOs that one source briefed on the coming models says a large-scale attack could hit this year. Businesses are ripe targets. (C-suite only: Request beta of Jim's newsletter.)

Fortune got its hands on an unpublished Anthropic blog post describing Mythos. The post said the model is "currently far ahead of any other AI model in cyber capabilities." It adds that Mythos "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." So the threat is no longer theoretical, and it will be exacerbated by employees testing agents without realizing they're making it easier for cybercriminals to hack their company.

Here's why this is different: The new models are even better at powering agents that think, act, reason and improvise on their own, without rest, pause or limitation. Think of a warehouse full of the most sophisticated criminals who never sleep, learn on the fly and persist until successful — except the warehouse is infinite. Bad actors can now scale simply with more compute; they aren't limited by finite personnel. A single person can run campaigns that once required entire teams.

At the same time, systems are more vulnerable because so many employees are firing up Claude, Copilot or other agentic models — often at home — and creating agents of their own. They often connect those agents to internal work systems unwittingly, opening a new door for cybercriminals to enter. The industry has a name for this: "shadow AI." A Dark Reading poll found that 48% of cybersecurity professionals now rank agentic AI as the No. 1 attack vector for 2026 — above deepfakes, above everything else.

The bottom line: Everyone working at every company in America needs to know right now the dangers of using agents, especially unsupervised, anywhere near sensitive information.