Time to explore the 1% of people who reach a level of awareness. Carl Jung … time now to think in terms of the collective unconscious with such strife in the world
You may be MENSA … but Elite IQ’s do not mean you will lead a privileged life
Jeffrey Sachs … a brave orator. The United States has started a Brazen and Unprovoked War
Futurism: AI Workers, and Even CEOs, Suddenly Turning Against the Trump Administration
“If any tech company caves to the Pentagon’s demands, War Secretary Pete Hegseth will have won the ability to surveil our communities… en masse.”
By Joe Wilkins
Published Feb 27, 2026 4:01 PM EST

The Trump administration has a new rival in its ongoing feud with AI company Anthropic: Silicon Valley’s rank-and-file.
As newly reported by Bloomberg, a coalition of labor groups representing over 700,000 workers from Amazon, Google, Microsoft, and OpenAI has formally asked their companies to join Anthropic in its refusal to comply with recent demands from the Pentagon.
“We are speaking out today because the Pentagon is demanding that Anthropic abandon two major safety guardrails for Claude, which is the only frontier AI model currently deployed in classified Department of War operations,” reads the letter. “We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon.”
This week, the Pentagon issued an ultimatum to Anthropic to drop two key guardrails regarding the use of its AI system, Claude: one barring “mass domestic surveillance,” and another prohibiting the Pentagon from using its tech to build AI-powered weapons that can kill without a human operator.
The Pentagon had previously agreed to uphold both guardrails when it entered a contract worth up to $200 million to license Claude for classified use in July of 2025. But following a series of back-and-forth meetings, including discussion of using the company’s AI in a nuclear strike scenario, the Pentagon ordered Anthropic to allow unfettered access to Claude or face its wrath.
“How the Pentagon reacts remains to be seen, but we know they will rapidly seek to onboard other models without these guardrails in place, regardless of whether they try to force Anthropic to comply,” the workers’ letter warns.
“If any tech company caves to the Pentagon’s demands, War Secretary Pete Hegseth will have won the ability to surveil our communities — here and abroad — en masse, at an unprecedented level,” it continues. “He will have the power to build and deploy AI-powered drones that kill people without the approval of any human.”
In the face of mounting pressure from arguably the most powerful military entity in the world, Anthropic CEO Dario Amodei held firm. In a statement published on Anthropic’s website, Amodei described the Pentagon’s increasingly desperate stance: “they have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’… and to invoke the Defense Production Act to force the safeguards’ removal.”
“These latter two threats are inherently contradictory,” Amodei continued: “one labels us a security risk; the other labels Claude as essential to national security.”
Either way, he says Anthropic “cannot in good conscience accede to their request,” the deadline for which is 5:01pm on Friday, February 27th.
The legion of tech workers and Amodei gained a particularly strange bedfellow: OpenAI CEO Sam Altman, who has become something of a nemesis to Amodei as the rivalry between the two companies has heated up. But swallowing his pride — or perhaps sensing a PR opportunity — Altman sent a memo to staff on Thursday essentially siding with Anthropic against Hegseth and the Pentagon.
“[R]egardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance,” he wrote. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
Zooming out, the rift draws attention to the growing contradiction between Anthropic’s dedication to ethics and its contract with the Department of Defense. While the $200 million contract is financially immaterial to Anthropic, a company valued at $380 billion, allowing the Pentagon unlimited access to Claude could come at substantial reputational and legal risk, especially as the United Nations has begun efforts to ban lethal autonomous weapons on a global scale.
In a sense, the Pentagon seems to rely on Anthropic more than the other way around. According to Defense One, it would take the Trump administration three months or more to replace Claude. Still, there’s no accounting for the ever-unpredictable Trump administration. Case in point: as the deadline loomed, Trump himself weighed in with a vitriol-laden post.
“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” he fumed on Truth Social. “Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”
“Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” he continued. “We don’t need it, we don’t want it, and will not do business with them again!”
However, Trump conceded, the government will need a six-month period to phase out Anthropic products.
More on Anthropic: Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious
Tagged ai, anthropic, artificial-intelligence, chatgpt, technology
The Conversation
Academic rigour, journalistic flair
Author
- Amy Hume Lecturer In Theatre (Voice), Victorian College of the Arts, The University of Melbourne
Disclosure statement
Amy Hume does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
Few actors are imitated as often as Michael Caine. Even Michael Caine has imitated Michael Caine.
His voice has been used in birthday card greetings and been the source of jokes in various comedy sketches. It is synonymous with a certain type of Britishness.
Last week, artificial intelligence company ElevenLabs announced Caine has licensed his voice to the company. It will be available on their ElevenReader app, which allows you to listen to any text in a voice of your choosing, as well as being available on their licensing platform, Iconic Marketplace.
To understand why Caine’s voice is so iconic (and wanted by AI) we need to look deeper at what people actually hear in it.
Why do people love listening to Michael Caine?
Caine was born in London in 1933. His mother was a cook and a cleaner, and his father worked in a fish market. Caine speaks with a Cockney accent, setting him apart from most other actors of his generation.
Cockney hails from London’s East End and is often associated with London’s working class – think Eliza Doolittle from My Fair Lady, the Artful Dodger from Oliver!, or Bert the Chimney Sweep from Mary Poppins (although Dick van Dyke’s accent is not the most accurate, it’s still recognisably Cockney).
Traditionally, you were said to be a true Cockney if you were born within earshot of the Bow Bells – the bells of St. Mary-le-Bow church on Cheapside.
That distinctiveness matters because the accent carried heavy class meaning in mid-20th century Britain.
We don’t hear many contemporary examples of Cockney. Accents change and evolve over time, and Cockney has gradually been replaced by a new dialect called Multicultural London English (MLE).
While most actors of his age acquired a “stage accent” – known as Received Pronunciation (RP) – Caine made a conscious decision to hold onto his working-class roots and not change his accent. Instead, he built his career on it.
As Caine put it: “I could’ve gone to voice lessons, but I always thought if I had any use […] I could fight the class system in England.”
His accent became cultural capital and helped him land roles in Alfie (1966), The Italian Job (1969) and Get Carter (1971). By the 1970s, he was a British cultural icon.
What do we hear when we hear celebrity voices?
Hearing a person’s voice is never just about acoustics. We hear social meaning: culture, identity, character and story.
Sociolinguist Asif Agha coined the term “enregisterment” to describe how a way of speaking becomes publicly recognised as signalling particular social types and values.
Over time, Caine’s voice has become enregistered as a recognisable Cockney accent associated with East London and historically linked to a working-class identity. Hearing his voice activates a socially shared register of meanings attached to Cockney.
This contrasts with, say, Queen Elizabeth II, whose accent was enregistered with royalty, prestige and wealth.
Another useful concept here is what sociolinguists sometimes call “dialectal memes”: the images and character types that circulate around particular accents. These memes are transmitted through books, television, film, and even celebrity figures themselves.
Caine has been a carrier of Cockney dialectal memes in popular culture.
When you look at it this way, AI voice licensing commodifies not just the acoustic properties of Caine’s voice, but the enregistered social meanings audiences recognise in it.
What AI licensing means for Caine
ElevenLabs describes its Iconic Marketplace platform as “the performer-first approach the entertainment industry has been calling for”. Through licensing, actors maintain ownership of their voices in a digital, AI landscape.
Caine licensing his voice theoretically ensures he receives credit and compensation, and prevents unauthorised clones appearing elsewhere.
It is possible this is exactly the direction actors want AI to go in – for use of their voice to be controlled by themselves, with clear credit and payment.
However, this model is not without risk to the actor or the listener. We should ask: do we need to hear something in Caine’s voice? Will we process information differently or hear it with more authority if it’s delivered in the voice of a cultural icon like Caine?
Giving power over to machines
People who admire Caine may want him to read to them. Some will be willing to pay for it. We need to remain conscious of the decisions we are making here.
In the 1960s, computer scientist Joseph Weizenbaum, creator of the world’s first chatbot, Eliza, warned about the dangers of forming relationships with machines. He was alarmed to see users confiding in Eliza and responding to the chatbot as if it actually understood them, even when they knew it did not.
What happens if an AI voice is not actually generic, but recognisably tied to a real human?
An actor’s likeness and voice may be protected with licensing, but their human self is not. That creates a pathway to attachment or even infatuation.
Caine is not just licensing his voice, but also the Cockney persona audiences recognise in it. Suddenly, a machine speaks with the authority of a real human behind it.
Anthropic defies Pentagon over AI guardrails
Amid Pentagon pressure to loosen its safeguards, Anthropic continues to stand firm.

In a statement on Thursday afternoon, Anthropic CEO Dario Amodei made it clear that the company cannot accede to the Department of War’s demand to roll back the safeguards that prevent its AI models from being used in two key areas: mass surveillance of U.S. citizens and fully autonomous weapons.

Amodei noted that AI’s use in mass surveillance poses “serious, novel risks to our fundamental liberties.” And while the tech may someday be helpful in fully autonomous weaponry, the guardrails simply don’t exist today to deploy it safely.

“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said in his statement. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

Amodei said that Anthropic’s Claude models are widely deployed throughout the defense and intelligence community, including in the government’s classified networks, in national laboratories, and in mission-critical applications such as intelligence analysis, modeling and simulation, operational planning, and cybersecurity operations. Thus far, its safeguards haven’t presented an issue in these cases, he said.

Though Anthropic’s “strong preference” is to continue to support the military, it will only do so with its safeguards in place. Otherwise, it cannot “in good conscience” submit to the Pentagon’s requests and continue the relationship.

Amodei’s response is the latest move in the fight between the company and the Pentagon. Earlier this week, the agency took its first steps toward blacklisting Anthropic by labelling it a “supply chain risk,” a label generally reserved for companies from adversarial countries. The unprecedented move would not only threaten Anthropic’s contract with the military but also force all defense vendors to cut ties with the company. And after his meeting with Amodei, Secretary of War Pete Hegseth contradicted himself by threatening to invoke the Defense Production Act, forcing Anthropic to tailor its models to military desires regardless. Additionally, the Pentagon struck a deal with xAI on Monday to use its Grok models in classified systems, including weapons development and battlefield operations.

Policymakers, however, have started to warn that the sparring match between Anthropic and the Pentagon will only sour future relationships between the government and Silicon Valley AI firms, with Dean Ball, former AI adviser to the Trump administration, calling Hegseth’s contradictory threats “incoherent.”

Standing firm against the Pentagon’s threats was Anthropic’s only option, given that the company has built its reputation around AI safety and deploying AI only under guidelines that ensure it does no harm. Though the company is testing its moral and ethical standards with recent changes to its Responsible Scaling Policy, backing down would have been a sharp about-face, betraying its core principles. And though the fallout could cost Anthropic a large chunk of its revenue from government agencies and vendors, there may be a silver lining: gaining further trust with its primary audience of risk-averse but AI-hungry enterprises.
Tagged ai, artificial-intelligence, chatgpt, openai, technology
