Time to explore the 1% of people who reach a level of awareness. Carl Jung … time now to think in terms of the collective unconscious with such strife in the world

Posted in Uncategorized | Leave a comment

You may be MENSA … but elite IQs do not mean you will lead a privileged life

Posted in Uncategorized | Leave a comment

The mystery we call the brain … in every human being

Posted in Uncategorized | Leave a comment

Jeffrey Sachs … a brave orator. The United States has started a Brazen and Unprovoked War

Posted in Uncategorized | Leave a comment

Futurism: AI Workers, and Even CEOs, Suddenly Turning Against the Trump Administration


“If any tech company caves to the Pentagon’s demands, War Secretary Pete Hegseth will have won the ability to surveil our communities… en masse.”

By Joe Wilkins

Published Feb 27, 2026 4:01 PM EST

Illustration by Tag Hartman-Simkins / Futurism. Source: Kenny Holston / Pool / Getty Images


The Trump administration has a new rival in its ongoing feud with AI company Anthropic: Silicon Valley’s rank-and-file.

Newly reported by Bloomberg, a coalition of labor groups representing over 700,000 workers from Amazon, Google, Microsoft, and OpenAI has formally asked those companies to join Anthropic in its refusal to comply with recent demands from the Pentagon.

“We are speaking out today because the Pentagon is demanding that Anthropic abandon two major safety guardrails for Claude, which is the only frontier AI model currently deployed in classified Department of War operations,” reads the letter. “We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon.”

This week, the Pentagon issued an ultimatum to Anthropic to drop two key guardrails regarding the use of its AI system, Claude: one barring “mass domestic surveillance,” and another prohibiting the Pentagon from using its tech to build AI-powered weapons that can kill without a human operator.

The Pentagon had previously agreed to uphold both guardrails when it entered a contract worth up to $200 million to license Claude for classified use in July of 2025. But following a series of back-and-forth meetings, including discussion of using the company’s AI in a nuclear strike scenario, the Pentagon ordered Anthropic to allow unfettered access to Claude or face its wrath.

“How the Pentagon reacts remains to be seen, but we know they will rapidly seek to onboard other models without these guardrails in place, regardless of whether they try to force Anthropic to comply,” the workers’ letter warns.

“If any tech company caves to the Pentagon’s demands, War Secretary Pete Hegseth will have won the ability to surveil our communities — here and abroad — en masse, at an unprecedented level,” it continues. “He will have the power to build and deploy AI-powered drones that kill people without the approval of any human.”

In the face of mounting pressure from arguably the most powerful military entity in the world, Anthropic CEO Dario Amodei has refused to back down. In a statement published on Anthropic’s website, Amodei described the Pentagon’s increasingly desperate stance: “they have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’… and to invoke the Defense Production Act to force the safeguards’ removal.”

“These latter two threats are inherently contradictory,” Amodei continued: “one labels us a security risk; the other labels Claude as essential to national security.”

Either way, he says Anthropic “cannot in good conscience accede to their request,” the deadline for which is 5:01pm on Friday, February 27th.

The legion of tech workers and Amodei have gained a particularly strange bedfellow: OpenAI CEO Sam Altman, who has become something of a nemesis to Amodei as the rivalry between the two companies has heated up. But swallowing his pride — or perhaps sensing a PR opportunity — Altman sent a memo to staff on Thursday essentially siding with Anthropic against Hegseth and the Pentagon.

“[R]egardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance,” he wrote. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

Zooming out, the rift draws attention to the growing contradiction between Anthropic’s dedication to ethics and its contract with the Department of Defense. While the $200 million contract is financially immaterial to the $380 billion Anthropic, allowing the Pentagon unlimited access to Claude could come at substantial reputational and legal risk, especially as the United Nations has begun efforts to ban lethal autonomous weapons on a global scale.

In a sense, the Pentagon seems to rely on Anthropic more than the other way around. According to Defense One, it would take the Trump administration three months or more to replace Claude. Still, there’s no accounting for the ever-unpredictable Trump administration. Case in point: as the deadline loomed, Trump himself weighed in with a vitriol-laden post.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution,” he fumed on Truth Social. “Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”

“Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” he continued. “We don’t need it, we don’t want it, and will not do business with them again!”

However, Trump conceded, the government will need a six-month period to phase out Anthropic products.

More on Anthropic: Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious

Joe Wilkins

Correspondent

I’m a tech and transit correspondent for Futurism, where my beat includes transportation, infrastructure, and the role of emerging technologies in governance, surveillance, and labor.

Posted in Uncategorized | Leave a comment

The Conversation



Author

Amy Hume, Lecturer in Theatre (Voice), Victorian College of the Arts, The University of Melbourne

Disclosure statement

Amy Hume does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Partners

University of Melbourne provides funding as a founding partner of The Conversation AU.



Few actors are imitated as often as Michael Caine. Even Michael Caine has imitated Michael Caine.

His voice has been used in birthday card greetings and been the source of jokes in various comedy sketches. It is synonymous with a certain type of Britishness.

Last week, artificial intelligence company ElevenLabs announced Caine has licensed his voice to the company. It will be available on their ElevenReader app, which allows you to listen to any text in a voice of your choosing, as well as being available on their licensing platform, Iconic Marketplace.

To understand why Caine’s voice is so iconic (and wanted by AI) we need to look deeper at what people actually hear in it.

Why do people love listening to Michael Caine?

Caine was born in London in 1933. His mother was a cook and a cleaner, and his father worked in a fish market. Caine speaks with a Cockney accent, setting him apart from most other actors of his generation.

Cockney hails from London’s East End and is often associated with London’s working class – think Eliza Doolittle from My Fair Lady, the Artful Dodger from Oliver!, or Bert the chimney sweep from Mary Poppins (although Dick Van Dyke’s accent is not the most accurate, it’s still recognisably Cockney).


Traditionally, you were said to be a true Cockney if you were born within earshot of the Bow Bells – the bells of St. Mary-le-Bow church on Cheapside.

That distinctiveness matters because the accent carried heavy class meaning in mid-20th century Britain.

We don’t hear many contemporary examples of Cockney. Accents change and evolve over time, and Cockney has gradually been replaced by a new dialect called Multicultural London English (MLE).

While most actors of his age acquired a “stage accent” – known as Received Pronunciation (RP) – Caine made a conscious decision to hold onto his working-class roots and not change his accent. Instead, he built his career on it.

He once said,

I could’ve gone to voice lessons, but I always thought if I had any use […] I could fight the class system in England.

His accent became cultural capital and helped him land roles in Alfie (1966), The Italian Job (1969) and Get Carter (1971). By the 1970s, he was a British cultural icon.

What do we hear when we hear celebrity voices?

Hearing a person’s voice is never just about acoustics. We hear social meaning: culture, identity, character and story.

Sociolinguist Asif Agha coined the term “enregisterment” to describe how a way of speaking becomes publicly recognised as signalling particular social types and values.

Over time, Caine’s voice has become enregistered as a recognisable Cockney accent associated with East London and historically linked to a working-class identity. Hearing his voice activates a socially shared register of meanings attached to Cockney.

This contrasts with, say, Queen Elizabeth II, whose accent was enregistered with royalty, prestige and wealth.

Another useful concept here is what sociolinguists sometimes call “dialectal memes”: the images and character types that circulate around particular accents. These memes are transmitted through books, television, film, and even celebrity figures themselves.

Caine has been a carrier of Cockney dialectal memes in popular culture.

When you look at it this way, AI voice licensing commodifies not just the acoustic properties of Caine’s voice, but the enregistered social meanings audiences recognise in it.

What AI licensing means for Caine

ElevenLabs describes its Iconic Marketplace platform as “the performer-first approach the entertainment industry has been calling for”. Through licensing, actors maintain ownership of their voices in a digital, AI landscape.

Caine licensing his voice theoretically ensures he receives credit and compensation, and prevents unauthorised clones from appearing elsewhere.

It is possible this is exactly the direction actors want AI to go in – for use of their voice to be controlled by themselves, with clear credit and payment.

However, this model is not without risk to the actor or the listener. We should ask: do we need to hear something in Caine’s voice? Will we process information differently or hear it with more authority if it’s delivered in the voice of a cultural icon like Caine?

Giving power over to machines

People who admire Caine may want him to read to them. Some will be willing to pay for it. We need to remain conscious of the decisions we are making here.

In the 1960s, computer scientist Joseph Weizenbaum, creator of the world’s first chatbot, Eliza, warned about the dangers of forming relationships with machines. He was alarmed to see users confiding in Eliza and responding to the chatbot as if it actually understood them, even when they knew it did not.

What happens if an AI voice is not actually generic, but recognisably tied to a real human?

An actor’s likeness and voice may be protected with licensing, but their human self is not. That creates a pathway to attachment or even infatuation.

Caine is not just licensing his voice, but also the Cockney persona audiences recognise in it. Suddenly, a machine speaks with the authority of a real human behind it.

Posted in Uncategorized | Leave a comment

Anthropic defies Pentagon over AI guardrails

Amid pressure from the Pentagon to give in to its demands to loosen its safeguards, Anthropic continues to stand firm.

In a statement on Thursday afternoon, Anthropic CEO Dario Amodei made it clear that the company cannot accede to the Department of War’s demand to roll back the safeguards that prevent its AI models from being used in two key areas: mass surveillance of U.S. citizens and fully autonomous weapons.

Amodei noted that AI’s use in mass surveillance poses “serious, novel risks to our fundamental liberties.” And while the technology may someday be helpful in fully autonomous weaponry, the guardrails needed to deploy it safely simply don’t exist today.

“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said in his statement. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

Amodei said that Anthropic’s Claude models are widely deployed throughout the defense and intelligence community, including in the government’s classified networks, in national laboratories, and in mission-critical applications such as intelligence analysis, modeling and simulation, operational planning, and cybersecurity operations. Thus far, he said, the safeguards haven’t presented an issue in any of these cases.

Though Anthropic’s “strong preference” is to continue supporting the military, it will do so only with its safeguards in place. Otherwise, it cannot “in good conscience” submit to the Pentagon’s requests and continue the relationship.

Amodei’s response is the latest move in the fight between the company and the Pentagon. Earlier this week, the agency took its first step toward blacklisting Anthropic by labelling it a “supply chain risk,” a designation generally reserved for companies from adversarial countries.

That unprecedented move would not only threaten Anthropic’s contract with the military but also force all defense vendors to cut ties with the company. And after his meeting with Amodei, Secretary of War Pete Hegseth contradicted himself by threatening to invoke the Defense Production Act, forcing Anthropic to tailor its models to military desires regardless. Additionally, the Pentagon struck a deal with xAI on Monday to use its Grok models in classified systems, including weapons development and battlefield operations.

Policymakers, however, have started to warn that the sparring match between Anthropic and the Pentagon will only sour future relationships between the government and Silicon Valley AI firms, with Dean Ball, a former AI adviser to the Trump administration, calling Hegseth’s contradictory threats “incoherent.”

Standing firm against the Pentagon’s threats was Anthropic’s only real option, given that the company has built its reputation on AI safety and on deploying AI only under guidelines that ensure it does no harm. Though the company has tested its own moral and ethical standards with recent changes to its Responsible Scaling Policy, backing down would have been a sharp about-face, betraying its core principles. And though the fallout could cost Anthropic a large chunk of its revenue from government agencies and vendors, there may be a silver lining: gaining further trust with its primary audience of risk-averse but AI-hungry enterprises.
Posted in Uncategorized | Leave a comment

10,000 drones released for Chinese New Year

Posted in Uncategorized | Leave a comment

Chay Bowes on X: under the Biden administration, the horrors of people in Afghanistan clinging to the wings of planes and falling to their deaths. US$8 billion thrown aside, but now it is back in use by the Taliban in their fight with Pakistan

Posted in Uncategorized | Leave a comment

Iran News: Propaganda from their perspective!

Posted in Uncategorized | Leave a comment