@MarioNawfal: viral AI Lego video after Iran closed the Strait of Hormuz again

Posted in Uncategorized | Leave a comment

Epstein: grooming billionaires for business, using the same techniques used to groom young people for sex. This practice is often referred to as networking, and its practitioners as convenors, connectors, and other names.

Mar 17, 2026 | The Daily Beast Podcast

Anand Giridharadas joins Joanna Coles to unpack what the Epstein files actually tell us, not just about one disgraced financier, but about the elite network that worked with him. The bestselling author explains why so many of what he calls the Epstein class stayed in his orbit even after Epstein’s crimes were widely known. Coles and Giridharadas dig into the strange rituals of this rarefied class and examine emails involving figures like Larry Summers and former Obama White House counsel Kathy Ruemmler. They also confront the darker question at the center of the scandal: How a network built on access, status, and mutual advantage created a culture where no one ever seemed to break ranks—even when they knew the crimes Jeffrey Epstein committed.

00:00:00 – Jeffrey Epstein’s “Elite Connector” Tactics & The Epstein Class
00:13:08 – Why Billionaires Have 4 Homes: The “Four Pack” & Tax Avoidance
00:28:43 – Kathy Ruemmler’s Emails: Contempt for the “Rest Stop” Class
00:41:09 – Grooming the Powerful: The Promise of “Interesting” and “Fun”
01:00:51 – How Institutions Failed: The Absence of Courage in the Elite
01:12:00 – “Never With Women”: Why Epstein Excluded Women from Conversations
01:28:09 – Jordan Neely & Daniel Penny: A Portrait of an Age of Division

Posted in Uncategorized | Leave a comment

How Trump’s Failing Health Reveals His True Self: The Daily Beast interview with a family member, his niece Mary Trump

Apr 15, 2026. Mary Trump sits down with Joanna Coles to talk about her Uncle Donald, explaining the history of dementia that runs in the family and how the president’s cognitive decline is exposing who he has been all along.

Posted in Uncategorized | Leave a comment

Nick Delanty: Migrants in Ireland … now and the future

Posted in Uncategorized | Leave a comment

Matteo Salvini, the Italian Deputy Prime Minister, has again called for a return to Russian gas purchases to prevent an “energy lockdown”: “Instead of closing schools and factories, let’s return to purchasing gas from all over the world, including Russia, because we are not at war with Russia.”

Chay Bowes

@BowesChay


Posted in Uncategorized | Leave a comment

@MarioNawfal: 500 million barrels of oil: gone. $50 billion wiped out in just 50 days since the Iran war kicked off. That’s enough fuel to run the entire global shipping industry for 4 months. Largest energy shock in modern history. Never seen anything like this. Source:


@KobeissiLetter

Posted in Uncategorized | Leave a comment

Clash Report: Iran’s Ghalibaf issues a profound warning

Posted in Uncategorized | Leave a comment

@rkmtimes: BlackRock warns that damage to submarine cables, which carry roughly $10 trillion per day in financial data, could disrupt some $150 trillion in annual bank payments, SWIFT messages, securities trades, foreign exchange, credit card authorizations, and other cross-border financial flows.


Iran says it will cut the global internet and financial undersea Tier-1 cables, which could deal a major setback to the West and the world.

Posted in Uncategorized | Leave a comment

Futurism: Millions of Americans Are Talking to AI Instead of Going to the Doctor, and It’s Giving Them Horrendously Flawed Medical Advice


What could possibly go wrong?

By Victor Tangermann

Published Apr 17, 2026 4:06 PM EDT


An illustration shows a robot hand reaching out to touch a stethoscope.


While Google’s AI may no longer recommend eating rocks or confidently tell users to put glue on their pizza, even cutting-edge AI chatbots remain staggeringly incompetent at dispensing medical advice.

In a new study published this week in the journal JAMA Network Open, researchers asked 21 frontier large language models (LLMs) to “play doctor” when confronted with realistic symptoms that an actual patient could feasibly ask about.

The results painted a damning picture. The AIs’ failure rates exceeded 80 percent when given ambiguous symptoms that could match more than one condition, and for more straightforward cases that included physical exam findings and lab results, they still failed 40 percent of the time. The researchers also found that, unlike human clinicians, the “LLMs collapse prematurely onto single answers,” resulting in “weak performance” across all models.

“Despite continued improvements, off-the-shelf large language models are not ready for unsupervised clinical-grade deployment,” said corresponding author and Massachusetts General Hospital associate chair of innovation and commercialization Marc Succi in a statement. “Differential diagnoses are central to clinical reasoning and underlie the ‘art of medicine’ that AI cannot currently replicate,” he added.

Translated into the real world, an AI that leaps to conclusions when not presented with the full picture could have devastating consequences. If a person were to ask a chatbot about a rash or a sudden-onset cough, say, they might be given misleading information and potentially dangerous advice.

The results highlight the considerable risks of relying on AI for life-or-death health advice, a worrying trend that’s already playing out across the country. A recent survey by the West Health-Gallup Center on Healthcare in America found that one in four American adults — the equivalent of 66 million people — is already asking ChatGPT and other chatbots like it for medical advice.

Respondents often said they were seeking information both before and after seeing a healthcare professional. But in many cases, they forgo real-world medical assistance entirely after talking to a chatbot: among those who asked AI for health advice, 14 percent — the equivalent of over nine million Americans — said they never saw a provider they otherwise would have seen if it weren’t for the tech.

According to the survey, 27 percent cited not wanting to pay for a doctor’s visit as a reason for consulting AI, while 14 percent said they were unable to pay for one. Some participants said they didn’t have the time or ability to visit a doctor.

“Artificial intelligence is already reshaping how Americans seek health information, make decisions and engage with providers, and health systems must keep pace,” said West Health Policy Center president Tim Lash in a statement.

Taken together, the two studies paint a damning picture of the current healthcare landscape in the US. Not only are millions of Americans heavily relying on AI tools, they’re frequently being presented with flawed advice by hallucinating LLMs — and choosing not to seek help from far more knowledgeable professionals.

AI tools have already caught plenty of flak from experts for doling out bad medical advice, from Google’s AI Overviews giving dangerously inaccurate or out-of-context information to transcription tools used by doctors inventing nonexistent medications.

Even when the information it gives is wrong, AI leaves patients with a sense of certainty. Almost half of respondents in the latest survey said that talking to a chatbot about medical problems made them feel more confident when talking to a provider, 22 percent said it helped them identify issues earlier, and 19 percent said it allowed them to avoid unnecessary tests or procedures.

At the same time, many Americans remain highly skeptical of AI’s medical advice. Roughly a third of participants who said they consulted AI for health issues said they distrusted the tool. One in ten respondents said the AI gave them potentially unsafe advice.

One thing’s for sure: the AI industry is in dire need of regulatory oversight.

More on AI and medical advice: Frontier AI Models Are Doing Something Absolutely Bizarre When Asked to Diagnose Medical X-Rays

Victor Tangermann

Senior Editor

I’m a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.

Posted in Uncategorized | Leave a comment

AI: Mother Reportedly Doesn’t Know Her Son Died Because She’s Been Talking to an AI Version of Him


Her family “hoped to conceal the news” of her only child’s death.

By Victor Tangermann

Published Apr 18, 2026 9:45 AM EDT




For years now, a thriving cottage industry in China has been offering families the opportunity to speak to an AI clone of their deceased loved ones — for a monthly fee, of course.

But what if the surviving member of the family isn’t clued in to the fact that they’re speaking with an AI representation, rather than their actual loved one?

According to the South China Morning Post, which quotes reporting from a Chinese news outlet called Litchi News, the octogenarian mother of a man who was killed in a road accident has been unknowingly talking to his AI clone via regular video calls, thinking it’s actually him.

The woman is reportedly suffering from heart disease. Her family, who resides in Shandong province, “hoped to conceal the news” of her only child’s death, per the SCMP.

It’s a tragic story of grief in the age of AI, when the tech is making inroads toward replicating the appearance and voice of the dead. At the same time, the ruse raises thorny ethical questions when it comes to eldercare. Should we really intentionally deceive seniors to protect them from their own emotions?

The woman’s grandson reportedly reached out to an AI tech businessman, providing him with pictures, videos, and audio recordings of his recently deceased father. For his part, the AI businessman joked to Litchi News that he’s in the business of “deceiving people’s emotions,” while also saying that “what we do is to comfort the living.”

The AI clone informed the woman that her son had moved and was unable to meet her in the flesh.

“You should call me more often so that I know whether you live well or not in another city,” the unsuspecting mother told the AI, as quoted by the SCMP. “I am missing you so much. I feel so sorry that I cannot see you in person.”

“OK, mum,” the AI replied. “But I am too busy. I cannot talk to you for a long time. You take care of yourself. When I have made enough money, I will return home to pay my filial piety to you.”

Though the story is perfectly believable, forgive us for harboring some degree of skepticism. While Litchi News does appear to exist — it seems to be owned by the Jiangsu Broadcasting Corporation, China’s third-largest TV network — we were unable to locate the original story or independently verify the businessman’s claims.

Regardless, netizens were appalled by the story, arguing that the woman’s family had gone too far.

“This is one of the worst likely uses of AI,” one Reddit user commented.

“This is going to harm this woman more than the truth,” another user added.

More on the deceased and AI: People Are Selling AI Clones of Dead Relatives for Just $150


Posted in Uncategorized | Leave a comment