Anand Giridharadas joins Joanna Coles to unpack what the Epstein files actually tell us, not just about one disgraced financier, but about the elite network that worked with him. The bestselling author explains why so many of what he calls the Epstein class stayed in his orbit even after Epstein's crimes were widely known. Coles and Giridharadas dig into the strange rituals of this rarefied class and examine emails involving figures like Larry Summers and former Obama White House counsel Kathy Ruemmler. They also confront the darker question at the center of the scandal: how a network built on access, status, and mutual advantage created a culture where no one ever seemed to break ranks, even when they knew the crimes Jeffrey Epstein had committed.
00:00:00 – Jeffrey Epstein's "Elite Connector" Tactics & The Epstein Class
00:13:08 – Why Billionaires Have 4 Homes: The "Four Pack" & Tax Avoidance
00:28:43 – Kathy Ruemmler's Emails: Contempt for the "Rest Stop" Class
00:41:09 – Grooming the Powerful: The Promise of "Interesting" and "Fun"
01:00:51 – How Institutions Failed: The Absence of Courage in the Elite
01:12:00 – "Never With Women": Why Epstein Excluded Women from Conversations
01:28:09 – Jordan Neely & Daniel Penny: A Portrait of an Age of Division
#epsteinfiles #podcast #news
Apr 15, 2026 – Mary Trump sits down with Joanna Coles to talk about her Uncle Donald, explaining the history of dementia that runs in the family and how the president's cognitive decline is exposing who he has been all along.
Matteo Salvini, the Italian Deputy Prime Minister, has again called for a return to Russian gas purchases to prevent an "energy lockdown": "Instead of closing schools and factories, let's return to purchasing gas from all over the world, including Russia, because we are not at war with Russia."
500 million barrels of oil: gone. $50 billion wiped out in just 50 days since the Iran war kicked off. That’s enough fuel to run the entire global shipping industry for 4 months. Largest energy shock in modern history. Never seen anything like this. Source:
We are not militarily stronger than America. It is clear that they have more money, equipment, and resources, and because they have carried out so many aggressions around the world, they also have more experience than us. The Zionist regime, which is America's… pic.twitter.com/kbYbxNxEU2
BlackRock predicts that damage to the submarine cables carrying roughly $10 trillion per day in global financial data could disrupt $150 trillion in annual bank payments, SWIFT messages, securities trades, foreign exchange, credit card authorizations, and other cross-border financial flows.
Iran says it will cut the global internet, financial cables, and undersea Tier-1 cables, a move that could deal a major setback to the West and the world.
While Google's AI may no longer recommend eating rocks or confidently tell users to put glue on their pizza, even cutting-edge AI chatbots remain staggeringly incompetent at dispensing medical advice.
In a new study published this week in the journal JAMA Network Open, researchers asked 21 frontier large language models (LLMs) to "play doctor" when confronted with realistic symptoms that an actual patient could feasibly ask about.
The results painted a damning picture. The AIs' failure rates exceeded 80 percent when they were given ambiguous symptoms that could match more than one condition, and even for more straightforward cases that included physical exam findings and lab results, they still failed 40 percent of the time. The researchers also found that, unlike human clinicians, the "LLMs collapse prematurely onto single answers," resulting in "weak performance" across all models.
"Despite continued improvements, off-the-shelf large language models are not ready for unsupervised clinical-grade deployment," said corresponding author and Massachusetts General Hospital associate chair of innovation and commercialization Marc Succi in a statement. "Differential diagnoses are central to clinical reasoning and underlie the 'art of medicine' that AI cannot currently replicate," he added.
Translated into the real world, an AI that leaps to conclusions when not presented with the full picture could have devastating consequences. If a person were to ask a chatbot about a rash or a sudden-onset cough, they might be presented with misleading information and potentially dangerous advice.
The results highlight the considerable risks of relying on AI for life-or-death health advice, a worrying trend that's already playing out across the country. A recent survey by the West Health-Gallup Center on Healthcare in America found that one in four American adults, the equivalent of 66 million people, is already asking ChatGPT and other chatbots like it for medical advice.
Respondents often said they were seeking information both before and after seeing a healthcare professional. In many cases, they're forgoing real-world medical care entirely after talking to a chatbot. Among those who asked AI for health advice, 14 percent, the equivalent of over nine million Americans, said they never saw a provider they otherwise would have seen if it weren't for the tech.
According to the survey, 27 percent cited not wanting to pay for a doctor's visit as a reason for consulting AI, while 14 percent said they were unable to pay for one. Some participants said they lacked the time or ability to visit a doctor.
"Artificial intelligence is already reshaping how Americans seek health information, make decisions and engage with providers, and health systems must keep pace," said West Health Policy Center president Tim Lash in a statement.
Taken together, the two studies paint a damning picture of the current healthcare landscape in the US. Not only are millions of Americans relying heavily on AI tools, they're frequently being served flawed advice by hallucinating LLMs, and choosing not to seek help from far more knowledgeable professionals.
Even when the information it gives is wrong, AI leaves patients with a sense of certainty. Almost half of respondents in the latest survey said that talking to a chatbot about medical problems made them feel more confident when talking to a provider, 22 percent said it helped them identify issues earlier, and 19 percent said it allowed them to avoid unnecessary tests or procedures.
At the same time, many Americans remain highly skeptical of AI's medical advice. Roughly a third of participants who consulted AI for health issues said they distrusted the tool, and one in ten respondents said the AI gave them potentially unsafe advice.
I'm a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.
For years now, a thriving cottage industry in China has been offering families the opportunity to speak to an AI clone of their deceased loved ones, for a monthly fee, of course.
But what if the surviving family member isn't clued in to the fact that they're speaking with an AI representation rather than their actual loved one?
According to the South China Morning Post, which quotes reporting from a Chinese news outlet called Litchi News, the octogenarian mother of a man who was killed in a road accident has been unknowingly talking to his AI clone via regular video calls, thinking it's actually him.
The woman is reportedly suffering from heart disease. Her family, who resides in Shandong province, "hoped to conceal the news" of her only child's death, per the SCMP.
It's a tragic story of grief in the age of AI, a moment when the tech is making inroads toward replicating the appearance and voice of the dead. At the same time, the ruse raises thorny ethical questions about eldercare. Should we really intentionally deceive seniors to protect them from their own emotions?
The woman's grandson reportedly reached out to an AI tech businessman, providing him with pictures, videos, and audio recordings of his recently deceased father. For his part, the AI businessman joked to Litchi News that he's in the business of "deceiving people's emotions," adding that "what we do is to comfort the living."
The AI clone informed the woman that her son had moved and was unable to meet her in the flesh.
"You should call me more often so that I know whether you live well or not in another city," the unsuspecting mother told the AI, as quoted by the SCMP. "I am missing you so much. I feel so sorry that I cannot see you in person."
"OK, mum," the AI replied. "But I am too busy. I cannot talk to you for a long time. You take care of yourself. When I have made enough money, I will return home to pay my filial piety to you."
Though the story is perfectly believable, forgive us for harboring some degree of skepticism. While Litchi News does appear to exist (it seems to be owned by the Jiangsu Broadcasting Corporation, China's third largest TV network), we were unable to locate the original story or independently verify Zhang's claims.
Regardless, netizens were appalled by the story, arguing that the woman's family had gone too far.
"This is one of the worst likely uses of AI," one Reddit user commented.
"This is going to harm this woman more than the truth," another user added.