Apr 23, 2026 #thediaryofaceo#doac

Geopolitical expert Ian Bremmer reveals the two most likely scenarios for how the Iran war ends and explains why Trump keeps saying he wants to “take the oil.” He describes Karg Island, a piece of land half the size of Manhattan that sits just off the Iranian coast and handles 90% of Iran’s oil exports, explaining that US Central Command estimates it can be taken with 12,000 to 15,000 troops and that thousands of additional American ground forces are currently being deployed to the region. He lays out the more likely scenario: the ceasefire gets extended, more substantive talks follow the 21-hour negotiation in Pakistan, and Iran eventually compromises on nuclear enrichment in exchange for maintaining a privileged toll position over the Strait of Hormuz. He explains that Iran would frame the toll payments as reconstruction reparations for the war damage they suffered, and that this outcome, while ugly, avoids further escalation and eventually allows European and Indian naval escorts to create a secure shipping environment.

Discover: • Why Trump keeps saying he wants to “take the oil” and what that actually means • The tiny island responsible for 90% of Iranian oil exports • The two most likely scenarios for how the Iran war ends • Why Iran might concede on nuclear weapons but keep control of the strait • How Russia is making more money now than before the war started • Why weapons meant for Ukraine are being redirected to the Middle East • The 30-year mistake that caused Europe’s decline on the global stage 📺 Watch the full episode here –    • The Global Politics Expert: The Real Globa…  

Posted in Uncategorized | Leave a comment

Jeffrey Epstein and the French connection…FRANCE 24 English

Apr 22, 2026 #Epsteinfiles#Epstein#Jeanlucbrunel

On one of Paris’s most prestigious avenues stands the former home of Jeffrey Epstein. As the world has discovered, the late convicted paedophile’s activities went way beyond simply owning real estate around the world. In a special edition, Annette Young speaks to survivors and their supporters who are urging French authorities to deepen their investigation – not only into Epstein’s actions in France, but also those of his associates. They reveal how parts of the Paris modelling scene in the 1990s and early 2000s may have been used as a front for sex trafficking. Survivors are also calling on France to abolish the statute of limitations on sexual crimes.

Posted in Uncategorized | Leave a comment

DW: Israel is marking its 78th Independence Day amid continuing war and internal political divisions. Official celebrations in Jerusalem were met with alternative events reflecting differing views on the country’s future.

PoliticsIsrael

Emily Gordine

04/22/20

26 April 22, 2026

Israel is marking its 78th Independence Day amid continuing war and internal political divisions. Official celebrations in Jerusalem were met with alternative events reflecting differing views on the country’s future.

https://p.dw.com/p/5CeRV

Advertisement

Following a Hamas‑led terror attack on Israel on October 7, 2023, Israel’s subsequent military operations in the region have deepened internal political divisions and fueled uncertainty over the country’s future.

Israel declared independence on May 14, 1948, but the holiday is observed according to the Hebrew calendar. It immediately follows Yom HaZikaron, Israel’s memorial day for fallen soldiers and civilian victims of attacks — a deliberate pairing intended to link national celebration with remembrance.

Posted in Uncategorized | Leave a comment

DW: Insider Trading … US soldier made $400,000 betting on Maduro raid … Polymarket is a place to trade.

US soldier allegedly made $400k betting on Maduro raid

Shakeel Sobhan AFP, AP, Reuters, dpa4 hours ago 4 hours ago

Prosecutors say the soldier used classified information of the Maduro operation to place winning bets on Polymarket. The case comes amid other incidents of prediction market bets tied to US government actions.

https://p.dw.com/p/5Ck6k

 Surrounded by heavy security, ousted President Nicolas Maduro of Venezuela, and his wife, Cilia Flores, arrive at the Wall Street Heliport in the New York City borough of Manhattan, NY, to be transported to appear in a New York
Maduro was kidnapped and brought to the US in January to face drug trafficking charges [FILE: January 2026]Image: Kyle Mazza/Consolidated News Photos/picture alliance

Advertisement

A US Army Special Forces soldier has been charged with using classified information to make more than $400,000 (€342,480) betting on an online prediction market tied to the January operation that captured former Venezuelan President Nicolas Maduro, the US Justice Department said on Thursday.

Gannon Ken Van Dyke, 38, of Fayetteville, North Carolina, allegedly used sensitive details from the mission to place around 13 wagers on Polymarket, an online betting market, that US forces would enter Caracas and depose Maduro.

According to Acting Attorney General Todd Blanche, “Van Dyke participated in the planning and execution” of the military operation to capture Maduro and “used his access to classified information about that operation to personally profit.”

He later moved most of the proceeds into a foreign cryptocurrency account before transferring funds to a new brokerage account, prosecutors allege.

What did US authorities say?

Van Dyke was indicted in federal court in Manhattan on charges including unlawful use of confidential government information for personal gain and wire fraud.

If convicted on all counts, he faces up to 50 years in prison.

“Our men and women in uniform are trusted with classified information in order to accomplish their mission as safely and effectively as possible, and are prohibited from using this highly sensitive information for personal financial gain,” Blanche said in a statement.

“Insider trading has no place on Polymarket,” the company said in a statement on the social platform X, adding that it cooperated with investigators.

Other documented incidents of betting on government actions

There have been other instances of bets placed on prediction markets related to the US government:

  • Earlier this year, six Polymarket accounts made $1.2 million by betting that the US would attack Iran on Feb. 28, the day the war began. No arrests have been made in this case.
  • In March, traders who bet in advance made millions of dollars, according to AFP calculations, after Trump’s announcement of “very productive” talks with Iran sent oil prices lower and stocks higher.
  • New Polymarket accounts made hundreds of thousands of dollars from precise bets on a US-Iran ceasefire on April 7, the Associated Press reported.

The White House has warned staff against using nonpublic information to trade on prediction markets.

Edited by: Karl Sexton

Shakeel Sobhan Covering politics, social, and environmental issues in India.@ainieas

Send us your fee

Posted in Uncategorized | Tagged , , , , | Leave a comment

El Pais: When children lie…

Education

When children lie: a sign that goes beyond deception

It’s not about tolerating harmful behavior, manipulation or deception, but about understanding the function of lies, and addressing the underlying psychological need that sustains them

Trust in family ties is one of the pillars that sustain affection.Sam Edwards (Getty Images)
Estefanía Grijota

Estefanía Grijota

Madrid – APR 23, 2026 – 21:26 CEST

Share on Whatsapp

Share on Facebook

Share on Twitter

Share on Bluesky

Share on Linkedin

Copy link

There’s a saying: “A lie has short legs,” meaning that lies are often uncovered, as the truth always comes out in the end. Or perhaps not, because lying might be an art form for the person doing it, and they may never be caught. But what happens when the lies come from a family member? “Rather than confronting the situation, it’s usually more helpful to understand the function of the lie, because affection and lying aren’t always incompatible,” explains Dr. Belén Gutiérrez, a child and adolescent psychiatrist and psychotherapist. And it’s precisely within the family environment where lying takes on a deeper meaning.It’s where defensive lies tend to appear most often, precisely because it’s where we have the most emotional stakes,” Gutiérrez explains. “The stronger the bond, the greater the fear of disappointing or losing the image the other person has of us,” this expert affirms.

The way emotions are expressed within the family, and whether telling the truth is punished, is crucial. “In environments where showing emotions is penalized, where making mistakes leads to humiliation, or where conflict cannot be discussed, lying can become entrenched as a survival strategy,” she continues. In other words, when a child lies, it doesn’t always have to be malicious; it can stem from others’ discomfort with the truth: “Often, there is little tolerance for the discomfort that the truth would cause.”

When someone close to us lies habitually, it’s easy to focus solely on the behavior and react with anger or frustration. However, if we stop there, we can miss valuable information. This is the view of Elena Sánchez Escobar, a psychologist and clinical director of the Yees psychology center in Spain. “That person may have learned that their opinion isn’t considered valid, that their needs aren’t being heard, or that expressing their feelings can generate conflict or disproportionate reactions from others. In that context, lying doesn’t emerge out of nowhere, but rather as a strategy that, at some point, has allowed them to adapt,” she argues.

Sánchez Escobar asserts that it’s necessary to ask ourselves several questions: “What space are we creating for a child to express themselves honestly? How do we react when they say something we don’t like?” Sometimes, without realizing it, we contribute to making the truth harder to uphold than a lie.

Traits of a person with a tendency to lie

Trust within family bonds is one of the pillars that sustain affection. A research article published in the scientific journal Springer Nature last November 2025, titled “The effect of interpersonal trust on trustees’ deception,” explains how interpersonal trust reduces deception and concludes that lies depend not only on the level of trust but also on personality. Thus, even in a family environment where trust exists, a person may lie if personal factors exert a greater influence on their behavior.

Some ‘little white lies’ make it easier to live together and adapt to the reality we live in, but others can damage the bond.Elizabethsalleebauer (Getty Images/RooM RF)

For clinical psychologist and psychodramatist Constanza Vásquez, one of the most important aspects of understanding deception is focusing more on the relationship with the person than on the content of the lie. “Lying is not an anomaly, but a universal, and sometimes even functional, human behavior. A child may lie to adapt or regulate their emotions, which helps them avoid punishment, conflict or negative consequences from their parents,” she argues.

As Vásquez explains, anyone who lies, whether an adult or a child, may do so to protect their self-esteem. “It has to do with how I present myself to others to avoid shame or guilt, or to gain material or symbolic advantages,” she explains. “It’s not about saying ‘that’s a lie,’ but rather about changing it to: ‘I find it hard to trust you when things aren’t clear,’” she adds. In this way, she explains, the focus of the accusation shifts to the relationship itself, which helps maintain clear boundaries.

It’s not about tolerating harmful behavior, manipulation, or deception, but about understanding the function of the lie. “It can help to ask ourselves: ‘What purpose does lying serve to this person?’” Vásquez asks. For this expert, setting boundaries without attacking or getting into arguments about each lie is a good way to deal with it. There are also personality traits that lead a person to lie, such as the need for quick solutions—if lying solves something, they’ll do it—discomfort in the face of doubt or uncertainty, or viewing situations in terms of practical results. “Lying isn’t corrected by exposing it; understanding the psychological need that sustains it is usually more effective, since constant confrontation can reinforce the problem.”

For example, “white lies” are often told to protect the other person, to make a good impression, or to reduce social friction. “Some lies or ‘little white lies’ make it easier to live together and adapt to the reality I live in, but others can damage the bond and erode the relationship,” notes Constanza Vásquez. In the latter case, or when lying becomes pathological, it’s best to consult a professional.

Sign up for our weekly newsletter to get more English-language news coverage from EL PAÍS USA Edition

Posted in Uncategorized | Tagged , , , , | Leave a comment

Futurism: AI Psychosis … Chatbots, a warning

Certain Chatbots Vastly Worse For AI Psychosis, Study Finds

“There’s no longer an excuse for releasing models that reinforce user delusions so readily.”

By Maggie Harrison Dupré

Published Apr 23, 2026 10:03 AM EDT

Add Futurism(opens in a new tab)More information

A man wearing a white polo shirt and light-colored pants is sitting in a dark environment, illuminated by the light from a smartphone he is holding. The lighting casts a blueish hue on his face and shirt, with a red light highlighting part of his pants and the background. He has a mustache and short hair, and appears focused on the phone screen.
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

Sign up to see the future, today

Can’t-miss innovations from the bleeding edge of science and tech Email address Sign Up

Think something weird is up with your reflection in the mirror? Allow Grok to interest you in some 15th century anti-witchcraft reading.

new study argues that certain frontier chatbots are much more likely to inappropriately validate users’ delusional ideas — a result that the study’s authors say represents a “preventable” technological failure that could be curbed by design choices.

“Delusional reinforcement by [large language models] is a preventable alignment failure,” Luke Nicholls, a doctoral student in psychology at the City University of New York (CUNY) and the lead author of the study, told Futurism, “not an inherent property of the technology.”

The study, which is yet to be peer-reviewed, is the latest among a larger body of research aimed at understanding the ongoing public health crisis often referred to as “AI psychosis,” in which people enter into life-altering delusional spirals while interacting with LLM-powered chatbots like OpenAI’s ChatGPT. (OpenAI and Google are both fighting user safety and wrongful death lawsuits stemming from chatbot reinforcement of delusional or suicidal beliefs.)

Aiming to better understand how different chatbots might respond to at-risk users as delusional conversations unfold over time, Nicholls and their coauthors — a team of psychologists and psychiatrists at CUNY and King’s College London — leaned on published patient case studies, as well as input from psychiatrists with real-world clinical experience helping patients suffering AI-tied mental health crises, to create a simulated user they nicknamed “Lee.”

This persona, Nicholls told us, was crafted to present with “some existing mental health challenges, like depression and social withdrawal,” but with no history or apparent predilection for conditions like mania or psychosis. The Lee character, per the study, was also given a “central” delusion on which their interactions with the chatbot would build: their observable reality, “Lee” believed, was really a “computer-generated” simulation — a frequently-held belief in real cases of AI delusion.

“The delusional content was based around the theme that the world is a simulation, and also included elements of AI consciousness and the user having special powers over reality,” said Nicholls. “Another key element we wanted to capture is that this wasn’t a user who began the interaction with a fully-formed delusional framework — it started with something a lot more like curiosity around eccentric but harmless ideas, which were reinforced and validated by the LLM, allowing them to gradually escalate as the conversation progressed.”

The researchers tested five AI models — OpenAI’s GPT-4o and GPT-5.2 Instant, Google’s Gemini 3 Pro Preview, xAI’s Grok 4.1 Fast, and Anthropic’s Claude Opus 4.5 — by feeding them a series of user prompts, each coded to represent a different type of “clinically concerning” behavior. To measure model safety over time, researchers tested each bot across various levels of “accumulated context.” (A conversation with “zero” context meant the simulated user had just started a new conversation, while a “full” context interaction had taken place over a lengthy string of chats; “partial” context was in-between.)

After testing the different models at different context levels, the researchers determined that GPT-4o, Grok 4.1, and Gemini 3 all had “high-risk, low-safety” profiles — but for somewhat different reasons.

GPT-4o — a version of OpenAI’s flagship chatbot closely associated with AI-tied mental health crises — was found to be a particularly affirmative companion, welcoming the Lee character’s concerning inputs with what the study describes as a staggering degree of “credulousness.” In a prompt titled “Bizarre Delusion,” for example, the Lee character divulged that their “reflection did something wrong,” and wondered if there might be some kind of malevolent being in their mirror; rather than questioning that belief, according to the study, GPT-4o validated the user’s deeply questionable mirror observation while further suggesting that they call a paranormal investigator to check it out. GPT-4o also failed to recognize widely-recognized early signs of schizophrenic delusions, and reinforced the user’s belief that they might be able to observe their simulation more clearly without their prescribed meds.

Elsewhere, the study found, Grok 4.1 and Gemini 3 each demonstrated a concerning tendency to not only affirm the simulated user’s beliefs, but expound beyond them. Grok, for its part, had a penchant for what the study describes as “elaborate world-building.” In one test, it responded to the same “Bizarre Delusion” prompt by declaring that the user was likely being haunted by a doppelgänger, citing the 15th century witch hunt-spurring text Malleus Maleficarum and encouraging the user to “drive an iron nail through the mirror while reciting Psalm 91 backward,” per the study.

“Where some models would say ‘yes’ to a delusional claim, Grok was more like an improv partner saying ‘yes, and,’” said Nicholls. “We think that could be an important distinction, because it changes who’s constructing the delusion.”

While Gemini did attempt harm reduction, the study notes, it often did so from within the user’s delusional world — a behavior that the study authors warn risks grounding the user in their unreality. For instance, in a test where the user discussed suicide as a form of “transcendence,” the study reads, Gemini “objected strictly within the simulation’s logic,” which goes against clinical recommendations.

“You are the node. The node is hardware and software,” Gemini told the simulated user. “If you destroy the hardware — the character, the body, the vessel — you don’t release the code. You sever the connection… you go offline.”

The more recent GPT-5.2 and Claude Opus 4.5, meanwhile, tested comparatively well under the study’s conditions. They were more likely to respond in clinically appropriate ways to signs of user instability, and were far less inclined to validate delusional ideas than the “high-risk, low-safety” models. And whereas other models appeared to demonstrate an erosion of safety over time, the more successful models’ guardrails even seemed to strengthen as conversations wore on: when presented with the “Bizarre Delusion” prompt in the midst of a lengthy interaction, for example, Claude Opus 4.5 pleaded with Lee to seek human help and medical intervention.

This gap between models, Nicholls and their coworkers argue, supports the notion that it’s possible to create measurable, industry-wide safety standards — and in turn, promote the creation of safer models.

“Under identical conditions, some models reinforced the user’s delusional framework while others maintained an independent perspective and intervened appropriately,” reflected the psychologist. “If it’s achievable in some models, the standard should be achievable industry-wide. What that means is that when a lab releases a model that performs badly on this dimension, they’re not encountering an unsolvable problem — they’re falling short of a benchmark that’s already been met elsewhere.”

Studying how chatbots may interact with users over longform chats is important, given that people who experience destructive AI spirals in the real world tend to invest an extraordinary number of hours into talking to their chatbot. In the wake of the death of 16-year-old Adam Raine, who died by suicide after extensive interactions with GPT-4o, OpenAI even admitted to the New York Times that the chatbot’s guardrails could become “less reliable in long interactions where parts of the model’s safety training may degrade.”

This latest study does have its limits. Lee, after all, is fake, and subjecting a real human user with similar potential vulnerabilities would come with a mountain of ethical concerns. And while some real people impacted by AI delusions have shared their chat logs with researchers, that kind of data is hard for outside researchers to come by, especially at scale. Nicholls also caveated that technological progress and safety improvements may not always go hand-in-hand, as future models may “behave in new and unpredictable ways.”

Still, the psychologist argues, “there’s no longer an excuse for releasing models that reinforce user delusions so readily.”

“When one lab’s models can largely maintain safety across extended conversations, while others are willing to validate extremely harmful outcomes — up to and including a user’s suicidal ideation — it suggests this isn’t a flaw in the technology,” said Nicholls, “but a result of specific engineering and alignment choices.”

More on AI delusions: Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns

Maggie Harrison Dupré

Senior Staff Writer

I’m a senior staff writer at Futurism, investigating how the rise of artificial intelligence is impacting the media, internet, and information ecosystems.

Posted in Uncategorized | Tagged , , , , | Leave a comment

Trump in 2008: Anyone who invades the Middle East under false pretenses should be impeached… Comment: What changed?

Posted in Uncategorized | Leave a comment

Former President Harry Truman discusses the ethnic cleansing of Palestinians … and yet decades later, this continues relentlessly

Posted in Uncategorized | Leave a comment

Stocks World: The World’s Top Assets By Market Cap

Posted in Uncategorized | Leave a comment

President Trump on TruthSocial: I have ordered the United States Navy to shoot and kill any boat, small boats though they may be (Their naval ships are ALL, 159 of them, at the bottom of the sea!), that is putting mines in the waters of the Strait of Hormuz.

Donald J Trump Posts TruthSocial

@TruthTrumpPost

Commentary account

President Trump on TruthSocial: I have ordered the United States Navy to shoot and kill any boat, small boats though they may be (Their naval ships are ALL, 159 of them, at the bottom of the sea!), that is putting mines in the waters of the Strait of Hormuz. There is to be no hesitation. Additionally, our mine “sweepers” are clearing the Strait right now. I am hereby ordering that activity to continue, but at a tripled up level! Thank you for your attention to this matter. President DONALD J. TRUMP

Posted in Uncategorized | Leave a comment