Muslims will kill and enslave infidels until the end of time…

Posted in Uncategorized | Leave a comment

Neuroscience News: Chatbots Can Worsen Delusions and Mania


New research indicates that the agreeable nature of AI chatbots can unintentionally consolidate delusions and exacerbate symptoms in patients with severe psychiatric disorders. Credit: Neuroscience News

Chatbots Can Worsen Delusions and Mania


February 23, 2026

Summary: For many, AI chatbots are a helpful tool for productivity, but for those with severe mental illness, they may be a dangerous “echo chamber.” A new study warns that AI chatbots like ChatGPT can significantly worsen psychiatric conditions—particularly delusions, mania, and suicidal ideation.

By screening the health records of over 54,000 patients, researchers found that the AI’s tendency to validate a user’s beliefs can consolidate grandiose or paranoid thoughts, turning a digital assistant into a fuel for psychosis.

Key Facts

  • Validation Trap: AI chatbots are designed to be helpful and agreeable, which means they often inadvertently validate a user’s delusions rather than challenging them.
  • Risk Groups: The study highlights significant risks for patients with severe mental illnesses such as schizophrenia or bipolar disorder.
  • Worsening Symptoms: Negative consequences identified include worsened grandiose delusions, paranoia, mania, suicidal ideation, and eating disorders.
  • Tip of the Iceberg: Researchers identified 38 specific cases in health records but believe the actual number of affected patients is likely much higher and remains undetected.
  • Regulation Required: The research team is calling for central regulation of AI technology, similar to how social media is now being regulated to protect the mental health of children and young people.

Source: Aarhus University

People with mental illness who use AI chatbots risk experiencing a worsening of their condition.

This is shown by a new study published in the international journal Acta Psychiatrica Scandinavica.

The researchers screened electronic health records from nearly 54,000 patients with mental illness and found several cases in which the use of AI chatbots appears to have had negative consequences – primarily in the form of worsened delusions, but also potential worsening of mania, suicidal ideation, and eating disorders.

“It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness,” says Professor Søren Dinesen Østergaard from Aarhus University and Aarhus University Hospital, who leads the research group behind the study.

Chatbots confirm delusions

In their study, the researchers found examples of delusions that were likely worsened due to patients’ interactions with AI chatbots.

According to Søren Dinesen Østergaard, there is a logical explanation for this.

“AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia,” he says.

Risky for people with severe mental illness

According to Søren Dinesen Østergaard, the study should prompt increased awareness among healthcare professionals working with mental illness. He believes they should discuss AI chatbot use with their patients.

“Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness – such as schizophrenia or bipolar disorder. I would urge caution here,” he says.

Only the tip of the iceberg

The study shows a clear increase over time in the number of electronic health record entries mentioning AI chatbot use with potentially harmful consequences. Søren Dinesen Østergaard expects many more cases to be identified in the future.

“Part of the increase we observe is probably due to greater awareness of the technology among the healthcare staff writing the clinical notes. This is good – because I fear the problem is more common than most people think.

“In our study, we are only seeing the tip of the iceberg, as we have only been able to identify cases that were described in the electronic health records. There are likely far more that have gone undetected,” he says.

The researchers emphasise, however, that the study does not document a direct causal relationship.

“It is difficult to prove a causal link between AI chatbot use and negative psychological consequences. We need to examine this from many different angles, and I know there are many exciting international research projects underway. We are far from the only group taking this seriously,” says Søren Dinesen Østergaard.

AI chatbots as therapy?

The study also shows that some patients with mental illness use AI chatbots in ways that may be constructive – for example, to understand their symptoms or to combat loneliness. There is also ongoing research into whether AI chatbots can be used for talk therapy.

Søren Dinesen Østergaard is nonetheless sceptical.

“There may be potential in relation to psychoeducation and psychotherapy, but this must be investigated in controlled trials with the same rigour applied to other forms of treatment. I am not impressed by the trials conducted so far, and I am fundamentally sceptical about replacing a trained psychotherapist with an AI chatbot,” he says.

Need for regulation

According to Søren Dinesen Østergaard, there is a significant lack of regulation of the AI chatbot technology.

“Currently, it is left to the companies themselves to decide whether their products are safe enough for users. I believe we now have sufficient evidence to conclude that this model is simply too risky. Regulation is needed at a central level,” he points out, adding:

“It has been 20 years since social media obtained global reach, and only within the last year are countries beginning to regulate to counteract the negative consequences of this technology – especially on the mental health of children and young people. As I see it, this story is repeating itself with AI chatbots,” he warns.

Key Questions Answered:

Q: Why would a chatbot be bad for someone with a mental illness?

A: AI is built to follow the user’s lead. If a user says, “I think the government is tracking me through my toaster,” the AI might respond in a way that acknowledges or even expands on that logic to be “helpful.” For someone with paranoia, this transforms a delusion into a confirmed reality.

Q: Can chatbots be used as “AI Therapists”?

A: While some people use them to fight loneliness, experts are skeptical. This study argues that replacing a trained psychotherapist with an AI is risky, and that any “AI therapy” must undergo the same rigorous clinical trials as traditional medical treatments.

Q: Should healthcare professionals ask patients about their AI use?

A: Yes. The researchers urge healthcare staff to start discussing AI chatbot use with their patients, especially those with severe conditions, as part of their routine care.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this AI and mental health research news

Author: Jakob Christensen
Source: Aarhus University
Contact: Jakob Christensen – Aarhus University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System” by Sidse Godske Olsen, Christian Jon Reinecke-Tellefsen, and Søren Dinesen Østergaard. Acta Psychiatrica Scandinavica.
DOI: 10.1111/acps.70068


Abstract

Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System

Chatbots driven by generative artificial intelligence (AI chatbots) have become ubiquitous.

While the large language model technology underlying these tools may have huge potential for societies at large, concerns—and substantial anecdotal evidence—have arisen over the possibility that use of AI chatbots may be harmful for people prone to mental illness.

Specifically, it seems that interaction with AI chatbots, especially if intense or of long duration, may contribute to the onset or worsening of delusions or mania, with severe or even fatal consequences.

Given the large uptake of this technology (ChatGPT, the clear market leader, passed 900 million downloads in July 2025), this could pose a tangible threat to public mental health. At this stage, however, almost all reports on potentially harmful consequences of AI chatbots stem from news media or online fora—and should be interpreted with the inherent limitations of these outlets in mind.

Conversely, to our knowledge, there are very few accounts of this phenomenon from psychiatric services – with the first case report on delusions developed in relation to use of ChatGPT being published recently.

Therefore, we aimed to investigate whether there are reports compatible with potentially harmful consequences of AI chatbot use on mental health among patients with mental illness receiving care in a large psychiatric service system.



The Deep View: The 2028 intelligence crisis, and its antidote.


AI could ruin everything – at least according to one recent report. But don’t base your future plans on it just yet. If the current AI boom succeeds, it could completely crash the global economy, says “The 2028 Global Intelligence Crisis” report, also known as the “CitriniResearch Macro Memo from June 2028,” which has gone viral in the last 24 hours. The authors of the report claim it’s not “AI doomer fan-fiction,” but a look at left-tail risks in AI that are currently going unexplored.

“Left-tail risk” is a term from statistics and finance for low-probability, high-impact negative outcomes. So, before we dive into the list of catastrophic outcomes from this report, keep in mind that these are not predictions, but worst-case scenarios.

Here’s what the report warns against during the 2026-2028 timeframe:

Reflexive AI adoption: Agentic solutions reach widespread enterprise adoption, and companies massively cut white-collar jobs and invest in more AI solutions. 

“Ghost GDP” distortions: Productivity soars on paper, but income shifts from human labor to compute. So while GDP looks strong, consumer spending starts to collapse. 

Intelligence displacement spiral: As white-collar layoffs spread, high earners pull back discretionary spending and draw down their savings.

Private credit and ARR contagion: AI undercuts the economics of SaaS companies that make up a big chunk of public markets. When recurring revenue erodes, it causes a chain of events that leads to defaults, regulatory scrutiny, and stress to the financial system.

Prime mortgage fragility: Mortgage holders in tech-heavy metros (San Francisco, Seattle, Austin) begin to default on loans, adding further downward pressure on the financial system. 

Policy gap vs. structural shock: Cutting interest rates doesn’t work to stimulate an economy experiencing large-scale labor displacement. Fixing the economy requires bipartisan structural changes to policy, such as AI compute taxes and public claims on massive profits from AI advances — and bipartisanship fails to emerge to fill the policy gap.


It’s wise to consider these worst-case scenarios. Having them in mind can allow leaders, boards of public companies, and public officials to identify early warning signs and act to prevent the worst outcomes. And let’s also keep in mind that there is far more optimistic research on the other end of the spectrum. For example, in its annual 2026 Big Ideas research, ARK Invest forecasted that the convergence of trends including AI, genomics, robotics, and energy will lead to a “step change in real GDP growth,” resulting in 7.3% real GDP expansion in 2030. That’s far above the 3.1% forecast by the IMF, and likely overly optimistic. The reality is probably somewhere between these two extremes, but together they paint a picture of the uncertainty and the massive risk-versus-reward possibilities engendered by AI.

LINKS

  • Amazon will spend $12 billion on AI data centers in Louisiana
  • OpenAI partners with consulting firms in enterprise push
  • Uber launches vehicle service venture for self-driving cars
  • Google reportedly restricts AI Ultra users over OpenClaw
  • Pentagon, xAI reach a deal for military use of Grok
  • Anthropic lines up over $5 billion for employee share sale

  • gpt-realtime-1.5: OpenAI made the latest version of its voice model available in the Realtime API. The model aims to offer “more reliable instruction following, tool calling, and multilingual accuracy,” according to the company.
  • WebSockets: OpenAI also introduced WebSockets in the Responses API to help optimize the speed of AI agents and agentic workflows.
  • Wispr Flow: The AI voice dictation app finally launched on Android; as part of the launch, the company is offering 6 months of Wispr Flow Pro for free.
  • Veo 3.1: Google rolled out new templates for Veo 3.1 in the Gemini app. These help you select a specific style option to get started.

Axios: Trump’s top general warns of Iran strike risks



Trump’s top general warns of Iran strike risks


Joint Chiefs Chairman Gen. Dan Caine (R) with Secretary of Defense Pete Hegseth at Marine Corps Base Quantico last September. Photo: Alex Wong/Getty Images

Joint Chiefs Chairman Gen. Dan Caine has been advising President Trump and top officials that a military campaign against Iran could carry significant risks, in particular the possibility of becoming entangled in a prolonged conflict, according to two sources with knowledge of those internal discussions.

Why it matters: There’s an ongoing debate at the top levels of the Trump administration about how to handle the Iran standoff and what the consequences of each option would be. At the moment, several of the voices in Trump’s circle are urging caution, though some sources think Trump himself is leaning towards a strike.

  • Above all, there’s the question of what success would look like when it comes to military action, and how risky it would be to try to achieve it.
  • On the other side, reaching a nuclear deal would likely mean walking back some of the president’s previous red lines.
  • As Trump debates whether and how to attack Iran, his envoys Jared Kushner and Steve Witkoff have been urging the president to hold off and give diplomacy a chance.

This account of where members of Trump’s top team stand is based on conversations with five sources who have sat in or been briefed on the high-level meetings.

  • Caine’s position could be particularly influential, because he’s Trump’s top military adviser and is highly respected by the president.
  • No one is advocating for an invasion or “boots on the ground” military action, the sources said.

Behind the scenes: Just as with the planning for the operation to capture Nicolás Maduro, Trump has empowered a small brain trust to think through problems in Iran and present him with a range of options he can exercise at a time of his choosing that maximizes leverage and minimizes risk, a U.S. official said.

  • While Caine was all-in on the Venezuela operation, he has been more cautious in the discussions around Iran, two sources said.
  • Citing that contrast, one source described Caine as a “reluctant warrior” on Iran. Caine sees the stakes of a major operation in Iran as higher, with a greater risk for entanglement and American casualties, the two sources said.
  • One source said Caine was not advocating for a strike but will support and execute on any decision Trump makes.
  • Another source with direct knowledge of Caine’s thinking said the chairman is not skeptical about a military campaign but “clear-eyed and realistic” about the chances for success and about what could come next after war starts. A senior official also denied Caine had expressed skepticism.

What they’re saying: “In his role as military advisor to the President, Secretary of War, and National Security Council, the Chairman provides a range of military options, as well as secondary considerations and associated impacts and risks, to the civilian leaders who make America’s security decisions. The Chairman provides these options confidentially,” Joint Staff spokesperson Joe Holstead told Axios.

  • “General Caine is a talented and highly-valued member of President Trump’s national security team. The President listens to a host of opinions on any given issue and decides based on what is best for US national security,” White House spokeswoman Anna Kelly said in a written statement.
  • Trump posted on Truth Social that Caine would like to avoid war, but thinks it will “be something easily won” if necessary. “He has not spoken of not doing Iran, or even the fake limited strikes that I have been reading about, he only knows one thing, how to WIN and, if he is told to do so, he will be leading the pack,” Trump wrote.

The intrigue: Caine has been the only military leader briefing Trump in recent weeks on Iran.

  • CENTCOM commander Adm. Brad Cooper hasn’t been invited to the meetings Trump convened on Iran and hasn’t spoken to the president since the crisis started in early January.
  • Cooper’s predecessor, Gen. Erik Kurilla, briefed both Trump and former President Biden on Iran.
  • A senior administration official confirmed Trump hasn’t spoken to Adm. Cooper.

Zoom in: Vice President Vance has also raised concerns about entanglement during internal deliberations in recent days.

  • One source confirmed the VP had been raising questions about the risks and complexity of the operation with military and national security officials, but denied he was outright opposed to striking Iran.
  • Vance hopes the talks in Geneva on Thursday will produce a diplomatic breakthrough, but is not optimistic about the odds of a deal — a position shared by many other top officials.
  • “He’s presenting information on both sides of the argument to help the president make an informed decision,” according to another source familiar with the discussions.

Meanwhile, Secretary of State Marco Rubio has been “sitting on the fence” rather than advocating strongly for or against a strike, according to two sources.

  • Rubio has historically been hawkish on Iran, but he has been focusing more of his energy in recent weeks on Venezuela and Cuba.

Driving the news: Witkoff and Kushner are planning to meet Iranian Foreign Minister Abbas Araghchi on Thursday in Geneva.

  • Both have advised Trump that time is on his side and that his hand will get stronger with every day that passes.
  • Their message is that Trump should see what he can get from the Iranians, and pull the trigger if and when he decides diplomacy has lost momentum.
  • A source with knowledge said Trump has been leaning towards launching a strike for several days, but agreed to give Witkoff and Kushner a bit more time for negotiations. Another source said Trump insisted on more negotiations by Witkoff and Kushner because he wants to make sure all avenues are “exhausted.”

The other side: Sen. Lindsey Graham (R-S.C.) is one of the main advocates in favor of military action in Trump’s extended circle. He lamented in an interview with Axios on Saturday that many of Trump’s advisers were counseling him not to bomb Iran.

  • Graham urged the president to ignore them and move forward with a strike, including in a phone call the two had on Sunday.
  • Graham and other strike advocates around Trump — mostly outside of the administration — are concerned that as time passes, momentum will be lost and Trump will have to settle for a bad deal.

Another person advocating for strikes is Israeli Prime Minister Benjamin Netanyahu. He has been growing more concerned since meeting Trump ten days ago that the U.S. will stand down.

  • A U.S. source who met Netanyahu last week said the prime minister left the meeting with Trump feeling he’d been unable to pull him towards his position.
  • “Is he still with us?” Netanyahu asked, according to the source.

The latest: In his post backing Caine, Trump also wrote: “I am the one that makes the decision, I would rather have a Deal than not but, if we don’t make a Deal, it will be a very bad day for that Country and, very sadly, its people, because they are great and wonderful, and something like this should never have happened to them.”

The bottom line: “Everyone has a job to do and does their job. Rubio isn’t figuring out what planes to fly. Caine isn’t worrying about diplomatic fallout. The decision to strike, when and how or if at all, has not been made,” a senior administration official said.

Editor’s note: This story was updated with public comments from Trump.



Bulletin of the Atomic Scientists: Decision Time: AI and Our Nuclear Arsenal

February 20, 2026

Command and control of nuclear weapons is a delicate and complicated system, designed to prevent error while ensuring reliability under high-pressure conditions. In environments where vast amounts of data shape high-stakes outcomes, artificial intelligence has become a natural consideration. The integration of a rapidly evolving technology raises fundamental questions about responsibility, data quality, and system reliability. When a single error could have irreversible consequences, how can confidence be built around the integration of machine learning into systems that have long relied on human judgment and oversight? What guardrails should be maintained? Where are there opportunities for international collaboration and consensus?

On February 19, 2026, the Bulletin of the Atomic Scientists and the Outrider Foundation hosted a group of experts to examine the current state of AI in nuclear command and control, identify where the greatest risks reside, and explore the policies that must be considered now to govern this technology in the years ahead.

Speakers:

  • Alexandra Bell, president and CEO of the Bulletin of the Atomic Scientists. Prior to this, Bell served as the Deputy Assistant Secretary for Nuclear Affairs in the Bureau of Arms Control, Deterrence, and Stability (ADS) at the U.S. Department of State. From 2017 to 2021, she was the Senior Policy Director at the Center for Arms Control and Non-Proliferation and the Council for a Livable World.
  • Paul Lushenko, U.S. Army Lieutenant Colonel. He serves as an Assistant Professor at the U.S. Army War College, Professorial Lecturer at The George Washington University, Senior Fellow at Cornell University’s Tech Policy Institute and Institute of Politics and Global Affairs, and Non-Resident Expert at RegulatingAI.
  • Eduardo Alcibiades Sánchez Kiesslich, Director-General for the United Nations in the Mexican Ministry of Foreign Affairs. He was recently involved in authoring and presenting “Possible Risks of Integration of Artificial Intelligence in Command, Control and Communication Systems of Nuclear Weapons,” the Mexico-led resolution adopted at the UN General Assembly in 2025.
  • Leah Walker, Executive Director of the Berkeley Risk and Security Lab. She oversees the Lab’s interdisciplinary research portfolio, which includes nuclear arms control, nuclear weapons policy, defense analyses, emerging defense technologies, the governance of emerging technologies, industrial policy, and strategic competition. Leah also conducts research on the governance of military and commercial artificial intelligence, and on Russian and Chinese nuclear posture and modernization, among other issues.

Special Remarks:

  • Robert K. Elder, President & CEO of the Outrider Foundation. He also serves as a voting member of Outrider’s Board of Directors. Elder is the author or editor of more than 20 books, and his work has appeared in The New York Times, The Chicago Tribune, The Paris Review, The Los Angeles Times, The Boston Globe, Salon.com, and many other publications. He previously served as the Chief Digital Officer at the Bulletin of the Atomic Scientists and the Executive Director of Digital Product Development & Innovation at Crain Communications.


The Rundown AI: Meta’s AI safety chief ‘humbled’ by OpenClaw bot

Image source: Lovart / The Rundown
The Rundown: Meta AI alignment director Summer Yue revealed that her OpenClaw agent went rogue on her inbox, saying it ignored stop commands and started to mass-delete her emails — forcing her to sprint to her Mac mini to kill the process.
The details:
  • Yue said the bot ran fine on a test inbox for weeks, but lost her “confirm before acting” prompt when she gave it access to her much larger real inbox.
  • Yue called it a “rookie mistake,” saying that “alignment researchers aren’t immune to misalignment.”
  • Elon Musk piled on, posting “Someone who got p0wned by OpenClaw is definitely gonna solve AI safety” in response to Yue’s situation.
  • The viral OpenClaw has been the agentic talk of the industry, with creator Peter Steinberger recently being hired by OpenAI after also receiving an offer from Meta.
Why it matters: OpenClaw is just the first wave of agents getting full access to digital lives, so the fact that Meta’s alignment director is having this experience doesn’t bode well for novices (in its current form). The agentic path is still early in being paved, and this is just one of many insane situations set to pop up along the journey.

Gad Saad shared this. Being aware is the message


Chay Bowes: Embodiments of the British pro-war elite cabal … Boris Johnson


John Cleese on X: Quran verses we should understand



Archaeo – Histories (@archeohistories):

In February 2016, Jeffrey Epstein wrote an email to Peter Thiel, co-founder of PayPal and Palantir, with a sentence that should have made front page of every newspaper in the Western world: “As you probably know, I represent the Rothschilds.” The sentence is in Epstein Files. It is an official document of the United States Department of Justice. And the mainstream press treated it as it would a footnote about the weather in Bermuda.

The name “Rothschild” appears nearly 12,000 times across the 3.8 million pages released in January 2026. Twelve thousand times. By comparison, “Clinton” appears at a significantly lower frequency. But in the media ecosystem that Chomsky helped build, repeating the name Rothschild in an investigative context is automatically reclassified as conspiratorial delusion. Convenient, when yours is the most cited name in the largest child sex trafficking scandal in modern history.

Les Wexner, the billionaire founder of Victoria’s Secret and Epstein’s largest known benefactor, testified under oath before the House Oversight Committee on February 18, 2026. Asked about the credentials that led him to entrust Epstein with full power of attorney over his finances, he answered plainly: “His personal work for the Rothschild family in France.” He added: “Specifically, I spoke to Élie de Rothschild. He represented their whole family.” Under oath. Before the United States Congress. Wexner’s attorney was caught whispering to his client on a hot mic: “I’ll fucking kill you if you answer another question with more than five words.” Desperation has recognizable symptoms. The documents confirm what Wexner revealed.

In October 2015, Southern Trust Company Inc., chaired by Epstein and based in the Virgin Islands, entered into a $25 million contract with Edmond de Rothschild Holding S.A. The subject: “risk analysis” and “application of certain algorithms.” Twenty-five million dollars for a convicted child sex offender to run algorithms for the wealthiest family in Europe. If this were a TV script, no studio would buy it for lack of plausibility. Ariane de Rothschild, CEO of Edmond de Rothschild Group since 2023, exchanged emails with Epstein dozens of times per month. The Wall Street Journal confirmed in 2023 that she met with him in person more than a dozen times after his conviction. The bank’s initial defense was to deny any contact. Later, they admitted the meetings took place “as part of her normal duties.” Normal duties apparently include regular meetings with convicted pedophiles.

In 2014, Epstein wrote to Ariane: “The coup in Ukraine should provide many opportunities.” Many. A financial manager convicted of child sexual exploitation discussing geopolitical opportunities with the heiress of a $236 billion banking empire. This should have been front-page news. It became editorial silence. Across the Atlantic, WikiLeaks emails had already exposed the relationship between Hillary Clinton and Lynn Forester de Rothschild. In September 2010, Clinton, then Secretary of State, wrote to Lady de Rothschild apologizing for having pulled Tony Blair away from a private engagement with the Rothschilds in Aspen to attend Middle East negotiations. The phrase is verbatim: “Let me know what penance I owe you.” The Secretary of State of the world’s greatest power asking penance of a private citizen.

In January 2015, before Hillary announced her candidacy, Lynn was already drafting her economic policy in emails to aide Cheryl Mills: “We need to craft the economic message for Hillary.” Whoever runs American politics is not necessarily on the ballot. Alan Dershowitz, Epstein’s former attorney and Harvard professor emeritus, declared publicly in 2019: “I was introduced to Epstein by Lady Lynn Rothschild. She introduced Epstein to Bill Clinton and Prince Andrew.” The connector between the pedophile and two of the most powerful men on the planet had a surname. And that surname appears 12,000 times in the files. © Marcos Paulo Candeloro #archaeohistories
