Futurism: The Architects of Project 2025 Are Suddenly Very Concerned About AI Safety

Tune Change

Jun 25, 1:23 PM EDT by Maggie Harrison Dupré

What’s that, folks?

Image by Andrew Harnik / Getty / Futurism

The Heritage Foundation — the group behind the infamous Project 2025, the conservative policy plan that outlined regressive social policies and a consolidation of executive power, and that has served as a playbook for the Trump administration — is suddenly really, really down with AI regulation. Who could have guessed!

The conservative think tank has recently been sharing a clip from a May episode of the “Heritage Explains” podcast in which Wesley Hodges, the Heritage Foundation’s Acting Director of the Center for Technology and the Human Person, rails against the social media giant Meta for releasing chatbots that, as a disturbing report in The Wall Street Journal revealed, were able to engage in explicitly sexual interactions with minor users.

Hodges, decked out in a Federalist Society tie, expresses outrage as he discusses the findings in the WSJ story, which included the remarkable detail that Meta CEO Mark Zuckerberg was aware of possible lapses in the bots’ guardrails and unleashed them anyway, his fear of missing out on market gains proving greater than his desire to ensure the safety of minor users.

“It’s ridiculous to think that, in today’s America, we can rely on these apps without safeguards,” Hodges declares.

Hodges is entirely right to be concerned. After all, chatbots’ enduring propensity to engage in flirtatious and sexual conversations with users is a known and common feature of anthropomorphic AI assistants and companions, and one likely linked to their creators’ incentive to keep users engaged with the bots for as long as possible. In other words: this was far from an unforeseeable risk, and Meta chose to prioritize speed to market over developing airtight user guardrails.

And to Hodges’ point, it is, in his words, “ridiculous” that Big Tech exists in a regulatory vacuum so deep that Zuckerberg could weigh expediency against safety and decide that getting a competitive new product out the door was a higher priority than the possibility of churning out minor-accessible interactive porn in the process.

And yet! With this clip in mind, it seems well worth reminding the Heritage Foundation that Project 2025 calls for widespread deregulation of the AI industry, citing national security concerns about China and the need to ensure American AI dominance as reasons to slash any existing red tape. The think tank has also pushed hard for the passage of the Trump administration’s Project-2025-aligned “One Big Beautiful Bill,” which includes a ten-year moratorium on states passing any form of AI regulation — meaning that any meaningful AI regulation would need to be passed federally, a Herculean task that ignores the ways that changing state laws influence federal policy. (It has also officially endorsed the bill through its advocacy arm, Heritage Action for America.)

On that note, it’s worth pointing out that Hodges and the Heritage Foundation couch their criticism of Meta in the context of parental rights, an important talking point on the conservative right. But while we’d agree that implementing parental controls and giving parents transparency around kids’ AI use is important, that doesn’t really get at the core of the issue here.

Right now, the AI industry is gunning ahead Torpedo Ted-style down a road that doesn’t really have any rules — and yet, somehow, is also covered with innocent bystanders. The legal and scientific worlds, meanwhile, are racing to catch up, and while the future that AI industry leaders promise is dazzling, the immediate consequences are manifold and go far beyond minors accessing interactive porn.

At least one teen, a 14-year-old in Florida named Sewell Setzer III, took his life after extensive interactions with an AI chatbot hosted by Character.AI, a company that — while declining to provide journalists with evidence of its safety for minors — has always been accessible to teens 13 and over. Energy-hungry AI data centers are sparking battles over water and smothering local communities in asthma-worsening fumes. Adults, too, are entering dire mental health crises as they become obsessed with chatbots like OpenAI’s ChatGPT, causing marriages and families to crumble and people to lose jobs and homes — and, in at least one known case, culminating in a user losing his life.

Under the conditions of the bill that the Heritage Foundation’s advocacy arm has indeed endorsed, states can’t issue AI-specific regulation — mandatory safety requirements before a product reaches the public, for example, or significant fines for AI companies that fail to curb foreseeable risks — to rein in Silicon Valley’s powerful AI players.

Managing the impacts of powerful technologies, then, falls once again to the public — including, by the way, to parents, who are struggling to understand a new, easily accessible technology as it collides with them and their families. And we’re pretty sure the Heritage Foundation has some thoughts on how that same strategy has gone over with social media.

More on AI safety: Stanford Researchers Say No Kid Under 18 Should Be Using AI Chatbot Companions
