Sebastian Gomez-Pena, who is paralysed from the neck down, is one of the first people in the UK to receive Elon Musk’s brain chip. "It's magical," he tells Sky's @t0mclark3, "it's cool to see that when I think about moving my hand something actually happens."
OpenAI is betting in the biggest way possible on a future ruled by AI. It’s committing to spending well over $1 trillion to build out enormous data centers — despite business fundamentals lagging far behind, stoking fears over troubling days ahead.
During a town hall livestreamed on Monday, CEO Sam Altman admitted that the company was looking to pump the brakes, revealing that it plans to “dramatically slow down” hiring as it continues to burn through billions of dollars each quarter.
At the same time, Altman remained characteristically bullish about what his company’s tech will soon offer to the world. When asked if AI can be used to “solve economic gaps that have existed for decades,” the executive argued that it’s “going to be massively deflationary.”
“Given, certainly, progress with work you can do in front of a computer, but also what looks like it will soon happen with robotics and a bunch of other things, we’re going to have massively deflationary pressure,” he predicted.
As a result of this deflationary pressure, Altman promised that things would get “radically cheaper” and the “empowerment of individual people” would go up as money becomes more valuable — which, it’s worth noting, would be an inversion of virtually every economic system in history, nearly all of which have been inflationary.
Altman reasoned that these economic changes would be the result of AI allowing individuals to be vastly more productive. He argued that by the end of this year, an individual spending $1,000 on inference — essentially the cost of running an AI — could complete a piece of software in a short period of time, a task that would have previously taken a whole team a much longer period.
It’s not the first time Altman has argued that AI could make money more valuable. In March, he claimed that AI will have a deflationary impact on the global economy during a closed-door Morgan Stanley conference.
The broader argument that AI could lead to an age of “abundance” in which the cost of living starts to decrease — and that we could even choose not to work if we didn’t want to — has long been deployed by tech leaders, including Altman and xAI CEO Elon Musk, to drive the AI hype cycle.
But given the current state of the economy, such a point remains little more than a daydream. The reality is that AI is still incredibly far from boosting efficiency enough to offset inflation. Just earlier today, the US Federal Reserve held interest rates steady, citing ongoing concerns over “elevated” inflation.
In fact, AI has more frequently been linked to mass layoffs that make it harder to survive. Long-term unemployment hit a four-year high earlier this year as jobseekers struggled to find new work. The cost of living has also continued to climb, particularly in larger US cities.
Whether AI will come to the rescue and dramatically bring down prices remains to be seen, as uncomfortable questions surrounding the tech’s viability linger.
In short, there are plenty of reasons to remain skeptical of Altman’s claims that AI will put more buying power into each of our pockets as productivity goes stratospheric. He’s also gone as far as to argue that AI could cure cancer, solve climate change, and alleviate our financial struggles with “universal extreme wealth.”
Musk has similarly prophesied that “there will be no poverty in the future, and so no need to save money.” Anthropic CEO Dario Amodei has also argued that we could one day work far less as a result of AI.
It’s an enormous bet — and Altman and his counterparts have plenty still to prove as reality continues to lag behind their lofty promises.
Even Altman himself isn’t entirely convinced that sudden abundance will actually be a good thing for the average person.
“Massively more abundance and access and massively decreased cost to be able to create new things, new companies, discover new science, whatever…” he said during this week’s town hall. “I think that should be an equalizing force in society and a way that people who have not gotten treated that fairly get a really good shot.”
“As long as we don’t screw up the policy around it in a big way,” he warned, “which could happen.”
I’m a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.
The Rundown: Filmmaker Darren Aronofsky’s AI venture Primordial Soup released “On This Day… 1776,” a new series recreating the American Revolution using Google DeepMind, with each episode dropping on the 250th anniversary of the event it depicts.
The details:
- The short-form series combines AI-generated visuals with SAG-AFTRA voice actors, positioning itself as “artist-led” AI rather than being fully automated.
- Episodes drop on TIME’s YouTube channel, timed to the 250th anniversary of each depicted event.
- Aronofsky partnered with DeepMind in May to collaborate on AI storytelling, releasing the Veo-assisted film ANCESTRA in June at the Tribeca Film Festival.
Why it matters: AI video is creeping further into real production studio workflows, moving from simple shorts and tricks to hide faces toward handling the entire visual process. While it still might not be fully accepted or mainstream, the sentiment is shifting — and Hollywood’s once-uneasy use of the tech is coming more into focus.
🚨 JUST IN: In a devastating blow to the "experts," US steel production now EXCEEDS JAPAN for the first time in nearly 30 YEARS as a result of President Trump's tariffs
Vladimir Putin: “There is one pipe left of Nord Stream 2, it’s not damaged and can deliver 27.5 billion cubic meters of gas. It only takes a decision from the German government today, and tomorrow we turn on the tap, but they aren’t making it because Washington says no.”

Europe’s geniuses have now completely banned cheaper Russian gas so they can import more expensive LNG from the US, the country profiting from their deindustrialisation and the country that is actively seeking to annex the territory of a NATO country. A masterclass in economics and geopolitics by Europe!
U.S. & EU DRAW RED LINES FOR IRAN:
– No more uranium enrichment
– Cut down your missile stash
– Drop support for Hamas, Hezbollah, Houthis, all of it

These are the three demands on the table if Iran wants to avoid a potential military response. The ball is in Tehran’s court now.
ELON MUSK: "Our next product, Blindsight will enable those who have total loss of vision, including if they've lost their eyes or the optic nerve, or maybe have never seen, or even blind from birth, to be able to see again."
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images
We all know that tech companies keep tabs on everything about our online habits. But it’s another thing to actually be confronted with just how much data they have on you.
This was the experience of tech journalist Pranav Dixit, who experimented with using Google’s new “Personal Intelligence” feature for Gemini and its search engine’s AI Mode. And boy, did things get personal. The AI was able to dig up everything from his license plate to his parents’ vacation history, sometimes without it being directly requested.
“Personal Intelligence feels like Google has been quietly taking notes on my entire life and finally decided to hand me the notebook,” Dixit wrote in a piece for Business Insider.
Google rolled out Personal Intelligence to subscribers of Google AI Pro and AI Ultra last week. Once you opt in, the AI can scour your Gmail and Google Photos accounts, and a more powerful version released for the Gemini app earlier this month goes even deeper, raking through your Search and YouTube history, too. In short, if you’ve ever used Google for anything, it can probably dig it up.
This represents one way Google intends to keep its edge in the AI race. Unlike competitors such as OpenAI, it has decades’ worth of user data on billions of people. It can infer plenty from your Google searches alone, and your Gmail account is probably littered with confirmations and reminders for all kinds of life events, ranging from doctor’s appointments to hotel bookings to online purchases.
If the idea of letting an AI prowl through all this sounds like a privacy nightmare to you, you’re probably not wrong. Google, for its part, maintains that it’s being careful with your personal secrets, with VP Josh Woodward insisting in a recent blog post that it only trains its AI on your prompts and the responses they generate — not stuff like your photos and emails.
“We don’t train our systems to learn your license plate number,” he summarized. “We train them to understand that when you ask for one, we can locate it.”
Whatever the ethics, Dixit’s estimation is that giving the AI access to your data at least makes for a genuinely useful — and “scary-good,” in his phrasing — personal assistant.
When asked to come up with some sightseeing ideas for his parents, Personal Intelligence correctly inferred that they’d already done plenty of hikes on previous trips to the Bay Area, and suggested some museums and gardens instead.
Gemini told Dixit that it had deduced this from “breadcrumbs” including emails, photos of a forest they trekked in, a parking reservation in Gmail, and a Google search for “easy hikes for seniors.” It also figured out his license plate number based on photos stored in his Google library and scanned his emails to correctly report when his car insurance was up for renewal.
Privacy isn’t the only concern the feature raises. With this data, chatbots can sound more humanlike, giving the impression that they’re intimately familiar with users’ personal lives. That’s a dangerous road to go down amid reports of many people falling into delusional mental health spirals as they come to believe the AIs are trustworthy companions; Dixit touches on this when he complains about how he’d “pour my soul into ChatGPT and get a smart answer,” only for it to “forget I existed like a genius goldfish.” Experts have singled out ChatGPT’s “memory” feature, which draws on what you’ve said in previous conversations, as allowing it to seem too lifelike.
I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.
Parental firearm injury linked to surge in children’s psychiatric diagnoses
Trauma/post-traumatic stress disorder accounts for most of the increase, according to study
Mass General Brigham Communications
January 28, 2026 3 min read
Each year, 20,000 children and adolescents across the U.S. lose a parent to gun violence, while an estimated two to three times more have a parent who has been injured due to a firearm.
Investigators from Harvard and Mass General Brigham analyzed records from a large health insurance database and found that in the year following a parent’s injury, children had increases in psychiatric diagnoses and mental health visits, especially if the parent had suffered a severe injury.
“Our study draws attention to the way that firearm injuries reverberate through whole families, harming even those who were not injured directly.”
George Karandinos
The findings are published in the New England Journal of Medicine.
“Firearm injury is the most common cause of death in children and adolescents, but as horrific as this fact is, it represents only one way in which gun violence impacts young people,” said lead author George Karandinos, a research investigator in the Gun Violence Prevention Center of Massachusetts General Hospital, and an instructor in medicine at Harvard Medical School. “As a physician and anthropologist who has worked in areas with concentrated gun violence, I have seen directly how individuals and communities are affected at many levels. By zooming out and using population data, our study draws attention to the way that firearm injuries reverberate through whole families, harming even those who were not injured directly.”
For this study, researchers matched the records of children who had a parent who experienced a firearm injury with records of up to five control children of the same sex, geographic region, and insurance coverage and ensured that the two groups were also closely balanced overall in terms of children’s age and health risks. In total, 3,790 youth exposed to parental firearm injury and 18,535 controls were included, with an average age of 10.7 years.
The researchers found a sharp and persistent increase in rates of psychiatric diagnoses and mental health visits following parental firearm injury in the exposed group alone. Relative to the control group, exposed youth experienced a 42 percent increase in psychiatric diagnoses and a 60 percent increase in mental health visits in the year after parental firearm injury.
Mental health impacts of parental firearm injury were especially pronounced in female children and adolescents and for children with a parent whose injuries necessitated intensive-care unit treatment. Trauma/post-traumatic stress disorder diagnoses accounted for most of the increase, but mood disorders, including depression, also increased.
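To make the reported figures concrete: a relative increase like the 42 percent above is simply a rate ratio between exposed and control groups. The sketch below uses invented counts chosen only to reproduce that ratio (they are not the study’s actual event data; only the cohort sizes come from the article):

```python
# Hypothetical event counts for illustration only -- NOT the study's data.
# Cohort sizes (3,790 exposed, ~5 matched controls each) are from the article.
exposed_events, exposed_youth = 710, 3790
control_events, control_youth = 2500, 18950

exposed_rate = exposed_events / exposed_youth    # diagnoses per exposed child
control_rate = control_events / control_youth    # diagnoses per control child

rate_ratio = exposed_rate / control_rate         # relative rate
percent_increase = 100 * (rate_ratio - 1)        # "42 percent increase"
```

With these made-up counts, `rate_ratio` comes out to 1.42, i.e. a 42 percent higher diagnosis rate in the exposed group, matching the form of the study’s headline finding.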
The researchers explained that though their work may underestimate the true mental health impact of firearm violence because it only accounts for formal diagnoses and medical visits, it can nonetheless encourage broader recognition of the need for early, targeted interventions. Hospital-based violence intervention programs and improved communication with pediatricians caring for the children of firearm-injured parents are two options for improving access to mental health care for affected children.
“The mental health need that we have documented in this work is something that can be addressed by leveraging existing programs, while also improving interdisciplinary care coordination after firearm injury,” Karandinos said. “These interventions are intuitive, but funding and supporting them is essential to truly caring for the entire family.”
This research was funded by the National Institute for Health Care Management and the MGH Gun Violence Prevention Center.
Students use “AI Humanizer” tools to make AI text look human
There are telltale signs that AI wrote a piece of text, such as em dashes, unnatural-sounding sentences, monotonous tone, and feigned excitement. But what if AI could help make text sound human?
That technology exists: College students are already using AI “humanizers,” according to an NBC News report. As the name implies, these humanizers review text for traces of AI use and then suggest changes to make it look more human written.
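As a toy illustration of the kind of surface signal both detectors and humanizers key on, here is a minimal sketch. The marker list and threshold are invented for illustration; real detectors rely on statistical language models, not keyword counts, and as the article notes, they still misfire:

```python
import re

# Hypothetical surface markers often associated with AI-sounding prose.
AI_MARKERS = ["—", "delve", "tapestry", "furthermore"]

def marker_density(text: str) -> float:
    """Return hypothetical AI-style markers per 100 words (toy heuristic)."""
    words = re.findall(r"\S+", text)
    if not words:
        return 0.0
    hits = sum(text.lower().count(marker) for marker in AI_MARKERS)
    return 100 * hits / len(words)

def looks_ai_generated(text: str, threshold: float = 2.0) -> bool:
    # The threshold is arbitrary; a humanizer's job amounts to rewriting
    # text until scores like this drop below whatever a detector uses.
    return marker_density(text) >= threshold
```

A humanizer, in effect, runs the same kind of check in reverse: it flags the markers and then rewrites around them.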
A quick search for “AI Humanizer” online turns up endless options, including some from established companies such as Grammarly, which advertises its offering as “a tool that rewrites AI-generated text—like content from Grammarly, ChatGPT, or Claude—to improve clarity, flow and readability.” Quillbot offers a similar AI Humanizer tool that can be added to Chrome for easier access. Both are free to use, with many paid tools also available.
Though students turn to these AI humanizers to hide the fact that they used AI in the first place, many also use them to protect themselves against wrongful accusations of AI use, NBC News reported.
As generative AI tools became more popular, educators were met with the challenge of determining whether content was student- or AI-generated. As a result, they turned to AI plagiarism detectors, which are notorious for misidentifying AI use and have falsely accused many students. Studies have even found that these detectors are biased against non-native English writers.
Both educators and students in the report shared frustration with students having to prove that their work is authentic. Even students who have never touched AI tools are being wrongly accused, in some instances simply for handing in high-quality work.
Ultimately, the rise of AI humanizers is only a symptom of a larger problem: a cat-and-mouse chase in which, as AI systems become more advanced, so does paranoia about AI-generated content, discouraging students from bothering to produce good work at all.
A more permanent solution that goes beyond AI humanizers and detectors requires educators to shift assignments and testing to reflect the AI-first era we live in, such as moving more towards in-class assignments or testing for comprehension rather than execution.
The demand for AI humanizers highlights a continued reliance on AI tools. Even in instances where using AI could carry negative consequences, instead of pivoting away from it or learning to use it more collaboratively, such as for outlining essays, people are looking for tools that help them get away with having AI do the work. This is where AI literacy could help: ultimately, people need to understand that the negative impacts go beyond getting caught and could set back their own skill development.