Al Jazeera: The released ‘Epstein files’ reveal how Jeffrey Epstein used student visas …


The Deep View: Goodbye virtual assistant, hello AI chief of staff
For over a decade, the tech industry has been racing to create the perfect virtual assistant. Now, the goal is rapidly shifting due to AI agents. The new north star is to create an AI chief of staff.
So, what’s the difference, and what does that look like? 
Siri, Alexa, Google Assistant, and countless other virtual assistants have been trying to cozy up to us for over a decade, automating tasks and reducing the number of steps to get things done. But they’ve mostly been pretty disappointing, limited to little more than announcing the weather, setting cooking timers, and helping dictate messages.
The new AI agent boom, led by Claude Code / Claude Cowork and OpenClaw, has changed the equation in just the past two months. Even just a few weeks ago, we were talking about how these new tools were finally launching the era of true “personal AI agents” that can perform routine tasks on our behalf. 
But the concept has advanced even further in recent weeks, especially driven by the OpenClaw community. The enthusiasts who are spinning up OpenClaw instances are racing ahead, giving us a picture of the future of agents. They have rapidly gone from trying to create super-smart, broadly skilled agents to orchestrating teams of agents, each with dedicated strengths and specific capabilities, not unlike human teams. Many of these folks now run a stack of Mac minis hosting multiple agents and sub-agents.
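To picture the orchestration pattern these hobbyists are converging on, here is a minimal sketch in Python. It is not OpenClaw's actual API (the piece doesn't document one); the Agent and Orchestrator classes, the skill names, and the routing logic are all hypothetical, just to illustrate a lead agent dispatching work to specialized sub-agents.

```python
# Hypothetical sketch of the "team of agents" pattern described above.
# None of these names come from OpenClaw; they only illustrate the shape.

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)  # what this specialist is good at

    def handle(self, task):
        # A real agent would call a model or a tool here.
        return f"{self.name} handled: {task['description']}"

class Orchestrator:
    """Lead agent: routes each task to the sub-agent whose skills match."""
    def __init__(self, team):
        self.team = team

    def dispatch(self, task):
        for agent in self.team:
            if task["skill"] in agent.skills:
                return agent.handle(task)
        return f"no specialist for {task['skill']!r}; lead agent handles it"

team = [
    Agent("inbox-agent", ["email"]),
    Agent("slack-agent", ["slack"]),
    Agent("research-agent", ["web_search", "summarize"]),
]
chief = Orchestrator(team)
print(chief.dispatch({"skill": "email", "description": "triage overnight inbox"}))
```

The design mirrors what the piece describes: narrow specialists instead of one broadly skilled agent, with a coordinating layer deciding who does what.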
And that’s where a different concept has now emerged: the AI chief of staff.
I first heard the term from Jason Calacanis on This Week in Startups. Speaking recently with Mitesh Agrawal of Positron AI, Calacanis said, “This is going to save you inbox management [and] Slack management. These are the chores that you typically, as CEO, would hire a chief of staff [or] an executive assistant to do. You’ve now used an OpenClaw agent to do [it].”
Agrawal reported that it’s saving him an average of at least 30 minutes per day. Calacanis pointed out that’s over 3 hours per week (for a startup founder) and 150 hours per year, which is the equivalent of getting 3 extra weeks per year. That kind of time savings compounds over time, and it’s likely to accelerate as the technology improves and gets easier for more people to access. 
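As a sanity check on those figures, here is one hedged way to reconcile them; the seven-day usage pattern and the roughly 50-hour, 50-week founder year are my assumptions, not numbers from the episode.

```latex
% Hedged reconstruction; the 7-day schedule and the ~50 h, 50-week
% founder year are assumptions, not figures from the episode.
\[
  30\,\tfrac{\text{min}}{\text{day}} \times 7\,\tfrac{\text{days}}{\text{week}}
    = 3.5\ \text{h/week} \quad (\text{``over 3 hours''}),
\]
\[
  3\ \text{h/week} \times 50\ \text{weeks} = 150\ \text{h/year},
  \qquad
  150\ \text{h} \div 50\,\tfrac{\text{h}}{\text{week}} = 3\ \text{weeks}.
\]
```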
Another frame Calacanis used for the power of this chief-of-staff phenomenon was democratization: services previously available only to the ultra-wealthy. He compared it to the way Uber can act like your personal chauffeur, Airbnb can function like having vacation homes in beautiful cities around the world, and JSX can make it feel like having a private jet. A chief of staff is someone who knows your needs and preferences and can act independently to make important tasks happen on your behalf. It was previously something limited to presidents, heads of state, and CEOs. That’s a lot for AI agents to live up to. But if they can pull it off, it will become one of the most powerful upgrades to everyday life that the current AI boom can deliver.

HomeSweetHome: Wake up Ireland. Homelessness is a crisis.


Netflix: Hitler, Nuremberg and a journalist we know so little about. An interview: “William L. Shirer” (1960), starring Mike Wallace


Futurism: AI Will Destroy Millions of White-Collar Jobs in the Coming Months, Andrew Yang Warns, Driving Surge of Personal Bankruptcies


“Do you sit at a desk and look at a computer much of the day? Take this very seriously.”

By Joe Wilkins

Published Feb 19, 2026 9:01 AM EST

Illustration by Tag Hartman-Simkins / Futurism. Source: Marco Bello / Getty Images

Andrew Yang — millionaire entrepreneur, noted Ivy Leaguer, and one-time presidential hopeful — has a grim warning for his fellow salaried professionals: AI is about to wipe out “millions” of office jobs over the coming months.

In an essay published on his Substack and flagged by Business Insider, Yang laid out what he calls the “great disemboweling of white-collar jobs” due to AI.

“Do you sit at a desk and look at a computer much of the day?” he challenged. “Take this very seriously.”

“This automation wave will kick millions of white-collar workers to the curb in the next 12-18 months,” Yang wrote. “As one company starts to streamline, all of their competitors will follow suit. It will become a competition because the stock market will reward you if you cut headcount and punish you if you don’t. As one investor put it, ‘sell anything that consists of people sitting at a desk looking at a computer.’”

Yang predicts that mid-career office professionals will be among the first to go. Right now, there are around 70 million office workers in the United States, but “expect that number to be reduced substantially, by 20-50 percent in the next several years,” the entrepreneur warned.

Yang went so far as to urge anyone in mid-career management, particularly those who own homes in the affluent suburbs of Silicon Valley or Westchester County, New York, to put their houses up for sale now to avoid the mad scramble once the labor apocalypse hits. “It might not feel great being first, but you don’t want to be last,” Yang wrote.

Going on, Yang predicted that personal bankruptcies will “surge” as office workers struggle to find gainful employment to maintain their lifestyles. This, he says, will also come for service workers downwind of office labor — those employed as dry cleaners, hair stylists, and dog walkers.

The “great disemboweling” will likewise impact recent college grads — a cohort already suffering through a brutal hiring market in the US — according to Yang.

All of this will result in even greater unrest and angst, as the wealth generated by the AI spending boom will largely go to the few CEOs and executives at the top of the food chain. “Imagine what people are going to think when we all feel like serfs to AI overlords that have soaked up the white-collar work?” Yang posits.

At the end of the day, Yang’s pronouncement reads like a lot of the other AI doomsaying out there, much of which comes, ironically, from the tech moguls themselves. Like those prophecies, Yang’s runs out when it comes time to supply a constructive answer to the world-upending crisis, beyond some vague idea of universal basic income.

“Expect it to get incredibly, intergenerationally rough out there,” Yang concluded. “Batten down the hatches, and do what you can for yourself and those around you.”

More on AI: It Turns Out That Constantly Telling Workers They’re About to Be Replaced by AI Has Grim Psychological Effects



Axios: New wave of mutating drugs

Illustration of a skull made out of different types of pills.
Illustration: Allie Carl/Axios
 
Overdose deaths are falling. But America’s illicit drug supply is evolving into lethal cocktails: fentanyl plus stimulants, sedatives and novel synthetics that hide in party powders and pressed pills, Axios’ Russell Contreras reports.

Why it matters: Those polydrug blends — with nicknames like “pink cocaine,” “rhino tranq” and “benzo-dope” — are harder to detect, reverse and warn against. Some can even result in the loss of limbs.

That makes recent overdose declines fragile and public-health victories reversible if policy, testing and treatment don’t catch up.

“When we crack down on one drug, the market innovates,” Sheila Vakharia, managing director of the Department of Research and Academic Engagement at the Drug Policy Alliance, tells Axios.

Catch up quick: The Trump administration has highlighted record fentanyl seizures at the southern border and credited tougher enforcement with helping drive overdose deaths down.

Yes, but: Experts say the domestic drug market is adapting in real time.

Targeting fentanyl has incentivized illegal drug suppliers to experiment with other compounds that bypass detection, creating what Vakharia describes as a “whack-a-mole” cycle.

Axios: Coming Pain at the Pump – Iran

Weekly U.S. regular gasoline prices rose from $2.50 per gallon in February 2021 to a peak of $5.01 in June 2022, then declined steadily to about $2.92 by February 2026. Data: EIA. Chart: Axios Visuals

The cost to fill up your car will probably climb soon — and the oil price spike over Iran tensions isn’t the only reason, Axios’ Ben Geman writes:

A seasonal cost rise typically starts at the end of February or early March as spring-break season kicks off and refiners start producing costlier fuel.

Conversation: Could Andrew Mountbatten-Windsor’s arrest bring down the British Monarchy?


Authors

1. Jo Coghlan, Associate Professor, Humanities, Arts and Social Sciences, University of New England
2. Huw Nolan, Animal Welfare Scientist and Pop Culture Researcher, University of New England
3. Lisa J. Hackett, Senior Lecturer, Cultural History, University of New England

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.


When a royal faces scrutiny, it can feel like a rupture with tradition. Yet across the ages, British royals have repeatedly fallen under suspicion. What makes the arrest of Andrew Mountbatten-Windsor so striking is that we have to reach back to the 17th century to find anything comparable.

The royals are by no means strangers to scandal, but allegations of law-breaking are another matter entirely. Mountbatten-Windsor’s fall from grace will have huge repercussions for the British royals, and it also gives us an insight into how the handling of the royals has changed since Queen Elizabeth’s death.

When the crown fell

This is not the first time the British royals have crossed paths with the law. In 1483, Richard III was implicated in the disappearance of his nephews, the Princes in the Tower. The two princes were legitimate heirs and therefore direct threats to Richard’s claim to the throne. He was never tried in court, and historians still debate the evidence.

The most dramatic confrontation between monarchy and law came with Charles I. He was accused of treason during the English Civil War, arrested, tried, and publicly executed in 1649. This act stunned Europe and shattered the belief that royals were above the law.

As a consequence, England abolished the monarchy and became a republic under Oliver Cromwell. So the last time a member of the royal family was arrested and tried, the crown itself fell.

That precedent matters because it underscores how rare royal arrests are. For more than three centuries the monarchy has avoided that spectacle. The fact that Andrew’s arrest forces comparison with Charles I shows how extraordinary the moment is.


Reputation as royal strategy

By the 19th century, the monarchy survived less through force and more through reputation. Under Queen Victoria (1837-1901), the crown cultivated domestic virtue and moral seriousness as a shield against instability. Respectability became a strategic defence against scandal.

However, fame and power inevitably attract intense public interest, and scandals made their way into print culture and later mass media. Prince Albert Victor, the grandson of Queen Victoria, was accused of being Jack the Ripper. It’s a claim historians have largely rejected as conspiracy theory, yet it persists because it speaks to fears about royal cover-ups.

James II was removed from the throne in 1688 during the Glorious Revolution amid claims he undermined Protestantism and promoted Catholic officials. His perceived abuse of power, rather than a single prosecutable crime, cost him the throne.

In the 20th century, Edward VIII generated a different kind of unease. After his abdication in 1936, evidence emerged of his sympathy toward Nazi Germany, underscored by his 1937 meeting with Adolf Hitler in Germany. While there was no prosecution, it caused serious damage to Edward’s standing and public trust.

The collapse of deference

For much of the 20th century, the monarchy operated within a culture of deference. The press refrained from reporting on royals’ private lives, and indiscretions were quietly managed. The arrangement insulated the royal family from sustained exposure. However, this began to change after a series of scandals in the 1990s, which eventually led Elizabeth II to call 1992 her annus horribilis.

The rise of tabloid journalism eroded old boundaries, and digital media dissolved them entirely. Silence now intensifies suspicion rather than calming it, as was the case with royal silence about the Princess of Wales’ health in early 2024, which ultimately forced the palace to go public with her cancer battle.

Influence, access and optics

Even before Andrew Mountbatten-Windsor’s arrest, the optics were damaging.

His arrest lands in this transformed landscape. During his tenure as the United Kingdom’s Special Representative for International Trade and Investment, he cultivated relationships with political leaders and wealthy business figures across the Middle East and Central Asia. Critics questioned whether he blurred the line between official trade promotion and private networking.

The 2010 “cash for access” episode involving Mountbatten-Windsor’s wife Sarah Ferguson deepened that perception. She was filmed offering introductions to Andrew in exchange for substantial payment. Although she apologised and Andrew denied involvement, the imagery of monetised proximity to the crown was corrosive.

In 2021, an undercover investigation suggested the queen’s cousin Prince Michael of Kent was prepared to use his royal status to assist a fictitious company in exchange for payment. He denied wrongdoing, but the harm was done.

A brand without insulation

Under Elizabeth II, longevity conferred authority and steadiness that often softened scandal. Under Charles III, the institution appears more exposed. Mountbatten-Windsor’s arrest disrupts the institution and exposes the royal family to reputational damage. While he was later released, the scandal still has a long way to play out.

Charles is a constitutional monarch. He can’t interfere in police investigations or prosecutorial decisions without provoking a constitutional crisis. His authority is symbolic rather than executive.

But he can push Andrew and his inner circle, including his daughters, further from public life. He has already stripped his brother of his royal titles and told him to leave his home, Royal Lodge.

Yet even that has limits. Charles’s power now rests less on control than on credibility. In a permanently watchful society, judgement is delivered not in private but in full view.

The precedent that lingers

The last time a reigning monarch was arrested, England abolished the monarchy and became a republic. The historical echo is impossible to ignore. It reminds us that when the crown becomes entangled with criminal process, the consequences resonate beyond the individual.

Mountbatten-Windsor’s arrest underscores how fragile that trust can be and how decisively it is shaped by the court that really matters: public opinion. While Andrew is not the king, the scandal might have been softened had his brother Charles acted more decisively, and sooner, to remove him from the inner circles of the monarchy.

Royal scandals chip away at the sense of mystery that has long protected the crown. The monarchy survives not because it holds real political power, but because it represents stability, dignity and something slightly removed from everyday life.

When royals are caught up in scandal, that sense of distance collapses, and the institution can begin to feel more fragile than untouchable.


Axios: The key to problem-solving

Illustration of a brain with arms and legs swinging happily on a swing set
Illustration: Sarah Grillo/Axios
 
Tell a workaholic: Play isn’t just good for your health — it could make you more productive at work, Axios’ Carly Mallenbaum writes.

Why it matters: Taking play breaks can reduce stress, strengthen teamwork, and boost creativity and problem-solving.

Being in a high-pressure, serious state can put your brain into a fear mode that shuts down cognitive functioning, says Piera Gelardi, Refinery29 co-founder and author of the forthcoming book “The Playful Way” (out in April).

“Whereas when we’re in a playful state, we’re in a more exploratory, open-minded, experimental place, and that’s usually where solutions come from,” she says.

🧩 “Play deprivation” could affect your mental health, according to Stuart Brown, clinical researcher and founder of the National Institute for Play.

Adding some play to your workday can look like:

✏️ Stepping away to doodle for a few minutes.
🎶 Listening to your favorite song — without scrolling on your phone.
🌳 Going for a walk to seek awe in the outdoors.

To find more opportunities to play, identify the grown-up version of the thing you loved as a kid, Gelardi says.

If you spent hours dressing up dolls, explore your personal style now.

If playing house made you lose track of time, people-watch and imagine their backstories.
If you thrived on sports teams, build more movement into your day.

💭 Carly’s thought bubble: My two favorite play activities are…

Going to movement class or another kind of sober dance party.
Running around the house with my curious, energetic toddler.

The Harvard Gazette: ‘Harvard Thinking’: Preserving learning in the age of AI shortcuts


Illustration of a boy reading with class in background.
Illustrations by Liz Zonarich/Harvard Staff

Samantha Laine Perfas

Harvard Staff Writer 

February 18, 2026

In podcast, teachers talk about how they’re using technology to supercharge critical thinking rather than replace it 

Concerns about artificial intelligence supplanting human thinking are rising amid the exponential growth of generative AI’s capabilities in recent years. One thing is clear: The technology is not going away.

“I feel it would be irresponsible for me not to embrace it entirely, both in the classroom and in research,” said Michael Brenner, the Catalyst Professor of Applied Mathematics in the School of Engineering and Applied Sciences. In his view, anyone who doesn’t is going to lag in their careers and their ability to advance science. “It’s just changed everything.”

Reports of AI harming the social, emotional, and cognitive development of students need to be taken seriously, said Tina Grotzer, a cognitive scientist at the Graduate School of Education. The challenge for educators is determining how to incorporate the technology in the classroom so that it enhances learning rather than replaces it.  

Ying Xu, an assistant professor at the Graduate School of Education, meanwhile, encourages parents who are worried about how to introduce AI to their kids to consider it in relation to “the larger ecosystem of a child’s life,” including building healthy relationships, spending time outdoors, and pursuing hobbies.

In this episode of “Harvard Thinking,” host Samantha Laine Perfas talks with Brenner, Grotzer, and Xu about how educators and parents can harness AI’s educational benefits while mitigating potential risks. 


https://player.simplecast.com/4de771cf-179f-448d-bc02-3701a8a74342?dark=false


Listen on: Spotify Apple YouTube


The transcript

Tina Grotzer: Once you start to know what your mind can do that’s so much better than AI, it kind of makes sense that some tasks are well-relegated to AI and other tasks are not. That is going to be a constant challenge to figure out those relationships and lines over time.  

Samantha Laine Perfas: As generative AI tools become more ubiquitous, there’s a debate around whether or not they should be embraced in spaces of learning. Recent reports suggest that the risks of using these tools might outweigh the benefits, threatening cognitive development by doing the thinking for their users. Homework that used to take hours of practice and comprehension is now completed in minutes, potentially undercutting students’ development of basic skills. This is forcing educators into a dilemma: How do they make the most of AI’s potential, while also protecting students’ ability to think for themselves? 

Welcome to “Harvard Thinking,” a podcast where the life of the mind meets everyday life. Today we’re joined by: 

Michael Brenner: I’m Michael Brenner. I’m the Catalyst Professor of Applied Mathematics in the School of Engineering and Applied Sciences at Harvard.  

Laine Perfas: He is also a research scientist at Google, where he runs a team that studies how large language models can accelerate science. Then: 

Grotzer: Tina Grotzer. I am on the faculty at the Graduate School of Education.  

Laine Perfas: She’s a cognitive scientist and studies how people learn. And our final guest:  

Ying Xu: Ying Xu. I’m an assistant professor of education at Harvard Graduate School of Education.  

Laine Perfas: She researches how to best design and evaluate AI technologies so they enhance children’s development rather than harm it.  

And I’m Samantha Laine Perfas, your host and a writer for The Harvard Gazette. Today we’ll talk about the role of AI in our educational settings and how we might best use it to enhance our learning rather than replace it. 

There are concerns that generative AI tools will increasingly affect our ability to learn. So I want to talk about the act of learning itself and critical thinking. What is it that we’re afraid will be affected?  

Grotzer: We’ve been a knowledge economy, so a lot of what we do in schools is about learning facts, memorizing things, getting a lot of information that we can then hopefully transfer forward and use in the real world. But learning also involves thinking about your mind, how we use our minds well, understanding our minds, understanding what critical thinking is, what creative thinking is. And so much of it we do so naturally that I think kids don’t even realize that they need to learn to do it. They don’t reflect on the amazing learning that their minds are doing day after day. That’s really important for this conversation. 

Xu: We are thinking about learning along two dimensions. One is what they learn, like the facts and the information. And the other one is their ability to learn. And those are really the foundational capacities that allow students to acquire new knowledge and skills in the future, and critical thinking is one of these very important foundational capacities. 

Brenner: I’m very interested in what Ying and Tina just said. I see the problem from maybe the other side, which is that my teaching focuses on advanced undergraduates and graduate students who are basically trying to learn to make models to understand or improve the world. For example, making large language models, the foundation of generative AI. And my career has really been driven by making scientific advances and trying to make discoveries using all of the tools in actual practice. And I should say upfront before I say the next thing that I am quite concerned about generative AI and its use and what it’s going to do to the people learning basic skills. Having said that, I feel it would be irresponsible for me not to embrace it entirely, both in the classroom and in research. And the reason is that anyone who doesn’t embrace it is going to lose. They’re going to get behind in terms of their careers, they’re going to get behind in terms of not making the greatest advances; it’s just changed everything. I’m excited that Tina and Ying are here because I’ve always worried that I don’t know how to teach people how to do the chain rule, which is the first thing you have to learn when you do calculus. But once you know that, then you need to proceed as I’m describing. But I have no idea what we’re supposed to do about actual learning.  

Laine Perfas: I think that brings up an important point. Learning is obviously a lifelong endeavor, so when we talk about AI tools, I imagine there is a huge difference between incorporating them into an elementary, middle, or high school classroom versus higher education. Does it matter the age at which these tools are being introduced and incorporated into learning?

Grotzer: I work with a lot of teachers who are thinking about instructional design, and what I often see in their work is that for the youngest children, they think that the kids need to learn a lot of information, and then once they have a lot of information they can learn how to think with it. And I think that’s unfortunate because you see the very youngest kids experimenting, exploring, demonstrating curiosity. This is all to Michael’s point, really, that AI is in the world, and why would we stop them from learning about the things that are out there? I do think that the very youngest children don’t have a reflective sense of what it is. But they’re going to interact with it, so I think figuring out how it can be a part of their world and a part of their exploration and their play without creating assumptions around — you know, it’s not really your friend, the anthropomorphizing of AI, things like that — are important. When we start to look at placing developmental limits, that becomes problematic when we’re talking about learning in the real world every day.  

Xu: When I talk to parents, the question that gets asked the most is at what age should we introduce AI toys at my home and at what age it starts to be safe for my kids to interact with AI tools. And what I would say is, “What kinds of tools are we thinking of introducing?” There are some tools that are more specialized, like the ones designed specifically to teach phonics and to support math and science. And there are also some tools that are designed with kids in mind that support this safe exploration, like Tina just mentioned. But there is another category that is more of this general tool that could be an assistant for almost everything. I do feel that to be able to use this kind of general tool, it does require some cognitive skills that our students need time to develop. For example, what we found was self-regulation is one of the very important skills that students need to be able to interact with AI tools effectively because they need to actually make a plan. And this is the kind of learning that I wanted to engage: I am going to do the thinking myself but only use AI for some scaffolding. And this needs some regulation to control the temptation: “I’m not going to ask AI for answers for all the questions.” 

We did a survey with 7,000 high school students. We asked them, “Do you feel that you are relying on AI too much for your learning?” And almost half of those students said that, “Yes, I feel that I’m using AI a bit too much.” And then over 40 percent said, “I tried to limit my usage, but it was so difficult I failed.” 

Laine Perfas: It’s interesting that there is the self-identification of I don’t want to be using it as much, and also, I can’t help myself, which I think may be why technology in general can be such a tricky thing for young people. It seems that there’s this tension: AI is in the world. It’s going to be around us. How do we bring it in in a way that enhances learning versus replaces learning?  

Brenner: I think that because we have AI, students should do more, they should solve harder problems. They should learn more. That’s my answer. And I’ve done this in various ways through classes at Harvard. One example is from a class that I teach, which is a graduate class in applied mathematics, which has always been thought of as one of the harder classes that we have.  

“I think that because we have AI, students should do more, they should solve harder problems. They should learn more.” 

I taught it last spring, and the day before I went to lecture, just for fun, I stuck my last homework problem set into Gemini and asked it if it could solve it. And it basically solved everything. I was quite shocked because these are things that people spend their lives learning how to do. But I decided that I would walk in and just say to the students, “Well, this now happened. And so my syllabus is now invalid.” Basically, I’m not going to read problem sets given by ChatGPT. And so instead what I did is I completely changed the class and made two different things, which I think raised the level significantly. One is that every week, instead of giving them a problem set where I gave problems and asked them to solve it — which is the way I’d always done it — I instead told them that they needed to invent a problem that was in a particular category, and that the problem had to have the characteristic that the following chatbots could not solve the problem. And if they found a problem that the best chatbot couldn’t solve, they got extra credit. Meanwhile, they had to verify that their solution was correct by doing certain numerical calculations, and they had to convince someone else in the class that it was correct. We did that every week. There were 60 students in the class. By the end of the semester, we had 600 problems that, for the most part, the best chatbots at the moment couldn’t solve. We as a class wrote a paper about this and it was just published with everybody in the class as authors.

But then that was not enough, because of course, teaching students, you still have to know that they understand. So what I did — and I’ve now started doing — is in lieu of a final exam, I basically carried out oral exams with each of the students in which they had to walk in, and I said, “You’ve invented these 10 problems,” and I made them go to my blackboard, and I picked one — I didn’t let them choose — and I made them go and solve it, basically, then explain to me how they came up with it. I learned more about the students than I ever have in the past in such a class. Subjectively, they knew more than anyone that I’d ever taught before because they were forced to go and read things that they ordinarily wouldn’t need to read because they had to push the state of the art.  

Grotzer: I have an assignment that I give to my students that I’ve given them for the last 20 years, and it has a number of points in the semester where they turn it in and they get feedback. And this was the first year where I could clearly see significant use of AI in the assignment. And what was supposed to be a 20- or 30-page assignment — I’m reading 60 pages of glop, and it’s not terribly thoughtful.

We had a long talk about, if you are designing instruction for the next generation, do you want to design the very best instruction that you can? You’re paying to come to school to learn how to do these things. Do you want to leave without pushing at the edges of your own learning and your own competence? So let’s talk about the ways that you’re using this. Some of the ways that students used it made a lot of sense. They had it quiz them about things they were trying to do. They asked for feedback on certain aspects, and they explored possibilities and then went through and selected the ones that were most powerful for their own instructional designs. Very interestingly, some of them also used it to create different perspectives on something they were creating: If you were a parent reading about this assignment, what would you wonder about? If you were a kid reading this assignment, what problems would be confusing to you? I think it’s really encouraging them to continue to work at the edge of their competence. Use it as a tool to transform what they’re doing, which is really what I was hearing from Michael when he’s thinking about these assignments he’s getting that are just much more advanced than you otherwise would’ve gotten.

One of the issues is that we have a crisis of purpose in education, and we need to be rethinking what it is that young people learn as they progress through our educational system. Ying talked about the importance of self-regulation, so that’s important all the way through. We can all get behind that, but they’re always going to be in a world where there are going to be new innovations, and the innovations are going to get better and better in some respects. I think it forces us to look at what it is that we’re teaching and why.  

Xu: I wanted to linger on the purpose of education. In the same survey of 7,000 high school students in which we asked, “What is the purpose of your learning?” we asked whether they feel that learning math and learning English is still as important as it was before AI was introduced. Not surprisingly, we saw a huge decrease in students’ motivation in those two important subjects. This is a wake-up call and a moment for educators to think about how we could restructure education and restructure the classroom and learning so that it’s more relatable for what students want to do in their life.

“This is a wake-up call and a moment for educators to think about how we could restructure education so that it’s more relatable for what students want to do in their life.” 

Laine Perfas: Thinking about the purpose of education and classrooms and how they are needing to evolve to account for this technology, which is a tale as old as time. We had to evolve when calculators were invented, when computers were invented. But just thinking about that and thinking about the traditional assignment as we have known them or as I knew them growing up, so many of them were transactional in nature. An example might be writing an email. It’s not about, “I want to have a really good grasp of language so I’m a good communicator.” It’s, “I want to get this email written as quick as possible.” I’m wondering how you see things needing to switch so that students remember that the point is learning, versus getting work done as efficiently as possible. 

Grotzer: I’d like to put a plug in for metacognition, which is understanding and thinking about your own mind and your own thinking, as one of the shifts that we need to make in terms of the purposes of education. We talk a lot about getting to know how human-embodied minds are powerful. And we contrast that with what AI can presently do. Once you start to know what your mind can do that’s so much better than AI, it makes sense that some tasks are well-relegated to AI and other tasks are not. That is going to be a constant challenge to figure out those relationships and lines over time, especially given how much AI is advancing and that it’s really a moving target.  

“Once you start to know what your mind can do that’s so much better than AI, it makes sense that some tasks are well-relegated to AI and other tasks are not.” 

In my “Becoming an Expert Learner” class, we look at aspects of human-embodied minds. And in every class, I ask them to start thinking about, Can AI do these things? How does AI contrast to the human mind in this sense? And then they make a great big Venn diagram and really think about it. Our minds are very different from what AI is doing.  

But that’s my plug for metacognition. I think we need to be metacognitive both about our minds and about AI, and that’s a new purpose for education. 

Brenner: Just to respond to that briefly, I agree that we should be metacognitive about what we’re learning and why we’re learning it, and we should think about it and we should teach our students how to do that. I agree that’s important. On the other hand, the one thing where I would push back slightly — and again, this only applies to my world, which is teaching people towards the end of their formal education — is that, I used to think I was a kind of creative person, basically. I’ve written a lot of papers in my life and I’ve done a lot of things, and these machines are basically as good as I am, if not better, at the moment, across many tasks. There’s a real question in my mind about whether we should be teaching people how to program computers at all, right? I actually think we should, and I’m not arguing against it, but I think it’s at least a reasonable question to ask given the fact that it is extraordinary what can be created by a person without technical expertise.  

To give you an example of this, I spent some time in a previous rendition of myself working with the ALS community, and I befriended a person who’s an amazing human being named Steve who has ALS. Steve uses a piece of software, which is called Dasher, that was written in a computer programming language that basically doesn’t really exist anymore, like 30 years ago. And it’s his primary communication, and he was always bothering me to fix the code for him so that it could do more things. Steve emailed me the other day, and to be clear, Steve types with his eyes — if he’s listening to this, hi, Steve. He emailed me the other day that he’s now using GPT-5 to fix the code himself. I think that we’re just living in a different world and that we need to rethink what the goals of education are, what we’re trying to teach people so that they can actually innovate.  

Laine Perfas: I feel like each of you have mentioned in various capacities, we should let the machines do the things that they’re really good at and also focus on the value of the human. And one of the things that I keep coming across is we learn from each other so much, the value of the teacher in a young student’s life, or Michael, thinking about how you redesigned your class. And part of it was to have students explain to you what they did and how they came up with it. So I’d love to talk about how you see an opportunity for things like chatbots to be there for instruction, while also not losing that critical human-to-human interaction.  

Grotzer: There was a piece in the Chronicle where a student was asking the writer about the use of AI, and he had written an essay and he showed the feedback that he had gotten from his professor to this writer. And he said, “I used AI to write my essay.” And the writer said, “Yes, I can see that.” And then he said, “But look at the feedback I got from my professor.” And the writer looked at it and said, “And you know that your professor used AI to respond to you?” It was beautiful feedback if you looked at what was given, but it wasn’t feedback that was going to help anyone because it wasn’t about anything that someone had generated. So, you know, the machines are talking to the machines at that point. And then we’ve truly lost the purpose of education. The social piece is so important, and that ability for two minds to really grapple with something. The other thing we know from the tutoring research is that what good tutors do is they manage motivation and they give information in certain ways while constantly managing the social-emotional context and the kinds of things that we might expect a tutor to respond with. They might not fill in the information. They might just let it sit there. They might let someone get to a certain level and hold back, and then the next time they might come to that level. AI isn’t like an expert human tutor. That social piece of really knowing the social-emotional component is critical.

Xu: I would agree with that. I think AI could do a really good job of generating the kinds of information that are very similar to what humans could provide. But learning is much more than just exchanging information and receiving feedback. And there is this relationship-building piece in the learning process that is very difficult to replace with AI. In my own studies, I looked at how much kids learn from an AI tutor versus a human tutor. Sometimes I do find that the learning outcomes are similar; the amount of information the kids are able to retain was similar across the different learning formats. But kids reported that they had a higher level of enjoyment when they talked to a human tutor. Also, through observation, they are just more animated and more engaged in the entire learning process. It does improve the kids’ interest in learning, their confidence, their communication. Those are the aspects that are very difficult to replace with AI. In my own class, I did a small experiment with my students to understand the social aspects of learning. We gave them identical essay feedback, but I told one group of students, “This is the essay feedback generated by an AI,” and the other group of students, “This is your feedback, generated by me.” They looked at exactly identical feedback. I asked them to tell me, “How useful do you think the feedback is?” And just believing that the feedback came from an instructor significantly increased students’ perception of the usefulness of the feedback. What really matters is not only the kind of information students receive, but knowing that the instructors really care about them and want them to succeed. I think that is a very useful ingredient in learning.

Grotzer: The way that we structure schools — I think about middle schools and high schools where so many kids are coming through a classroom that teachers don’t have enough time to really get to know each one. As we think about this shift in the purpose of education, that’s something else we could be thinking about shifting. Because a good teacher is holding so much social-emotional information about a student. They’re balancing motivational factors with cognitive factors. They are giving very subtle hints to get somebody to think about stuff without giving it away. They often escalate how much support they give so that a student doesn’t totally fail, so that they walk away with some sense of success and can take on something the next time. As we rethink schooling in the age of AI, we should be rethinking how we enable the context that allows teachers to get to know their students deeply and well. I have young people in my household, some who went to large, urban public schools and didn’t really feel so known. And I have some who went to little tiny schools, six kids in your graduating class or 20 in your graduating class. And that difference of really feeling known reverberated throughout their early adolescence and how they felt about themselves, how they felt about grownups. Those relationships are holding environments as kids launch into what’s been called the rocket years, the years in their 20s as they’re establishing careers and their own sense of competence in the world. I think we should be rethinking the structures of schools to accommodate these very kinds of social relationships that you’re talking about, Sam.

Laine Perfas: Thinking about everything we’ve talked about, at the end of the day the goal here is to think through how we put guardrails in place so that we can protect and foster the next generation of thinkers, whether that means Michael’s camp of using AI to bring the sciences further than they’ve ever been before, or making sure that we are protecting the relational, emotional, critical-thinking component of learning that’s so vital. So, as we close, what should educators be thinking about when they think about their classrooms and incorporating tools, and what should parents be thinking about when they’re having conversations with their children about the purpose and role of AI in their life?

Brenner: What I think about is: Can I prove or measure that students are learning more by the use of tools? I think people should be experimenting and trying and just seeing what works and what doesn’t. But I think what’s very important in that type of an environment is to have some way of actually measuring that it’s working, because in the end, it really does matter if we’re going to change the way people are educated.

Grotzer: One of the things that I think is so important for young people coming up through our educational system is that they have a sense of agency about their learning and they feel empowered to use it and to use it well. AI is going to be a part of that. Sometimes you might see limits on your use of it in certain ways, but that is in service of protecting your ability to learn and use your mind well in the future to be able to achieve your goals out in the world. What is your moonshot? How do you want to see your contribution to the world? How can AI and your amazing mind work in service of that? For that reason, sometimes in educational contexts we might put guardrails, but those guardrails are really ones that are in service of your future abilities. 

Xu: What I would add is that I think AI is very important, but there are also many other things going on in young people’s lives that are equally important. So if we zoom out, it’s likely AI could be a good resource, but there are so many other opportunities and experiences kids are engaged in: their families, their friends, their time spent on hobbies, and time in nature. Those are very important things that are playing a role in shaping children’s development. From a broader bird’s-eye view, AI probably doesn’t matter as much in isolation. What does matter is how it fits into the larger ecosystem of a child’s life. So maybe this is a message to parents and educators: yes, it’s important to think about AI, but don’t forget that there are many other things in your child’s world that matter just as much.

Laine Perfas: Thank you all for joining me for such a great conversation. 

Xu: Thank you, Sam. This was really fun.  

Grotzer: Thank you for the opportunity. It’s been a pleasure. 

Laine Perfas: Thanks for listening. For a transcript of this episode and to listen to our other episodes, visit harvard.edu/thinking. If you’re a fan of this podcast, rate and review us on Apple and Spotify. It helps others discover us. This episode was hosted and produced by me, Samantha Laine Perfas. It was edited by Ryan Mulcahy, Paul Makishima, and Sarah Lamodi. Original music and sound design by Noel Flatt. Produced by Harvard University, copyright 2026. 

