How AI and surveillance capitalism are undermining democracy
By Suresh Venkatasubramanian | August 21, 2025
On March 6, 2025, Axios reported that the State Department had launched a new social media surveillance program called “Catch and Revoke.” The intended goal of this program was to use artificial intelligence to assist in reviewing “tens of thousands of student visa holders’ social media footprints” to find “evidence of alleged terrorist sympathies expressed after Hamas’ attack on Israel.”
Whether you find this a horrifying development, an exciting application of AI, a flagrant violation of First Amendment rights, or even just a head-scratcher, this incident captures the dynamics of how artificial intelligence, surveillance, and threats to democracy all come together. In a nutshell: AI’s promise of behavior prediction and control fuels a vicious cycle of surveillance, which inevitably triggers abuses of power.
Throughout history, humans have searched for ways to predict (and control) behavior, whether that meant consulting an oracle, throwing bones, reading tea leaves, or even examining the shape of a person’s face and body to determine personality traits (which seems awfully contemporary if you start diving into the literature on “emotion AI”). As people became more adept at collecting data of various kinds, the field of statistics emerged to aid them in using data for prediction. (One of the amusing facts about AI research is that virtually every debate one encounters about the appropriate use of artificial intelligence in some social setting has parallels in history, often much earlier, which make it clear that efforts to predict and control behavior were never about AI at all.)
The problem with using data to make predictions is that the process can be used as a weapon against society, threatening democratic values. As the lines between private and public data are blurred in modern society, many won’t realize that their private lives are becoming data points used to make decisions about them. AI has supercharged these capabilities, smoothing out people’s individuality and instead placing each person into a group that’s deemed to behave a certain way. And while data and AI can be used for good, the only way these beneficial outcomes can be achieved is with restrictive, well-designed controls to prevent damage to democracy, much as humans have done with nuclear energy.
The AI move. The statistics/big data/machine learning/data science move was to marry the core techniques for behavior prediction with computational magic and vast amounts of new kinds of data. This could all be translated into the high-dimensional vector representations that machine learning algorithms chewed up and spit out. These algorithms are masterful at finding patterns in data and using those patterns to make predictions about the future. As techniques for learning patterns from data became more complex, spawning an entire zoo of methods—supervised, unsupervised, semi-supervised, online, reinforcement—the nature of the data and the kind of prediction being called for became less important than the magic box in the middle used to make the extrapolation. Humans had decontextualized the problem of prediction.
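To make that decontextualization concrete, here is a minimal, illustrative sketch (my own construction, not any particular deployed system): the same generic classifier can be fit to feature vectors that might describe loan applicants, students, or visa holders, and the “magic box” in the middle neither knows nor cares which.

```python
# Illustrative only: a generic "magic box" that learns a pattern from vectors,
# with no notion of what the vectors or labels actually mean.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: each person reduced to a feature vector; the label is
# whatever outcome someone has decided to predict (default, dropout, "risk").
X = rng.normal(size=(200, 5))                   # 200 people, 5 arbitrary features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # a pattern hidden in the numbers

model = LogisticRegression().fit(X, y)          # the decontextualized predictor
new_person = rng.normal(size=(1, 5))
print(model.predict(new_person))                # a prediction, stripped of all context
```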
Finding patterns that purport to predict how people behave is only the first step. The “AI for good” impulse surfaces quickly here: If those in charge can predict how people behave in different circumstances, maybe they can help them be their best selves. If experts can predict who is likely to have bad financial habits, maybe they can direct more education their way, or nudge them into better practices. If educators can predict which students are going to struggle in school, maybe they can direct more resources to help them. A recent article on AI and wellness in Marie Claire exemplifies this sentiment: What if, with the aid of AI, “your watch could not only detect diseases and health issues before they arise but also communicate directly with our doctors to flag us for treatment? What if it could speak with the rest of your gadgets in real time, and optimize your environment so your bedroom was primed for your most restful sleep, keep your refrigerator full with the food your body actually needs and your home fitness equipment calibrated to give you the most effective workout for your energy level? What if, with the help of AI, your entire living environment could be so streamlined that you were immersed in the exact kind of wellness your body and mind needed at any given moment, without ever lifting a finger?”
What makes AI prediction both powerful and lucrative is being able to control what happens next. If a bank can claim to predict what people will do with a loan, it can use that to decide whether they should get one. If an admissions officer can claim to predict how students will perform in college, they can use that to decide which students to admit.
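As a small, entirely hypothetical sketch of that leap from prediction to control: the moment someone attaches a cutoff to a predicted score, the prediction becomes a decision.

```python
# Hypothetical scores and threshold, purely for illustration: a predicted risk
# becomes an approve/deny decision once a cutoff is applied to it.
predicted_default_risk = {"applicant_a": 0.12, "applicant_b": 0.47, "applicant_c": 0.81}

APPROVAL_THRESHOLD = 0.5   # chosen by the lender, not the applicant
for applicant, risk in predicted_default_risk.items():
    decision = "approve" if risk < APPROVAL_THRESHOLD else "deny"
    print(applicant, risk, decision)
```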
The data trade. Once people use AI to “know” something about themselves, someone will sell that information to someone else who wants not only to “know” something about a person or a group of people but also to elicit from them a specific behavior that can follow from that knowledge.
Experts have commented copiously on this dynamic—of data collection for the sole purpose of fueling an entire economic enterprise of data trading—most prominently under the umbrella term “surveillance capitalism.” What AI does is turn this into a surveillance ratchet, a device that moves in only one direction. It goes something like this: To make the inferences I want to make to learn more about you, I must collect more data on you. For my AI tools to run, I need data about a lot of you. And once I’ve collected this data, I can monetize it by selling it to others who want to use AI to make other inferences about you. AI creates the demand for data and is, in turn, itself the product of the data collected.
Here’s the problem with being awash in data and inferences: It never stays limited to any innocent or well-intentioned purpose. It almost immediately becomes a tempting weapon for those who want to wield power over a society, with or without consent from the people in that society. And that’s where the biggest threats to democracy and democratic governance come from.
Blurred lines. I view democracy expansively. If society believes in government by, for, and of the people, its members must be comfortable expressing themselves in public, being themselves in private, and having clear lines between the two spheres of activity. As countless scholars have warned us, there are serious implications for society when these lines get blurred.
But this is exactly what AI-driven surveillance does. There are many spheres of life that people assumed were private or were promised would be private, but that AI-driven surveillance has thrust into the public sphere—with serious consequences. In a post-Dobbs regime, menstrual tracking data is now used in the legal pursuit of abortion seekers. In fact, earlier this month, a jury found that “Meta violated the California Invasion of Privacy Act when it intentionally recorded the sensitive health information of millions of women through the period tracking app Flo.” In its pursuit of deportation targets, ICE is now using people’s most private medical and financial information, shared with the government under a promise of privacy protection. Amazon’s Echo devices have been subject to warrants for the audio recordings made by the device inside our homes—recordings made even when the people present weren’t talking directly to the device. (In my house we have loud and enthusiastic political discussions and have disconnected our Echo entirely for fear of our statements being misconstrued later on.)
Things people once thought were private are no longer kept in the private domain. Then there are the things people knew were public—sort of—but never imagined would be taken out of context and weaponized. Amazon’s Ring doorbells face outward, looking at our streets, our neighbors, and everyone who comes to our door. People may have no expectation of privacy out in public, but would they expect the police to commandeer the video feed from a neighbor’s doorbell to judge how threatening they look when they wave their hands wildly in an animated discussion about their lawn or an incident in the neighborhood? That’s what Ring’s new—and old—CEO wants to do with the now-ubiquitous doorbells.
Social media feeds are certainly not private. But people have their own style and personality in how they post, and the most common social media blowup happens when someone re-sends another person’s post out of context and causes an internet pile-on. Now what happens if that out-of-context post is processed by AI to determine whether the person reposting it is a terrorist sympathizer, as the State Department is now proposing to do? And what if those posts are combined with surveillance footage from a Ring camera of a person marching down the street in a protest now interpreted as sympathetic to a terrorist organization? What is public is now surveilled, and what is private is now public.
Homogenizing behavior. There is an argument put forward to defend the use of AI in surveillance. It goes like this: “These systems of surveillance are bad because they are inaccurate, and AI will fix that.” This has been a common line of reasoning made in the facial recognition debates, where early criticism of these systems centered on their inability to recognize darker skin tones or facial characteristics that aren’t male and White. The problem is that AI won’t fix this. The predictions made by AI systems about our likely future behavior are by design built from patterns of behavior seen across many people. As such, any claims of accuracy don’t mean “we know how you, Suresh, are likely to behave in this scenario.” They mean “we have decided that you, Suresh, are a member of this class of individuals that on average behaves this way.”
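A small sketch (again my own illustration, assuming a purely group-based predictor) of what that kind of “accuracy” amounts to: each individual’s predicted behavior is simply the average of whatever group the data assigns them to.

```python
# Illustrative only: individuals vary, but the "prediction" for each person is
# just the mean behavior of the group they have been binned into.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 3, size=12)            # each person assigned to one of 3 groups
behavior = rng.normal(loc=group, scale=1.0)    # actual individual behavior, with quirks

group_means = {g: behavior[group == g].mean() for g in np.unique(group)}
predicted = np.array([group_means[g] for g in group])

print(np.round(behavior, 2))    # individual and idiosyncratic
print(np.round(predicted, 2))   # smoothed to the group average
```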
This smoothing out of our individuality, our quirks, and our unique tendencies to react to stimuli around us is antithetical to the idea of democracy as a celebration of originality. It’s a chilling causal reversal. It’s not taking our individual expressions of freedom and finding some common ground; it’s removing all the things that make us unique and different for the purpose of prediction and control. And this is by design, and not an accident.
The desire to surveil is bipartisan. It’s about power, not party politics. This seems hard to accept eight months into the current administration, which has run roughshod over any and all protections against rampant surveillance, data collection, and AI (mis)use that it can find. But the temptation to collect data and then use it for surveillance is extremely strong, even more so inside government, where so much data is collected just as a matter of course, and people trust (or are forced to trust) government agencies to handle some of their most private financial and personal data.
Devising controls. This situation presents a very difficult conundrum for those of us (myself included) who want to hold out hope that society can in fact use AI and data for good and believe that people just need the right guardrails in place to do so. The ease with which the rule of law can be flouted, protections can be destroyed, and data can be abused in the service of power raises a huge question about the risk of any data collection and use, even if there was a benign purpose to begin with.
I don’t yet have a good answer to the question I have raised. But I will leave you, the readers of the Bulletin, with an image that harkens back to the founding of this publication. AI development is framed by countries around the world as an “arms race.” Good use cases of AI easily can, and frequently do, turn to bad. The production of AI systems requires huge amounts of data collection and energy use. The inferences and insights generated by these AI systems can be downright dangerous in the wrong hands. All of this makes AI feel more than a little like nuclear energy—radioactive, dangerous, and, yes, sometimes useful. The only way humans have been able to harness nuclear energy is with restrictive, well-designed controls, and while these might have limited some potential commercial opportunities, they have on balance helped keep the world safer and freer than it would otherwise be. I see no way for us to continue using AI without similarly restrictive, well-designed controls to prevent damage to our democracy and society and to preserve their freedom and liberty.
Editor’s note: This piece was produced with support from the Future of Life Institute.