Political Campaigns Have No Idea What’s About to Hit Them

Stephan Lewandowsky, a cognitive scientist at the University of Bristol in Britain, wrote by email:

My research has shown that A.I. can be used to tailor political messages to people with different personalities, and that tailored messages have a slight persuasive edge over untailored messages.

So on that basis alone I think A.I. will be deployed widely to get that edge. There is also some evidence that people generally find A.I.-generated content more persuasive than human-generated content.

Does the use of A.I. in campaigns have the potential to alienate voters?

I worry about that, especially if voters can no longer be sure whether a message is machine-generated or written by a human being. If people discover that they are being manipulated, this will likely alienate them further from politics generally.

Unfortunately my research shows that even if people know that they are being manipulated by A.I., the manipulation is still effective — transparency about A.I. is by itself insufficient to eliminate its effect on people.

In part because of that, Lewandowsky added, “the most urgent research question, in my view, is not, ‘how effective is A.I. in campaigns?’ but rather ‘what are the downstream effects on political epistemics, polarization and democratic backsliding?’”

Sandra González-Bailón, a professor of communications and sociology at the University of Pennsylvania, argued in an email that anxieties over the use of A.I. in campaigns may rest on beliefs that are not yet grounded in reality:

Research on the persuasive potential of A.I. takes place in experimental environments where participants are “forced” to enter a dialogue with these machines. Of course, outside of the lab these types of interactions are, for the vast majority of people, just a drop in a sea of information received and processed.

The findings are fascinating and insightful, but they have very specific scope conditions. Attempts at persuasion do not happen in a vacuum.

It’s possible, she continued, that

we may be building a future in which social networks are hybrid structures of people and machines, and we have yet to understand what this means for political action and opinion formation. But, as of now, I am unconvinced chatbots are as persuasive in the wild as they seem to be in the lab.

Jennifer Pan, a political scientist at Stanford, shares many of González-Bailón’s concerns, writing in an email:

A.I.’s effects on content production, monitoring and operations are already substantial, but its effects on mass persuasion or personalized persuasion at scale may be more constrained than current discourse implies. Persuasion at scale has always been hard, and the binding constraint is public inattention to politics.

Controlled studies of the “effects of L.L.M.s on persuasion, including our own, ‘Biased L.L.M.s Can Influence Political Decision-Making,’” Pan continued, “show that conversations with L.L.M.s can durably shift beliefs and attitudes.”

These results, however, emerged when “participants were required to engage in at least three turns of conversation with the model on topics they knew little about.” Consequently, “while the effects were real and showed up even when participants could identify that the model was biased, I’d be cautious about extrapolating to the political campaign context.”

I asked Pan who will benefit most from the use of A.I. in campaigns. Her response:

There are two countervailing ways to think about this. The first is that A.I. asymmetrically benefits lower-resourced actors. Challengers, small campaigns, down-ballot races and nonstate political actors gain the most from having cheap access to capabilities that previously required paid consultants.

The countervailing consideration is that well-funded incumbents already had strategists, pollsters, data scientists and communications staff.

Some scholars view A.I. as another case study of how new technologies have historically forced rapid and sometimes painful economic changes (the printing press, the internal combustion engine, computers, the internet), along the lines of Joseph Schumpeter’s concept of “creative destruction.”

David Lazer, a professor of both political science and computer science at Northeastern, contended in an email that A.I.

will transform the industry as it will transform any industry that involves analysis and interpretation of data. I think it will make data more valuable, because it will allow much more insight to be gleaned from any given data.

Think of A.I. as the equivalent of doubling or tripling — or much much more — the labor force of consultants/etc. That won’t displace the industry, but it may displace some jobs. There will still be a major need for serious human expertise in surveys/etc. in using A.I., because A.I. will act as a multiplier of sorts.

Lazer argued:

It will also transform what kind of data can be collected; e.g., rather than closed-ended questions (which impose such a strong structure on what people can say that surveys may miss what they really think), you could interview voters at scale. You could also do far more with observing what people say and do on social media. So: I think the entire industry will look dramatically different in five years.

I don’t share Lazer’s relatively complacent view of A.I. Leaving a tool as powerful as artificial intelligence, one whose strength is growing daily, in the hands of politicians and consultants whose first priority is to win is an inherently risky proposition.

Because of that, I am going to conclude by citing “Curated Reality: How AI Is Reshaping Human Agency,” by Chris Kremidas-Courtney, published late in April on the Defend Democracy blog:

Today, Big Tech is shaping the environment in which human choices are made by defining the menu of ideas and information available to citizens. This curated reality filters what information, products and ideas we see and can throttle the visibility of certain ideas, determining what enters public consciousness. The result is a shrinking space for human agency while most remain largely unaware of the constraints shaping our choices. This is not a future risk, but a present reality.

A.I. weakens persistence and individuals’ sense of agency, Kremidas-Courtney noted, citing a 2026 study, “A.I. Assistance Reduces Persistence and Hurts Independent Performance”:

Participants who relied on A.I. performed worse and gave up more quickly when the system was removed, even after only brief exposure. If sustained use erodes the motivation and persistence required for independent thinking, the effects may accumulate gradually but be difficult to reverse over time.

Citizens, according to Kremidas-Courtney,

are moving within cognitive environments they neither see nor shape, while a small number of Big Tech firms design and refine those environments at scale. Over time, this shapes not only how individuals think, but how they relate to one another, reducing the willingness to question oneself, resolve disagreements and engage constructively across differences.

Today, Kremidas-Courtney warned,

privately governed A.I. systems are displacing more open, collectively shaped information environments. What was once a relatively contested and plural space for debate is increasingly mediated through curated interfaces that prioritize certain pathways over others.

In functional terms, this begins to resemble a form of digital feudalism where access to information, visibility and even reasoning is structured by systems that citizens depend on but cannot influence.

In other words, metaphorically speaking, politics and other systems of information dissemination are holding onto the tail of a 16-foot crocodile that grows longer, stronger and hungrier by the day.

