
AI-generated political videos are more about memes and money than persuading and deceiving

Zohran Mamdani as a creepy trick-or-treater, Gavin Newsom body-slamming Donald Trump and Hakeem Jeffries in a sombrero. This is not the setup to an elaborate joke. Instead, these are all examples of recent AI-generated political videos. New easy-to-use tools – and politicians’ acceptance of those tools – mean that these fake videos are quickly becoming commonplace in American politics.

Perhaps the most interesting thing about many of the videos is how clearly fake they are. Rather than trying to deceive the viewer into thinking a depicted event actually happened, the videos serve a different purpose. President Trump didn’t post a video of himself wearing a crown in a fighter jet dumping feces on a group of protesters because he wanted people to believe that the flight actually happened. He likely did it to express his feelings about the protest and to create an in-joke with his followers.

Fears about the political implications of AI-generated videos have been around since the term deepfakes was coined in 2017. Steady improvements in the technology mean that the difficulty of distinguishing real from fake could become a significant threat. But today’s use of AI imagery is largely about making memes and making money – in other words, typical social media content.

Getting a rise out of people

Internet platforms use algorithms designed to keep people engaged, and that typically means promoting content that stirs emotions. AI-generated political videos often provoke an emotional response – amusement or outrage.

People are more likely to share information when it is emotionally arousing. For example, people are more likely to pass along urban legends that elicit feelings of disgust, and news articles that are emotionally charged are more likely to make the New York Times list of most emailed articles. Similar patterns occur online, where emotional content is much more likely to go viral than nonemotional content.

In addition, strong emotions can interfere with people’s ability to detect false information. People are worse at distinguishing between true and false political news headlines when they are experiencing strong emotions – for instance, enthusiasm, excitement or fear. Thus, emotionally charged AI-generated videos are both more likely to spread and more likely to evade viewers’ judgment of whether they are real or fake.

Online politics

Creating and sharing AI videos is also a powerful way for people to demonstrate their allegiances and show their political identities. “I am a Trump supporter, so I post AI videos of ICE detainees crying to own the libs” or “I am a Democrat, and so I share Governor Newsom’s AI video of JD Vance talking about couches to show that I’m in on the joke.”

What’s new in recent months is that campaigns and politicians are using AI-created videos, not just their supporters. An analysis from The New York Times showed that Trump commonly uses AI imagery to “attack enemies and rouse supporters”.

These new tools also allow for active participation in the political process. Rather than simply watching politicians and voting, citizens can play an active role in shaping the conversation between elections.

Information and technology researcher Kate Starbird has written about similar dynamics in the ways that everyday Americans found “evidence” of voter fraud in the 2020 election. Politicians told the public that voter fraud was going to occur, and when voters encountered things at the polls that they did not understand, such as the use of Sharpie pens to mark ballots, they interpreted them as evidence of voter fraud. Politicians then circulated that evidence online to support the false narrative.

New AI tools make this cycle of participatory disinformation even simpler. Instead of reinterpreting actual events as evidence for a false claim, people can easily generate that evidence themselves.

AI video at volume

AI video creation tools make it incredibly easy for people to churn out hundreds of videos, post them online and simply see what content becomes popular and goes viral. In fact, that’s exactly what seems to have happened with recent AI-generated videos of raids by Immigration and Customs Enforcement. According to an investigation by 404 Media, Facebook user “USA Journey 897” used to post a variety of real videos of police activity as well as absurd AI videos of people carrying whales and riding tigers.

However, after the release of a new version of OpenAI’s Sora video generator on Sept. 30, 2025, the account switched entirely to posting multiple fake videos of deportations every day. Most of the videos accumulated hundreds of thousands of views, and one fake video of a Walmart employee being detained had over 4 million views.

Typically these accounts are hosted overseas and exist to earn money through creator incentive programs. These incentives create an environment where social media no longer informs people about the world, but instead serves as a fun-house mirror, presenting back to us the world that we want to see – or at least the version of the world that will capture our attention and outrage.


Flowing into the internet

It’s not always easy for people to detect which videos are real and which are AI-generated. A recent audit by the publication Indicator found that platforms regularly fail to properly label AI content. Researchers posted over 500 AI-generated images and videos across Instagram, LinkedIn, Pinterest, TikTok and YouTube. Fewer than one-third were properly labeled as AI-generated, and even posts generated by the platforms’ own AI tools were often missed.

For years, the great fear concerning political deepfakes was that they were going to fool people into believing something happened that didn’t. They still might, but at the moment, AI-generated political videos are a mix of entertainment and memes, legitimate attempts at persuasion, and ways of capturing attention for money.

In other words, they are now just like the rest of the internet. Most of what we see and share is meant to entertain, some is meant to inform and persuade, and a great deal exists solely to monetize our attention.
