AI Images and The Election

With election season in full swing, politicians have been exploring the use of AI in their campaigns. Photo via Brett Sayles on Pexels.

Over the past few years, artificial intelligence-generated content has grown tremendously, from simple voice replication to realistic photos and videos. While ethical concerns about this rapid progress have grown within art-based industries, the technology has slowly started seeping into other aspects of public life.

Recently, however, this content has entered the world of politics in the form of AI-generated political advertisements.

The most widely known incident of AI-generated content being used in politics occurred on Aug. 18, when former President Donald Trump reposted several AI images of Taylor Swift and her fans, known as Swifties, endorsing him for president.

The most noteworthy image, posted to his Truth Social account, showed Swift in an Uncle Sam outfit, posing in front of an American flag with the words, “Taylor Wants You To Vote For Donald Trump.” While the post was met with controversy online, Swift has not responded to it.

However, she is not the only public figure whose image has been manipulated with AI.

On Sept. 1, Trump posted on Truth Social an article showing an AI image of Vice President Kamala Harris dressed in a Joseph Stalin outfit. The image came exactly two weeks after he posted an image of Harris on X, standing in front of a crowd wearing Soviet clothing with a massive communist flag in the background. This reflects Trump’s history of calling Harris a “Socialist,” “Fascist,” “Communist” and “Marxist,” even giving her the nickname “Comrade Kamala.”

“For me, the biggest problem with AI is that it can easily be manipulated to look believable, which makes disinformation and misinformation harder to catch — especially on social media, where news consumers are not deeply engaging with these posts, and something that goes by in passing can stick in someone’s mind,” said Dr. Kevin Lerner, chair of the department of Communication and associate professor of journalism, via email.

Take one of the images Trump reposted, originally shared by X user @akafacehots, who posted two pictures with the caption “SwiftiesForTrump continue to break the internet! Kamalas campaign is in shambles over it!” One of the pictures shows a single person wearing a “Swifties For Trump” shirt at a Trump rally.

The other picture shows an AI-generated crowd waving what appear to be American flags, many of them distorted or bearing incorrect designs. The post has 3.2 million views, almost 94,000 likes and 2,100 comments.

Some of the responses point out the use of AI in the images, while others treat the post as if it were real, with replies like “I truthfully never thought I would see the day, but it warms my damn heart,” from @ille_Ghost and “Bet they won’t wear it to a Taylor concert, though. They’d be booed out or not let in. We need to protect the #SwiftiesForTrump,” from @4Black2Beard0.

Despite the false nature of the post, the responses and commentary it generated are believed and circulated all the same. After two elections marked by accusations of misinformation and of social media’s role in spreading it, technology that can create realistic fake images will continue to fuel distrust surrounding upcoming elections.

“Unless we, as a society, make a concerted effort to educate others about news literacy, I don’t think people will take it upon themselves to do it,” said Dr. Lerner. “People do need to be critical consumers of the news, but the human bias toward information that supports what we already believe is really strong. Ideally, we would start media literacy education in elementary school, the way they teach you how to use a library. And it does happen to some extent, but not nearly enough.”