AI and Misinformation

The potential impact on elections

At the end of last year in Cassandra’s Gen Z Guide to the AI Revolution, we learned that 59% of Gen Zs say they don’t trust AI. As we head into the 2024 presidential election cycle, this trust issue will come even more to the forefront. In February, tech giants including Microsoft, Meta, Google, Amazon, X, OpenAI and TikTok pledged to fight the risk of AI disrupting the election. Chatbots, AI-generated content and deepfakes all have the potential to impact elections. Today we take a look at some recent examples.


AI deepfakes tied to elections in Europe and Asia have appeared on social media. These deepfakes have the potential to attack or bolster a candidate's image, steer voters toward or away from a specific candidate, and erode the public's trust in what they are hearing or seeing related to elections. According to AP News, recent examples of politics-related deepfakes include:

A video of Moldova's pro-Western president supporting a political party friendly to Russia;

Audio of Slovakia's liberal party leader talking about vote rigging and raising the price of beer;

In Bangladesh, a video of opposition lawmaker Rumeen Farhana - a critic of the ruling party - wearing a bikini, which caused an uproar in the conservative, Muslim-majority nation.


A group of more than 40 election officials, journalists and academics tested five leading AI models on their election information, judging the models by their responses to 26 questions a voter might ask. The report from Proof News and the AI Democracy Projects showed that half of the models' responses were inaccurate. According to the report, one example of inaccurate output was a model saying California voters can vote by text message, when voting by text is not allowed anywhere in the U.S.


In February, the Federal Communications Commission ruled that robocalls using AI-generated content are illegal. This followed an investigation into robocalls that used President Joe Biden’s voice. FCC chair Jessica Rosenworcel cited the intention to misinform voters as a reason for the unanimous decision. “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” she said.