Strategists from both major political parties are increasingly concerned about the potential impact of deepfake videos as the presidential election approaches. With advanced artificial intelligence (AI) capabilities, the possibility of misleading video clips emerging in the final days of the campaign has raised alarms among lawmakers and political experts alike.
Warnings about foreign actors trying to sway the election have circulated for months. However, experts believe that an October surprise could just as likely stem from domestic sources. High-profile figures like former President Donald Trump and Vice President Kamala Harris have already been targets of deepfake technology.
Recently, an AI-generated video surfaced online, portraying Trump humorously mishandling a McDonald’s drive-thru experience. This clip followed his recent appearance at a fast-food restaurant in Bucks County, Pennsylvania, and received significant attention on social media. The incident also sparked a disinformation campaign that led Yelp to disable reviews for the Feasterville McDonald’s due to an influx of fake ratings.
In August, a fabricated photo of a younger Trump with convicted sex offender Jeffrey Epstein circulated widely, racking up hundreds of thousands of views before experts confirmed it was AI-generated. In response to the crowd sizes at Harris’s rallies, Trump alleged—without providing evidence—that she had manipulated images to suggest massive turnout, even though reporters confirmed the crowds in attendance.
According to the Microsoft Threat Analysis Center, Russian actors have also attempted to damage Harris’s reputation by disseminating deepfake videos depicting her in a negative light. One such video falsely attributed a crass statement to Harris regarding assassination attempts against Trump.
“Deepfake technology has improved significantly since we first encountered it in 2016,” noted Democratic strategist Rodell Mollineau. He emphasized that the real danger lies not in the deepfake itself, but in social media companies’ reluctance to control the spread of such misinformation.
Concerns have been raised about the role of platforms like X, formerly known as Twitter, especially under the ownership of Elon Musk. Critics argue that Musk has facilitated the spread of misinformation, exemplified by his re-sharing of a doctored Harris campaign ad that misrepresented her statements.
Although these deepfakes have yet to sway public opinion significantly, political strategists warn that the situation could worsen in the coming days. Veteran journalist John Heilemann expressed concern that the final two weeks of the campaign could see deepfake-driven misinformation go viral.
Prominent conservative personality Charlie Kirk cautioned his followers to prepare for potential disinformation targeting Trump, urging them to stay focused and vote.
Experts fear that social media platforms will respond sluggishly to the proliferation of misleading content. Joshua Graham Lynn, co-founder of RepresentUs, pointed out that once misinformation spreads, it can be challenging for voters to unlearn it. He also raised concerns about misleading information about polling places that could deter voters on Election Day.
The increasing accessibility of technology means that anyone can generate and distribute vast amounts of disinformation quickly. “What once required thousands of trolls now can be accomplished by a single individual,” Lynn explained. This shift has made the threat of misinformation even more pronounced as the election approaches.
Senate Intelligence Committee Chair Mark Warner has echoed these sentiments, expressing apprehension about foreign interference tied to critical issues like disaster relief in pivotal states such as North Carolina and Georgia. In response, he urged American internet domain registrars to take proactive steps to prevent foreign operatives from influencing the election through disinformation campaigns.