The Legal Accountability of AI-Generated Deepfakes in Election Misinformation
How Deepfakes Are Created
Generative AI models enable the creation of highly realistic fake media. Most deepfakes today are produced by training deep neural networks on real images, video, or audio of a target person. The two predominant architectures are generative adversarial networks (GANs) and autoencoders. A GAN pairs a generator network, which produces synthetic images, with a discriminator network, which tries to distinguish fakes from real data; through iterative training, the generator learns to produce outputs that increasingly fool the discriminator. Autoencoder-based face-swap tools instead train a shared encoder with a separate decoder per identity, so that footage of one person can be decoded with another person's likeness. In practice, deepfake creators use accessible software: open-source tools like DeepFaceLab and FaceSwap dominate video face-swapping, with DeepFaceLab reportedly used for over 95% of known deepfake videos. Voice-cloning tools can mimic a person's speech from just minutes of sample audio. Commercial platforms like Synthesia offer text-to-video avatars, which have already been misused in disinformation campaigns. Even mobile apps like FaceApp and Zao let users perform basic face swaps in minutes. In short, advances in GANs and related models have made deepfakes cheaper and easier to generate than ever.
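To make the adversarial dynamic concrete, here is a minimal sketch of a GAN training loop in PyTorch. It learns a toy one-dimensional Gaussian "real data" distribution rather than face images; the network sizes, learning rates, and step counts are illustrative assumptions, not a production deepfake pipeline.

```python
# Minimal GAN: a generator learns to mimic "real" data drawn from N(3, 0.5),
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

LATENT_DIM = 8

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # outputs one fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # logit: real vs. fake
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real" data: N(3, 0.5)
    fake = generator(torch.randn(64, LATENT_DIM))

    # 1) Train the discriminator to separate real from fake.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator (label fakes "real").
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The generated distribution should drift toward the real mean (~3.0).
print(generator(torch.randn(1000, LATENT_DIM)).mean().item())
```

The same minimax objective, scaled up to convolutional networks trained on large sets of face images, is what allows modern tools to synthesize photorealistic frames.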
During creation, a deepfake model is typically trained on a large dataset of real images or audio of the target; the more varied and high-quality the training data, the more realistic the result. The output often undergoes post-processing to enhance believability. Technical defenses focus on two fronts: detection and authentication. Detection uses AI models to spot artifacts and inconsistencies that betray a synthetic origin. Authentication embeds markers before dissemination, such as invisible watermarks or cryptographically signed metadata attesting to a file's provenance. Detection, however, is an arms race: newer generation methods evade existing detectors, watermarks can be stripped or degraded, and labels alone don't stop false narratives from spreading.
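As a concrete illustration of the authentication side, the sketch below signs a hash of a media file so that anyone holding the publisher's public key can verify the bytes are unaltered. It assumes the third-party `cryptography` package and is a deliberate simplification of real content-credential standards such as C2PA, which embed signed manifests in the file itself and chain to certificate authorities.

```python
# Sketch: provenance-by-signature. A publisher signs the hash of a media
# file; a verifier recomputes the hash and checks the signature.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Publisher side: generate a keypair and sign the media hash.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media = b"...raw bytes of the original video..."   # placeholder payload
signature = private_key.sign(sha256(media))

# Verifier side: any alteration of the bytes invalidates the signature.
def is_authentic(media_bytes: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, sha256(media_bytes))
        return True
    except InvalidSignature:
        return False

print(is_authentic(media, signature))                # True
print(is_authentic(media + b"tampered", signature))  # False
```

Note the limitation flagged above: a signature proves integrity and origin for signed media, but it cannot flag a fake that was never signed, which is why authentication complements rather than replaces detection.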
Deepfakes in Recent Elections: Examples
Deepfakes and AI-generated imagery have made headlines in election cycles around the world. In the 2024 U.S. primary season, a robocall used an AI-generated clone of President Biden's voice to urge Democrats not to vote in the New Hampshire primary; the consultant behind it was later fined $6 million by the FCC and criminally indicted. Also in 2024, former President Trump posted on social media a collage implying that pop singer Taylor Swift endorsed his campaign, using AI-generated images of Swift and her fans in "Swifties for Trump" shirts. Similarly, Elon Musk's platform X carried AI-generated clips, including a parody ad that mimicked Vice President Harris's voice via an AI clone.
Globally, deepfake-like content has appeared in various elections. In Indonesia's 2024 presidential election, a video circulated on social media featuring a convincing AI-generated likeness and voice of the late President Suharto endorsing a candidate. In Bangladesh, a viral deepfake video superimposed the face of opposition politician Rumeen Farhana onto a bikini-clad body, designed to discredit her in a conservative society. Moldova's pro-Western President Maia Sandu has been targeted by AI-driven disinformation, including a deepfake video falsely showing her resigning. In Taiwan, a TikTok clip circulated that synthetically portrayed a U.S. politician making foreign-policy statements, stoking confusion ahead of the election. These examples show that deepfakes have touched diverse polities, typically aiming to discredit candidates or confuse voters.
Notably, many of the most viral "deepfakes" of 2024 circulated as obvious memes or partisan claims rather than subtle deceptions. Experts observed that genuinely undetectable AI deepfakes were relatively rare; far more common were AI-generated memes shared openly by partisans. Nonetheless, even unsophisticated fakes can sway opinion: at least one U.S. study found that false presidential ads changed voter attitudes in swing states. In sum, deepfakes are a real and growing phenomenon in election campaigns worldwide, and a trend taken seriously by voters and regulators alike.
U.S. Legal Framework and Accountability
In the U.S., deepfake creators and distributors of election misinformation face a patchwork of legal tools, but no single comprehensive federal "deepfake law." Existing laws relevant to disinformation include statutes against impersonating government officials and electioneering rules such as the Bipartisan Campaign Reform Act, which requires disclaimers on political ads. In some cases, ordinary laws have been stretched to fit: the New Hampshire robocall was pursued under the Telephone Consumer Protection Act, resulting in the $6 million fine and a criminal charge. Voice impersonation may likewise violate laws against "false advertising" or "unlawful corporate communications." However, these laws were enacted before the AI era, and litigators warn that they often do not fit neatly: a deceptive deepfake not tied to a specific victim, for example, is hard to frame as defamation or a privacy tort.
Recognizing these gaps, courts and agencies are invoking other theories. The U.S. Department of Justice has charged individuals under broad fraud statutes, and state attorneys general have treated deepfake misinformation as interference with voting rights. Notably, the Federal Election Commission (FEC) has moved toward new rules: in April 2024 it issued an advisory opinion limiting "non-candidate electioneering communications" that use falsified media, effectively requiring that such political ads use only real images of the candidate. If codified in a final rule, that approach would make it unlawful for campaigns to pay for ads depicting a candidate saying things they never said.
U.S. Legislation and Proposals
Federal lawmakers have proposed new statutes. The DEEPFAKES Accountability Act would impose a disclosure requirement: political ads featuring a manipulated likeness would need clear disclaimers identifying the content as synthetic. It would also increase penalties for producing false election videos or audio intended to influence the vote. While not yet enacted, supporters argue it would provide a uniform rule across federal and state campaigns. The Brennan Center favors transparency requirements over outright bans, suggesting that laws narrowly target deceptive deepfakes in paid ads while carving out parody and news coverage.
At the state level, more than 20 states have passed deepfake laws aimed specifically at elections. Florida and California, for example, forbid distributing falsified audio or visual media of candidates with intent to deceive voters. Some states define "deepfake" by statute and allow targeted candidates to sue violators or, in some cases, require violators who are themselves candidates to forfeit their nominations. These measures have had mixed success: courts have struck down overly broad provisions that operated as prior restraints. Critically, such laws raise First Amendment concerns: political speech is highly protected, so any restriction must be tightly tailored. Texas and Virginia statutes are already under legal review.
Policy Recommendations: Balancing Integrity and Speech
Given the rapidly evolving technology, experts recommend a multi-pronged approach. Most stress transparency and disclosure as core principles. For example, the Brennan Center urges requiring any political communication that uses AI-synthesized images or voice to include a clear label. Transparency has two advantages: it forces campaigns and platforms to “own” the use of AI, and it alerts audiences to treat the content with skepticism.
Outright bans on all deepfakes would likely violate free speech protections, but narrowly targeted bans on specific harms may be defensible. Another recommendation is limited liability: tying penalties to demonstrable intent to mislead rather than to the mere act of creating synthetic content. Both U.S. federal proposals and EU law generally condition penalties on deception or the "appearance of fraud."
Technical solutions can complement legal ones. Watermarking original media could deter the reuse of authentic images in doctored fakes. Fact-checkers and social platforms should deploy open tools for deepfake detection, and publicly available detection datasets help researchers train better models for spotting fakes. International cooperation is also urged: cross-border agreements on information-sharing could help trace and halt disinformation campaigns. The G7 and APEC have committed to fighting AI-enabled election interference, which may lead to joint norms or rapid-response teams.
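To illustrate the watermarking idea at its simplest, the sketch below hides and then recovers a bit pattern in the least-significant bits of an image array. This toy LSB scheme is an assumption for illustration only; production watermarks use robust frequency-domain or learned encodings precisely because LSB marks are destroyed by routine compression.

```python
# Toy "invisible watermark": hide a bit pattern in the least-significant
# bits (LSBs) of pixel values, then read it back out. This demonstrates
# only the embed/extract round trip, not a robust scheme.
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the LSBs of the first len(bits) pixels with watermark bits."""
    marked = image.copy().ravel()
    marked[: bits.size] = (marked[: bits.size] & 0xFE) | bits
    return marked.reshape(image.shape)

def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back out of the LSBs."""
    return image.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
watermark = rng.integers(0, 2, size=128, dtype=np.uint8)   # 128-bit payload

marked = embed(img, watermark)
assert np.array_equal(extract(marked, 128), watermark)     # mark recovered
# Each changed pixel differs by at most one gray level -- imperceptible.
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1
```

Combining a watermark like this with the signed-provenance approach sketched earlier would let platforms both trace a file's origin and detect tampering, though neither helps against unmarked fakes.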
Ultimately, many analysts believe the strongest "cure" is a well-informed public: education campaigns that teach voters to question sensational media, and a robust independent press that debunks falsehoods swiftly. The law can penalize the worst offenders, but awareness and resilience in the electorate are crucial buffers against influence operations. As one observer has quipped, "the real question is not if deepfakes will influence elections, but who will be empowered by the first effective one." Policies should therefore aim to deter malicious use without unduly chilling innovation or satire.
References
- Deepfake Statistics
- Synthesia AI Deepfakes
- GAO Report on Deepfakes
- EU AI Act on Deepfakes
- Election Deepfakes Analysis
- NPR on Deepfakes and Elections
- AP News on AI and Elections
- Lawfare on Tackling Deepfakes
- Brennan Center on Regulating AI Deepfakes
- First Amendment on Deepfakes
- NCSL on Deepfake Legislation
- UNH Law on Deepfakes
- DFR Lab on Brazil Election AI Research
- DFR Lab on Brazil Election Deepfakes
- Freedom House on EU Digital Services Act