
Prime Minister Narendra Modi has warned of the dangers posed by artificial intelligence-generated deepfakes and the misinformation they fuel. Deepfakes are synthetic images, videos, or audio clips that depict a person doing or saying something they never actually did. Created using deep learning, they are often deployed for malicious purposes: spreading misinformation, disseminating propaganda, or even committing identity theft. In the wrong hands, they can become a political weapon aimed at the electoral and democratic processes of nations.

Introduction

Elections are often described as the heartbeat of democracy. They rely not merely on procedural fairness but on a deeper epistemic foundation: the assumption that voters make choices based on accurate, verifiable, and accessible information. Yet this foundation is eroding. The digital ecosystem that once promised democratised knowledge has now become a contested space, where manipulated realities circulate faster and wider than verified truths. Among the most disruptive of these developments are deepfakes—AI-generated synthetic media capable of mimicking real people with astonishing accuracy—and large-scale disinformation campaigns, often amplified by algorithmic bias and platform virality.

In a world where seeing and hearing no longer equate to believing, the integrity of democratic elections is increasingly fragile. The danger is not simply that individuals may be misled, but that entire societies may lose confidence in the very possibility of distinguishing fact from fabrication. This erosion of trust transforms electoral manipulation from a technical challenge into an existential one for democracy itself.

The Deepfake Revolution

Deepfakes, powered by advanced machine learning models such as generative adversarial networks (GANs), represent one of the most striking technological disruptions of the last decade. Initially developed as experimental tools within academic laboratories, they have rapidly transitioned into widely accessible technologies. What once required vast computational resources can now be achieved by anyone with a consumer-grade device and free AI software.
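For readers unfamiliar with the mechanics, the sketch below illustrates the adversarial training loop at the heart of a GAN: a generator learns to produce synthetic samples while a discriminator learns to flag them, each improving against the other. This is a deliberately minimal illustration in PyTorch; the network sizes, data dimensions, and training details are simplifying assumptions, not a real deepfake pipeline, which operates on video frames and audio at far larger scale.

```python
# Minimal, illustrative GAN training step (assumed toy dimensions).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 image crops (illustrative)

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: scores samples as real (1) or fake (0).
D = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated samples.
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(D(real_batch), ones) + loss_fn(D(fake), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(batch, latent_dim))
    g_loss = loss_fn(D(fake), ones)  # generator wants D to answer "real"
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

As the loop repeats, the generator's output becomes progressively harder for the discriminator to distinguish from real data, which is precisely why detection tools face an arms race: improving the detector is, in effect, training material for the forger.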

The consequences for elections are profound. In India’s recent state elections, deepfaked videos depicting political leaders making incendiary remarks were circulated across WhatsApp groups, polarising communities within hours. During the war in Ukraine, a fabricated video of President Volodymyr Zelensky appearing to announce surrender spread online before being debunked, demonstrating the potential of synthetic media to influence not only domestic politics but also geopolitical conflict. In the United States, slowed and edited footage of prominent politicians has gone viral, shaping perceptions even after being exposed as manipulated.

Deepfakes exploit a fundamental human vulnerability: our instinctive trust in visual and auditory evidence. For centuries, sight and sound have served as anchors of reality. Now, those anchors are loosening, and with them the shared frameworks upon which democratic discourse depends.

Disinformation in the Electoral Arena

Deepfakes are only one facet of a larger disinformation ecosystem that has reshaped political communication. Elections today are contested not only through manifestos and debates but through the strategic manipulation of digital narratives. States, political parties, and private actors alike have learned to weaponise social media platforms to shape voter perceptions, often in ways that are invisible to the electorate itself.

State-sponsored influence operations, such as those conducted by Russia’s Internet Research Agency, deploy armies of bots and trolls to amplify divisive narratives, creating the illusion of grassroots movements. Domestic political actors increasingly adopt similar techniques, exploiting encrypted platforms, meme cultures, and micro-targeted advertising to deliver personalised misinformation directly to voters. Cambridge Analytica’s role in the 2016 U.S. presidential election exposed how voter profiling can be combined with tailored psychological messaging in an attempt to sway electoral outcomes without overt coercion.

These campaigns thrive on cognitive biases. Human beings are naturally predisposed to believe information that aligns with their pre-existing worldviews, a phenomenon known as confirmation bias. When combined with deepfakes, this bias becomes even more dangerous, as falsehoods are wrapped in the persuasive authority of apparent visual authenticity. The result is a political environment where voters no longer inhabit a shared informational universe but are instead fragmented into insulated echo chambers.

Electoral Security in a Post-Truth World

Traditionally, electoral security has focused on protecting the mechanics of democracy: ballot boxes, voter rolls, and polling stations. Today, the battlefield has shifted. The most decisive vulnerabilities lie not in voting infrastructure but in the information environment surrounding elections. Electoral manipulation no longer requires tampering with ballots when it is possible to tamper with minds.

The velocity of misinformation compounds the problem. Studies have shown that falsehoods spread significantly faster than verified information on social media; a widely cited 2018 MIT study of Twitter found that false news stories reached people faster and more broadly than true ones, largely because emotionally charged narratives are more engaging than nuanced truths. By the time electoral commissions or fact-checking organisations respond, the damage is often irreversible. Once a manipulated video, synthetic audio clip, or fabricated “leak” reaches viral saturation, retractions rarely recalibrate voter perceptions. In effect, electoral authorities are trapped in a perpetual game of catch-up while disinformation campaigns evolve with unprecedented agility.

Case Studies Across Democracies

Examples from around the world demonstrate the scale of this challenge. The 2020 U.S. presidential election was shaped as much by contested narratives as by votes themselves. Even low-quality manipulated videos, often called “cheapfakes,” fuelled public scepticism about mail-in ballots and electoral integrity. In India’s 2024 general elections, deepfaked speeches portraying candidates making inflammatory remarks were disseminated through WhatsApp and Telegram, platforms whose encrypted architecture makes detection difficult.

Elsewhere, synthetic media has been deployed in geopolitical contexts to destabilise public opinion. The Zelensky deepfake, though quickly debunked, highlights the possibility of “strategic deception” during wartime, where fabricated realities are introduced precisely to undermine confidence in leadership. Across these cases, the common thread is clear: electoral security is increasingly inseparable from information security.

Responses and Limitations

Governments, technology companies, and civil society organisations have begun to respond, but their efforts are fragmented and often inadequate. Regulatory measures, such as the European Union’s Digital Services Act, aim to impose greater accountability on platforms hosting manipulated content, while countries like India have introduced rules allowing authorities to flag misinformation for removal. Yet regulation alone faces structural limitations. In authoritarian contexts, anti-disinformation laws risk becoming instruments of censorship, while democratic regulators struggle to enforce rules across transnational digital ecosystems.

Technological countermeasures offer some promise. AI-driven detection tools are being developed to identify deepfakes through subtle inconsistencies in facial movements, voice patterns, and image textures. Initiatives such as the Content Authenticity Initiative aim to attach cryptographic signatures to original media, enabling verification of authenticity. However, these tools operate in an escalating arms race. As detection algorithms improve, generative techniques evolve to bypass them, leaving defenders perpetually one step behind.
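To make the provenance idea concrete, the sketch below signs a hash of a media file with an Ed25519 key at the point of capture or publication, so that any subsequent alteration invalidates the signature. This is a simplified illustration using Python's cryptography library under assumed key-distribution arrangements; real Content Authenticity Initiative (C2PA) manifests embed far richer, standardised metadata and certificate chains.

```python
# Simplified media-provenance signing: any edit to the bytes breaks the
# signature. Illustrative only; not the actual C2PA manifest format.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()  # held by the camera/publisher
verify_key = signing_key.public_key()       # distributed to verifiers

def sign_media(media_bytes: bytes) -> bytes:
    """Sign a digest of the media so later alteration is detectable."""
    digest = hashlib.sha256(media_bytes).digest()
    return signing_key.sign(digest)

def is_authentic(media_bytes: bytes, signature: bytes) -> bool:
    """Check the media against the publisher's signature."""
    digest = hashlib.sha256(media_bytes).digest()
    try:
        verify_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

original = b"...raw video bytes..."
sig = sign_media(original)
print(is_authentic(original, sig))            # True: untampered
print(is_authentic(original + b"edit", sig))  # False: content altered
```

Note the asymmetry this creates: rather than trying to prove a video is fake, provenance schemes let untampered media prove it is genuine, shifting the burden onto unsigned or altered content.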

Ultimately, the most enduring solution lies in cultivating digital resilience among citizens themselves. Media literacy initiatives can equip voters with the critical thinking skills needed to interrogate content rather than consume it passively. Programs such as Taiwan’s citizen-led fact-checking networks have demonstrated how decentralised responses can outpace official channels. Electoral commissions must invest not only in technology but also in public education campaigns that foster trust without imposing paternalistic control.

The Crisis of Epistemic Trust

Beyond technical fixes lies a deeper philosophical dilemma. Democracies rest on a shared epistemic foundation: the belief that citizens can access a common set of facts upon which to deliberate collective decisions. Deepfakes and disinformation erode this foundation, pushing societies toward a “post-truth” condition where every narrative is contestable and no evidence is beyond suspicion.

When all images can be fabricated and every recording potentially forged, epistemic trust collapses. Without trust, elections risk becoming performative exercises, stripped of their deliberative essence. The challenge, then, is not merely to secure electoral procedures but to defend the possibility of truth itself as a public good.

Conclusion

The intersection of deepfakes, disinformation, and electoral security marks one of the most pressing challenges of the digital age. Unlike traditional threats, which targeted physical infrastructure or procedural integrity, these new risks undermine democracy from within by corroding the informational environment on which it depends. As synthetic media becomes more sophisticated and disinformation campaigns more coordinated, democracies must move beyond reactive firefighting to proactive resilience-building.

Securing elections in the twenty-first century demands a reimagining of security itself. Technological detection, regulatory safeguards, and public literacy must converge within a coherent framework that balances innovation with democratic values. Above all, societies must recognise that electoral integrity today is inseparable from the preservation of collective epistemology. Protecting democracy now means protecting truth, for without shared realities, the promise of self-government cannot endure.

Title image courtesy: News18

Disclaimer: The views and opinions expressed by the author do not necessarily reflect the views of the Government of India and Defence Research and Studies


By Alshifa Imam

Alshifa Imam has recently completed her Masters in Conflict Analysis and Peacebuilding. Her research interests include strategic studies, gender and conflict, and emerging domains of warfare in the Indo-Pacific.