Generative AI’s Growing Impact On Democracy: Lessons from India’s 2024 General Election
Over the course of this historic year of elections, IRI's Technology and Democracy (TechDem) Practice has been tracking how artificial intelligence (AI), particularly generative AI (GenAI), has been used by political actors. While there are many examples to choose from, the elections in India stood out by showcasing how GenAI can be used in both beneficial and harmful ways. From personalizing voter outreach to amplifying viral falsehoods, GenAI tools were put to diverse uses. IRI closely watched these contests and remains keenly interested in what India's experience means for the future of democracy. Democracy practitioners globally can learn much from these recent elections, especially when considering how GenAI will shape campaigns in their own countries.
Concerning Use Cases
GenAI was a disruptive force during India's elections, particularly through the use of deepfake content for negative campaigning. The Diplomat drew attention to how AI-created media had been used by politicians to smear rivals, highlighting an incident from earlier this year in which the Indian National Congress posted a video featuring Modi's likeness grafted onto a performer singing about a life of thievery, with the lyrics twisted to describe the prime minister handing the country over to tycoons. This viral post demonstrated how GenAI is amplifying political attacks and shaping voter opinions.
The impact of GenAI-created deceptive media on fact-checkers in India also became clear, as the speed at which this content spread made debunking more challenging. Tech Policy Press highlighted how groups tracking such content have been stretched to their limits. Experts from the Misinformation Combat Alliance (MCA) – a cross-sectoral initiative in India formed to tackle misleading content – found it exceedingly difficult to correct falsehoods. One MCA representative who spoke with Tech Policy Press likened fact-checking to triage care, with media literacy groups having limited capacity to address the volume of AI-generated falsehoods. Given the enormity of the problem, fact-checkers have found themselves simply overwhelmed.
Promising Use Cases
Although much has been said about GenAI's risks to democratic processes, the use of these tools during India's elections revealed benefits that deserve mention. For instance, politicians adopted GenAI to better engage with their constituents, including those in remote communities. WIRED spotlighted a candidate in Rajasthan, a large state known for its harsh desert, who leveraged GenAI to create a digital avatar that could connect with people across all districts. Notably, this synthetic stand-in could answer individual voters' questions in dynamic, back-and-forth conversation. For campaigns, this example shows how GenAI might strengthen ties between candidates and their constituents when other forms of outreach are difficult.
Additionally, political parties took advantage of GenAI to mass-produce customizable content, which made a difference for less-resourced candidates. A piece from The Conversation emphasized this point, noting how GenAI promised to increase the quantity of tailored content without sacrificing quality. Since many politicians in India lacked the capacity to meet every voter, AI-generated content that resonates with constituents offered an economical alternative. If guardrails are put into place, these examples show how GenAI could level the playing field for low-capacity campaigns when it comes to voter outreach.
Navigating The Road Ahead
India's elections showed how political parties and campaigns are using GenAI to their advantage. They also made clear that work remains to address its harms. With groups in India already spending upwards of $50 million (USD) on AI-generated content, it is essential that democracy practitioners navigate GenAI's complex landscape, ensuring its deployment benefits society without compromising human rights. As IRI's TechDem Practice identified through its Generative AI and Democracy Working Group, stakeholders from all sectors must be involved in forging a way forward. Actors ranging from tech industry integrity teams to election management bodies play key roles in addressing GenAI-related issues. Supporting fact-checking initiatives, championing media literacy, establishing campaigning guidelines, promoting proper labeling of GenAI content, and prioritizing civil society assistance could form the core of a cross-sectoral strategy. Establishing these guardrails for GenAI during elections would not only benefit campaigners in India down the road; it would also aid fact-checkers, election administrators, and civil society stakeholders across the globe in 2024 and beyond.