How disinformation, fake news, and AI threaten India’s electoral process

There are growing concerns that AI-generated and manipulated content could influence the upcoming general elections, potentially shaping their outcome.

A customer watches election campaign advertisements on his mobile phone outside a shop in New Delhi, India, on Monday, April 8, 2019. / Photo: AP

As India, the world’s most populous country, prepares to go to the polls on April 19, the rise of artificial intelligence (AI), misinformation, and disinformation casts a shadow over the election season.

During India’s last general election in 2019, social media was instrumental in the campaigning, and the vote was ultimately won by the ruling Hindu-nationalist Bharatiya Janata Party (BJP) under Prime Minister Narendra Modi, who was dubbed “India’s first social media prime minister” after winning his first election in 2014.

“This is the first election where social media has assumed an important role and the importance of this medium will only increase in the years to come,” Modi said in a blog post in May 2014. “It became a direct means of information and gave us the much-needed local pulse.”

The upcoming elections carry significant implications for citizens in a highly engaged virtual space: as of January 2024, India had over 750 million internet users and some 462 million social media users, or about 32.2 percent of the total population.

Read More

India elections 2024: World’s biggest voting exercise, explained

Growing concern

Online misinformation and other forms of digital interference have been a global issue for some time. However, in political contexts across the world, the proliferation of generative AI is a growing concern for governments and tech companies alike, given the increased accessibility of tools that allow anyone to create convincing, inflammatory content.

According to the World Economic Forum’s 2024 Global Risks Report, misinformation and disinformation have risen rapidly to become a top risk that is likely to increase as elections take place in several economies from the West to Asia.

The same report ranked India as the country most at risk globally from AI-fuelled misinformation and disinformation ahead of its upcoming elections, which will be conducted in seven phases from April 19 to June 1. The results are expected to be announced on June 4.

“No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called ‘synthetic’ content, from sophisticated voice cloning to counterfeit websites,” the World Economic Forum report states.

AI-generated campaign videos have the potential to sway voters and incite protests. Even if the platform carrying a video warns that it is fabricated, there is still a risk of violence or radicalisation, and the impact of such campaigns could be far-reaching, possibly putting democratic processes at risk.


Empowering or manipulative

On one hand, the technology behind these tools can help break down barriers when used effectively. AI-translated speeches, for example, can reach voters in a linguistically diverse country such as India, which has 22 official languages and a prime minister who has been accused of pushing Hindi as the country’s dominant language.

On the other, increasingly popular chatbots and personalised videos could also serve a larger, more nefarious political purpose in India, and experts are concerned that voters will find it hard to distinguish between real and fake messages.

Prateek Waghre, the executive director of the Internet Freedom Foundation, a digital rights group based in New Delhi, said, “It’ll be the Wild West and an unregulated AI space this year,” adding that the technology is entering a media environment already rife with misinformation.

A 2022 paper in the South Asian History and Culture journal found that campaigns in the last election by leading parties “incorporated online misinformation into their campaign strategies, which included both lies about their opponents as well as propaganda.”

Read More

X blocks political posts in India ahead of election

It doesn’t help that misinformation can sometimes reach voters through legitimate political channels as well as social media platforms like WhatsApp and Facebook. Late last year, the opposition Congress party posted a video on its official X account showing the leader of the Bharat Rashtra Samiti, KT Rama Rao, calling for votes in favour of the party. The video, viewed more than 500,000 times, turned out to be fake, according to media reports.

“Of course, it was AI-generated though it looks completely real,” a senior Congress party leader told Al Jazeera. “But a normal voter would not be able to distinguish; voting had started [when the video was posted] and there was no time for [the opposition campaign] to control the damage.”

The worrying implications of AI-generated or manipulated media now loom over the upcoming general elections, threatening their integrity.

Read More

'Politics of revenge': Why is India's Modi going after opposition leaders?

In response to new tech

The Indian government has expressed concern about the threat posed by AI-generated deepfakes.

“We are the world’s largest democracy [and] we are obviously deeply concerned about the impact of cross-border actors using disinformation, using misinformation, using deepfakes to cause problems in our democracy,” Rajeev Chandrasekhar, minister of state for electronics and IT, told the Financial Times in January.

“We have been alert to this earlier than most countries because it impacts us in bad ways much more than smaller countries.”

In September 2023, Modi called for global regulations for AI during an address at the G20 Summit, and has also warned about the dangers of deepfake videos and other manipulated content, saying it is “important to establish some dos and don’ts.”


Earlier this year, Meta announced it would be launching a new helpline and fact-checking service in India to prevent the spread of AI-generated deepfake content on its WhatsApp messaging service.

Developed in collaboration with the Misinformation Combat Alliance (MCA), the helpline aims to “combat media generated using artificial intelligence which may deceive people on matters of public importance, commonly known as deepfakes, and help people connect with verified and credible information.”

WhatsApp users in India can now report suspicious content through a multilingual helpline chatbot in the app, available in English, Hindi, Tamil and Telugu, a timely rollout ahead of elections in one of the world’s largest democracies.

“The program will implement a four-pillar approach – detection, prevention, reporting and driving awareness around deepfakes,” according to Meta.

Read More

Why is media industry increasingly wary of AI’s copy-paste ‘journalism’?
