For the first time, the Financial Stability Oversight Council, in its annual report released on December 14, identified artificial intelligence (AI) as an “emerging vulnerability in the financial system,” highlighting the technology’s “black box” problems and its data and privacy risks.
However, a more significant concern for AI in 2024 may be navigating the global electoral calendar. Over one-quarter of the world’s population will participate in national elections across 76 countries, including the United States, India, Russia, Ukraine, Indonesia, Egypt, South Africa, and possibly the United Kingdom—with billions of dollars of investment turning on the outcomes.
Public awareness of online mis/disinformation has increased, and the direct impact of such content on actual voting behavior remains disputed. Even so, synthetic content will degrade public discourse and undermine the legitimacy of the electoral process.
A 2019 study by the Computational Propaganda Research project at the University of Oxford found that national governments and political parties in 70 countries had used social media for both legitimate and misleading election campaigning. Foreign governments and non-state actors are also increasingly using platforms such as Facebook, X (formerly Twitter), YouTube, Instagram, LinkedIn, WhatsApp and Telegram for propaganda aimed at foreign and domestic foes.
The most prominent cases where such misuse has occurred include:
- Turkey’s elections in 2023;
- U.S. presidential elections in 2016 and 2020;
- India’s 2019 general election; and
- the U.K. Brexit referendum in 2016.
Disinformation Threat
The production of disinformation for subverting political discourse is now a widespread phenomenon. Every year, for example, Meta-owned Facebook removes over 1 billion accounts that generate “fake news.”
Even before the emergence of GenAI tools such as Microsoft-backed OpenAI’s ChatGPT and Google’s Bard, a 2020 survey by Oxford researchers noted that governments, public relations firms and political parties were producing misinformation on “an industrial scale.”
This year, media investigations exposed a dedicated private industry that creates and circulates disinformation for paying clients; one such firm reportedly claimed to have influenced the outcomes of 27 of 33 recent presidential-level elections.
Meanwhile, platforms’ efforts to moderate or remove disinformation, which predate the public rollout of increasingly powerful GenAI tools, have repeatedly proven inadequate.
GenAI Step-Change
There is growing concern that free or affordable GenAI technology will be used in forthcoming elections to produce disinformation on a hyper-industrial scale.
Several factors drive this concern.
Capable Technology
Personalized chatbots that mimic human language, tone, voice and logic are making synthetic content increasingly realistic.
Tools such as OpenAI’s ChatGPT have built-in safety protocols, but these can be manipulated or circumvented. OpenAI CEO Sam Altman recently said that the ability of such models to “manipulate and persuade, to provide one-on-one interactive disinformation, is a significant area of concern.”
Social Media Reach
Social media, fully or partially encrypted messaging services and other digital platforms are now the most common way for billions of people to communicate and access news—even though users increasingly recognize that a significant proportion of the content they see is misleading or outright harmful.
Social Media Business Model
Social media business models continue to rely on harvesting and monetizing personal data and on maximizing network traffic.
In April 2021, media investigators revealed that Facebook repeatedly allowed government leaders and politicians to use its platform to “deceive the public or harass opponents,” even after being alerted to evidence of such misuse. Such activity was reported in India, where fake accounts inflated the popularity of leaders, as well as across Europe, other parts of Asia and the Americas.
All major social media platforms recognize these risks. Facebook recently introduced policies requiring labels on artificially generated content and barring the use of its GenAI tools to create political advertising. Google has banned political advertising ahead of Taiwan’s elections next month to counter Chinese disinformation campaigns.
However, such moves will likely prove inadequate given users’ incentives to circulate mis/disinformation, large-scale layoffs of safety-focused teams at platforms such as X and platforms’ commercial imperative to increase traffic.
Human and Automated ‘Trolls’
Given the confluence of these factors, forthcoming elections, especially in major countries, will likely see growing use of tens of thousands of automated chatbots that can engage with humans interactively and persuasively.
These tools will supplement or replace “troll farms” staffed by hundreds of humans posting pre-written disinformation.
Such influence campaigns by domestic and foreign actors (especially Russia and China) have already affected important public debates within election campaigns, including on critical policies such as climate change, immigration, taxation and support for the wars in Gaza and Ukraine.
Regulatory Response
At the November virtual summit of G20 countries, several members, such as Japan and India, voiced concerns about the impact of AI-generated deepfakes on public discourse and elections.
However, these shared concerns will not translate into harmonized remedial action. Several poll-bound countries, notably the United States, have failed to update social media content moderation legislation.
In the aftermath of deepfake videos featuring prominent U.K. politicians, election regulators in the country said they were “powerless” to stop AI-generated deepfakes from sabotaging the next parliamentary election. The U.K. Online Safety Act, passed in October 2023, seeks to mitigate these risks but has yet to be fully enforced and tested.