Jun 27, 2024
7 min read

Expert Corner: From Deepfakes to Democracy—The Battle Against AI-Generated Misinformation and Fraud

Sagar Vishnoi, AI & Politics expert, discusses the potential negative impacts of Artificial Intelligence and offers a strategy for building resilience against these threats in The Sumsuber Expert Corner.

The year 2024 will be remembered as the year AI took a major leap toward becoming the defining technology of the coming century. With elections in more than 60 countries covering over half the world's population, AI's impact at such a scale marks a turning point in the technology's development and trajectory. With immense benefits in fields like education, healthcare, and decision-making, AI is making everyday life easier and more efficient for all.

However, AI has a flip side as well. Easier access and lower barriers to entry have made AI technology increasingly attractive for fraudulent activity, which is why AI-generated fraud involving synthetic media and deepfakes is on the rise. In this article, we will explore the rise of AI-generated fraud, its potential impacts, and the ways we can counter it.

The Evolution of AI-Generated Misinformation and Fraud

Online misinformation, spreading mainly through social media platforms, has emerged as a pressing issue in recent years. With the advent of generative artificial intelligence (Gen AI), the realm of misinformation has undergone a rapid transformation. Malicious actors are leveraging bots and AI engines to sow discord and misinformation among the public, especially during election periods.

Generative AI includes models capable of generating text, video, images, and many other forms of content from prompts. This technology has garnered attention for its immense capabilities as well as its challenges. The introduction of ChatGPT in November 2022 led to over 180 million users by 2024, indicating the rapid adoption of AI technologies across the world.

With the rise of AI usage, a new threat has emerged: deepfakes. These are synthetic media in which one person's likeness is digitally swapped for another's with remarkable precision. The global rise of deepfakes is alarming, with over 500,000 video and voice deepfakes circulated on social media in 2023 alone. These audio and video deepfakes raise profound ethical and security concerns, posing a threat to the essence of democratic discourse and security.

Europol’s projection that 90% of online content will be synthetically generated by 2026 underscores the gravity of the situation. A recent Sumsub survey detected a 245% year-over-year increase in deepfakes worldwide. In 2023 alone, more than 75% of Indians were exposed to deepfake content. AI voice scams are also on the rise, with 1 in 4 adults affected globally. Without proper countermeasures, these attacks will only grow in number.

The Impact on Elections Across the World

The convergence of AI and disinformation presents a series of challenges in the upcoming elections across the world. Voters are already suspicious of the role AI will play in elections. A recent survey indicated that over 70% of Americans believe it is “very” or “somewhat” likely AI will be used to manipulate social media to influence the outcome of the presidential election. The impact of AI-generated content spreading misinformation across social media is a significant threat that has affected earlier elections and political outcomes.

There were more than 100 deepfake video advertisements of UK PM Rishi Sunak promoting false brands. A deepfake audio of US President Joe Biden urging New Hampshire voters not to cast their votes also went viral. Similarly, a doctored audio clip of London Mayor Sadiq Khan making inflammatory comments spread widely. Another case involved a deepfake video of former German Chancellor Angela Merkel making controversial statements about immigration law. Leaders like Emmanuel Macron, Justin Trudeau, Boris Johnson, and Narendra Modi have all fallen victim to this technology.

Recent incidents of AI-generated deepfakes affecting elections have appeared across the world. One such incident occurred in Turkey’s local elections, misleading the public. Another case in Slovakia involved a fake video emerging days before the elections, showing a candidate falsely claiming that the elections were rigged. Similarly, Bangladesh has been flooded with deepfakes, showing how widespread this menace of misinformation has become.

In the Indian context, we have witnessed many cases of politicians both using and falling prey to deepfake technology. During the ongoing elections, candidates have used AI-generated campaign songs (as seen in Rajasthan), AI-generated voice calls from prominent leaders such as O Panneerselvam, Shivraj Singh Chauhan, and Harish Rao, personalized video messages, and even AI "resurrections" of deceased political leaders. During the Telangana election, a deepfake video of a prominent political leader resulted in a public altercation. In the Madhya Pradesh election, AI-generated footage mimicking the popular TV show “Kaun Banega Crorepati” (KBC) was used to pose political questions, amplifying anti-incumbency sentiments.

Moreover, the political landscape in India has seen a significant increase in the use of AI avatars and social media campaigns to sway voter sentiment. The Dravida Munnetra Kazhagam (DMK) party resurrected its late leader Karunanidhi through AI avatars in public events. Major parties such as the BJP, AAP, and INC leverage AI tools to craft engaging content, utilizing AI platforms for designing impactful images and videos.

AI-driven video dialogue replacement and dubbing technologies have enabled political figures to communicate in various dialects, as seen with BJP leader Manoj Tiwari’s AI-generated videos in 2020. These sophisticated AI applications are transforming political communication but also raising ethical and security concerns about the integrity of AI tools in the electoral process. As AI technology continues to rise, it is important to understand how these developments in India can affect global geopolitics.

Impact on the Social Fabric, Cultural Diversity and Society

According to Sumsub’s Q1 2024 survey, deepfake volumes grew several-fold year-over-year in countries holding elections in 2024: India (280%), the US (303%), South Africa (500%), Mexico (500%), Moldova (900%), Indonesia (1550%), and South Korea (1625%). In the EU (where European Parliament elections are set for June), deepfake cases also rose year-over-year in many countries, including Bulgaria (3000%), Portugal (1700%), Belgium (800%), Spain (191%), Germany (142%), and France (97%). The issue also affects countries with no elections this year, including China (2800%), Turkey (1533%), Singapore (1100%), Hong Kong (1000%), Brazil (822%), Vietnam (541%), Ukraine (394%), and Japan (243%).

Deepfakes exploit the human tendency to trust visual and audio information, making it easier for malicious actors to spread misinformation and conspiracy theories. The situation worsens as AI technology advances. Given the diversity across the globe in terms of religion, ethnicity, language, and economic conditions, misinformation spread by AI-generated content can multiply exponentially, eroding public trust and destabilizing democratic processes.

This will create chaos and distort the democratic process, making it hard for citizens to make informed decisions based on authentic information. When culturally significant figures are misrepresented by AI deepfakes, societal mistrust rises, disrupting the cultural fabric and hampering the sense of unity and mutual respect among people. As the world navigates this election super-cycle, safeguarding the integrity of information is paramount to preserving the sanctity and trust of the global community in the electoral process.

Building Resilience Against Misinformation and Fraud

AI-generated misinformation and fraud attempts have serious implications for all of us. Without proper countermeasures, they can cause real harm to people globally. To build resilience against these threats, here are a few steps that can be taken:

1. Building Awareness: First-hand knowledge of these attacks is essential. This can be done through awareness workshops and training sessions with multiple stakeholders. One such workshop was recently conducted by the Inclusive AI team, training government officials on deepfake threats and how to detect them in Shrawasti, India. This timely step ensures that the administration is ready to tackle such scenarios, especially ahead of the Lok Sabha elections.

2. Policy Actions: Policy actions at the national and international levels are necessary. Efforts are already being made by some countries in this direction. The US has introduced laws at state and federal levels, such as the Deepfake Accountability Act and state laws in California and Texas, which criminalize the publishing and distribution of deepfake videos. The European Union has introduced the EU AI Act, which categorizes AI systems based on risk levels and imposes transparency requirements on AI-generated content. Countries like China, Canada, South Korea, and other major economies have introduced regulations to counter the menace of deepfakes and misinformation.

3. Stronger Involvement from Social Platforms: Social platforms and networks should take steps toward ethical AI usage, as they contribute heavily to the spread of misinformation. A possible strategy includes the following steps:

  • Check uploaded content for authenticity
  • Mark AI-generated content
  • Educate users about AI technologies
  • Verify users and flag content uploaded by unverified users
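The steps above can be sketched as a simple labeling pass a platform might run on each upload. This is a minimal illustration only: the classifier score, the threshold, and all field and label names are assumptions for the sketch, not a real platform API or detection model.

```python
from dataclasses import dataclass

# Hypothetical threshold above which content is labeled as likely AI-generated.
AI_SCORE_THRESHOLD = 0.8

@dataclass
class Upload:
    user_id: str
    user_verified: bool  # has the uploader passed identity verification?
    ai_score: float      # score from some AI-content/deepfake classifier, 0.0-1.0 (assumed)

def moderate(upload: Upload) -> list[str]:
    """Return the warning labels a platform might attach to an upload."""
    labels = []
    # Steps 1-2: check the content's authenticity score and mark likely AI-generated media.
    if upload.ai_score >= AI_SCORE_THRESHOLD:
        labels.append("likely-ai-generated")
    # Step 4: flag content coming from unverified accounts.
    if not upload.user_verified:
        labels.append("unverified-uploader")
    return labels
```

For example, an upload with a high classifier score from an unverified account would receive both labels, which the platform could then surface to viewers alongside the content (step 3, user education, is a product and UX concern rather than a labeling rule).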

4. Deepfake Detection Tools, a Need of the Hour: We’ve seen the extent to which AI misinformation and online fraud are spreading around the world, particularly through media and social networking sites. As a result, deepfake detection tools are essential for maintaining online hygiene and trust among platform users. Celebrities, movie stars, athletes, politicians, CEOs, and influencers who inspire billions are among the most common targets of deepfake attacks and fraud attempts, so it is extremely important to safeguard their identities and images with detection tools. These tools are also critical for those fighting fake news and misinformation, such as fact-checkers, investigative agencies, and regulatory bodies. Global collaborative efforts and companies offering deepfake detection tools must focus on training and equipping these groups with the skills needed to detect deepfakes and effectively combat misinformation and online scams.

Looking Ahead

The rise of Artificial Intelligence has significantly advanced many sectors of the economy. However, with the rise of deepfake attacks and fraud attempts, there is an urgent need for collaborative action involving all stakeholders. Regulations in countries like the US, China, and Canada are welcome steps in the right direction. In the years ahead, collaboration between governments, tech companies, and civil society will be paramount to building resilience among people. Initiatives like deepfake detection workshops, deepfake toolkits, awareness programs, open-source fact-checkers, and public-private partnerships are critical steps in this direction.

With 64 elections taking place in 2024, the time to act is now. If we take the right steps at the right time, we can ensure that millions of innocent people don’t fall prey to AI deepfake fraud and manipulation. To sustain public trust in democratic institutions and the integrity of elections and governance, concrete steps must be taken so history does not repeat itself. It is up to us, as a global community, to ensure that the benefits of AI outweigh its dangers and that we build a future where AI serves as a force for good, enhancing our lives while safeguarding the integrity of information and democratic processes.

AI · Deepfakes · Fraud Prevention · Identity Verification · Partners column