The proliferation of artificial intelligence (AI) in the digital age has ushered in both remarkable innovations and unique challenges, particularly in the realm of information integrity.


AI Brief
  • AI can create realistic fake texts, images, audio, and videos (deepfakes), making it difficult to distinguish real content from fake.
  • Advanced AI systems can analyze patterns and context to help moderate content, fact-check, and detect false information, aiding in the fight against misinformation and disinformation.
  • Addressing AI-driven misinformation requires content watermarking, public education on media literacy, and collaboration among tech companies, policymakers, and civil organizations.


AI technologies, with their capability to generate convincing fake texts, images, audio and videos (often referred to as 'deepfakes'), present significant difficulties in distinguishing authentic content from synthetic creations. This capability lets wrongdoers automate and expand disinformation campaigns, greatly increasing their reach and impact.

However, AI is not a villain in this story. It also plays a crucial role in combating disinformation and misinformation. Advanced AI-driven systems can analyse patterns, language use and context to aid in content moderation, fact-checking and the detection of false information.
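The pattern-analysis idea can be illustrated with a deliberately tiny sketch: a naive Bayes text classifier trained on a handful of hand-labelled examples. Real detection systems use large language models, knowledge bases and human review; the labels and example phrases below are invented for illustration only.

```python
# Toy sketch of pattern-based detection: a naive Bayes classifier over
# hand-labelled snippets. The "suspect"/"credible" labels and the
# training phrases are invented; real systems are far more sophisticated.
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts."""
    counts = defaultdict(Counter)
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the highest smoothed log-probability."""
    words = text.lower().split()
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label in counts:
        # log prior + word log-likelihoods with Laplace smoothing
        score = math.log(totals[label] / sum(totals.values()))
        n = sum(counts[label].values())
        for w in words:
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [
    ("miracle cure doctors hate this secret", "suspect"),
    ("shocking secret they refuse to reveal", "suspect"),
    ("study published in peer reviewed journal", "credible"),
    ("official statistics released by the agency", "credible"),
]
counts, totals = train(examples)
print(classify("shocking miracle cure secret", counts, totals))    # suspect
print(classify("peer reviewed study statistics", counts, totals))  # credible
```

The point is not the specific model but the workflow: systems learn statistical regularities from labelled content and flag new items that match known patterns of false information.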

AI analysis of content could also help distinguish misinformation (the unintentional spread of falsehoods) from disinformation (the deliberate spread of falsehoods), a distinction that is crucial for designing effective countermeasures.

The social cost of disinformation

The consequences of unchecked AI-powered disinformation are profound and can erode the very fabric of society.

The World Economic Forum’s Global Risks Report 2024 identifies misinformation and disinformation as severe threats in the coming years, highlighting the potential rise of domestic propaganda and censorship.

The political misuse of AI poses severe risks. The rapid spread of deepfakes and AI-generated content makes it increasingly difficult for voters to discern truth from falsehood, potentially influencing voter behaviour and undermining the democratic process. Elections can be swayed, public trust in institutions can diminish, social unrest can be ignited, and violence can even erupt.

Moreover, disinformation campaigns can target specific demographics with AI-generated harmful content. Gendered disinformation, for example, perpetuates stereotypes and misogyny, further marginalizing vulnerable groups.

Such campaigns manipulate public perception, leading to widespread societal harm and deepening existing social divides.

A multi-pronged approach to tackle fake content

The rapid development of AI technologies often outpaces governmental oversight, leading to potential social harms if not carefully managed.

Industry initiatives like content authenticity and watermarking address key concerns about disinformation and content ownership. These tools require careful design and input from multiple stakeholders to prevent misuse, such as eroding privacy or persecuting journalists in conflict zones.
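To make the watermarking idea concrete, here is a minimal sketch that hides a bit string in the least significant bits of pixel values. Production watermarks for AI-generated content (including statistical watermarks embedded in generated text) are far more robust to editing and compression; this shows only the core embed/extract mechanism.

```python
# Minimal LSB watermark sketch: write each bit of a mark into the least
# significant bit of successive pixel values, then read it back.
# Illustrative only; real content watermarks must survive re-encoding.
def embed(pixels, bits):
    """Write each bit into the LSB of successive pixel values."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)
    return out

def extract(pixels, n):
    """Read back the first n embedded bits."""
    return "".join(str(p & 1) for p in pixels[:n])

mark = "1011"
stamped = embed([200, 201, 202, 203, 204], mark)
print(extract(stamped, 4))  # "1011"
```

Because the changes touch only the lowest bit of each value, the watermark is imperceptible to viewers yet machine-readable, which is what lets platforms flag marked content automatically.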

For example, the Coalition for Content Provenance and Authenticity (C2PA) – founded by Adobe, Arm, Intel, Microsoft and TruePic – addresses the prevalence of misleading information online by developing technical standards for certifying the source and history, or provenance, of media content.
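The core provenance idea can be sketched simply: bind a media file's cryptographic hash to source metadata with a signature, so any later edit invalidates the record. C2PA itself uses signed manifests and X.509 certificate chains; the HMAC key below is a stand-in invented for this sketch, not how the standard works in practice.

```python
# Toy provenance record: hash the content, attach source metadata, and
# sign the pair. Any modification to the bytes breaks verification.
# SECRET_KEY is a placeholder; real systems use public-key certificates.
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # invented placeholder key

def make_provenance_record(content: bytes, source: str) -> dict:
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(content: bytes, record: dict) -> bool:
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

photo = b"...raw image bytes..."
rec = make_provenance_record(photo, "Example News Agency")
print(verify(photo, rec))         # True: content matches its record
print(verify(photo + b"x", rec))  # False: any edit breaks verification
```

This is why provenance standards help: a consumer or platform can check that a piece of media still matches the record its publisher signed, rather than trusting the file on its own.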

To further mitigate the risks associated with AI, developers and organizations must implement robust safeguards, transparency measures and accountability frameworks.

By establishing comprehensive systems, developers can ensure that AI is deployed ethically and responsibly, thereby fostering trust and promoting the beneficial use of AI in various domains.

In addition to technical measures, public education on media literacy and critical thinking is essential to empower individuals to navigate the complex landscape of digital information.

Schools, libraries and community organizations play a vital role in promoting these skills, providing resources and training programmes to help individuals develop the ability to critically evaluate information sources, discern misinformation from factual content, and make informed decisions.

Collaboration is key to tackling misinformation

Moreover, collaboration among stakeholders, including policymakers, tech companies, researchers and civil organizations, is vital to effectively address the multifaceted challenges posed by AI-enabled misinformation and disinformation.

This situation highlights the importance of fostering global understanding and cooperation to tackle the spread of false information driven by the rise of synthetic content and AI technologies.

The AI Governance Alliance, a flagship initiative by the World Economic Forum and part of the Centre for the Fourth Industrial Revolution, unites experts and organizations worldwide to address the complex challenges of AI, including the generation of misleading or harmful content and the violation of intellectual property rights.

Through collaborative efforts, the Alliance develops pragmatic recommendations to ensure that AI is developed and deployed responsibly, ethically and for the greatest benefit of humanity.

Another Forum initiative is the Global Coalition for Digital Safety, which is spearheading efforts to combat disinformation by promoting a whole-of-society approach to enhancing media literacy. This includes understanding how false information is produced, distributed and consumed, and identifying the necessary skills at each stage to counter it.

The coalition brings together tech companies, public officials, civil society and international organizations to exchange best practices and coordinate actions aimed at reducing online harms.

Advancing our approach to digital safety

As AI continues to transform our world, it is imperative to advance our approach to digital safety and information integrity.

Through enhanced collaboration, innovation and regulation, we can harness the benefits of AI while safeguarding against its risks, ensuring a future where technology uplifts rather than undermines public trust and democratic values.

By working together, we can ensure that AI serves as a tool for truth and progress, not manipulation and division.