Challenging misinformation in the age of AI: Building a collaborative future

Irina Wandera

March 26, 2025

On February 25 and 26 this year, I attended the Future of Science Communication Forum in Nairobi, Kenya, hosted by the Alliance for Science as it marked its 10th anniversary. There I participated in a session that brought together leading voices from technology, policy, and science to address a pressing issue: misinformation in the age of artificial intelligence (AI).

The session focused on how collaboration between scientists, AI developers, regulators, and other key actors can ensure the responsible use of AI while safeguarding innovation.

The panel began by addressing the structural roots of misinformation. Juliana Rotich, co-founder and Executive Director of Ushahidi, highlighted that AI-driven content, mainly shared through social media, can accelerate the spread of misinformation due to the incentive structures of digital platforms.

Philip Thigo, Special Envoy on Technology for the Republic of Kenya, expanded the conversation by emphasizing the Global South’s unique vulnerabilities. With limited regulatory frameworks and reduced corporate accountability, misinformation disproportionately affects these regions.

He underscored the challenge posed by tech companies that operate globally yet remain difficult to regulate at the national level.

AI is a double-edged sword in the fight against misinformation. On the one hand, it can power advanced tools for fact-checking and information verification. On the other hand, AI-generated content—including deepfakes and synthetic media—facilitates the spread of disinformation.

Both Rotich and Thigo called for increased transparency in the use of AI systems, including the use of watermarks to identify AI-generated content.

A critical concern raised was the declining investment in AI ethics teams within major tech firms. Without internal accountability, the unchecked advancement of AI could exacerbate bias and misinformation. Both speakers stressed the need for multi-stakeholder oversight and agile governance to keep pace with technological change.

Policy Considerations for Responsible AI Governance

One of the most promising proposals discussed was the United Nations Global Digital Compact, an emerging framework for the global governance of digital technology and artificial intelligence. Through shared responsibility, it aims to balance commercial interests with the public good.

Key policy issues highlighted included:

  • Incentive realignment: Encouraging platforms to prioritize accuracy over engagement through regulatory and economic incentives.
  • Transparency and explainability: Requiring companies to disclose how their AI models make decisions and to label AI-generated content.
  • Accountability in the Global South: Ensuring tech firms operating across borders are held accountable for misinformation harms in developing regions.
  • Agile governance models: Moving beyond rigid laws to flexible, multi-stakeholder approaches that can adapt to rapid technological advancements.
  • Building local capacity: Investing in the skills and resources needed for Global South participation in AI governance.
  • Public engagement and education: Fostering community-led initiatives to build digital literacy and challenge misinformation.

The session also emphasized the role of the UN-backed Scientific Panel on AI and Misinformation, spearheaded by Costa Rica and Spain. This initiative seeks to provide independent, evidence-based guidance on misinformation challenges, with a strong focus on including voices from the Global South.

A central tension in the discussion was how to regulate AI without hindering innovation. While regulation is necessary to curb misinformation, overly rigid frameworks may stifle technological progress.

The panelists proposed fostering a cooperative approach where governments, scientists, and AI developers work together to craft flexible, adaptive regulations that protect public interests while allowing innovation to flourish.

Rotich in particular stressed the importance of user agency, encouraging people to safeguard their data and advocate for AI transparency. She also called for continuous public education on AI capabilities, citing resources from the Electronic Frontier Foundation and the Center for Humane Technology as vital tools for building digital literacy.

Bridging the Digital Divide

A recurring theme was the need to empower local actors to challenge misinformation. Thigo argued that public sector leadership is crucial in ensuring equitable AI governance.

He advocated building capacity within the Global South, enabling policymakers and scientists to participate meaningfully in international technology governance forums.

Community-driven solutions were also highlighted as essential. Traditional knowledge systems and local influencers play a critical role in disseminating accurate information. Incorporating these voices into digital literacy campaigns can bridge the gap between global platforms and local realities.

The session concluded with a powerful call for collective action. As Thigo noted, “Our future is not written for us—it is written by us.” This sentiment captures the urgency of shaping AI governance that reflects diverse voices and protects public interests.

By fostering collaboration between scientists, AI developers, policymakers, and local communities, we can ensure that AI remains a tool for truth and equity rather than a driver of division and misinformation. The responsibility lies with all of us to shape an AI future that serves humanity as a whole.

Irina Wandera is the Policy Manager at EmergingAg.
