Tackling mis- and disinformation

The scale and spread of mis- and disinformation pose a growing challenge to building an evidence-based understanding of the causes and impacts of the climate crisis, as well as the options to address it. This exacerbates the distance gap between scientific knowledge and public knowledge. It also limits the potential of public debate and participatory decision making by pushing people towards polarization, a key decision-making gap, rather than encouraging diverse groups to reach consensus.

While this is already a known issue in the Global North, there is emerging evidence that climate-related conspiracy theories originating in the USA can spill over into the Global South and influence local opinion on climate technologies. For example, social media posts promoting misinformation about “chemtrails”¹ have been shown to negatively affect public perception of solar geoengineering technology. Online disinformation campaigns have been used to deny environmental crimes such as deforestation of the Amazon and have fueled conflict around natural resources in Somalia, while health-related misinformation denying cholera outbreaks has undermined public health efforts and exacerbated the impact of climate-related epidemics in Malawi.

Mis- and disinformation in the Global South leverage online spaces like Facebook and, increasingly, private networks like WhatsApp, in addition to spreading through analog means. The emerging use of generative AI models to create content has also raised concerns about the growth of misinformation online, as models like OpenAI’s ChatGPT have been shown to “hallucinate”, or make up facts, when they do not know the answer to a user query.


How might Collective Intelligence help close the climate misinformation distance gap to support better decision-making?

Most existing examples of collective intelligence to combat misinformation combine automated approaches with crowdsourcing for fact checking and moderation of online content. For example, CoFacts is a Taiwanese platform that invites the public to use a chatbot on the popular messaging platform Line² to check any text they suspect contains misinformation. When statements are submitted for fact checking, they are verified by other CoFacts volunteers. CoFacts aims to curb the spread of misinformation on closed social networks, such as chat groups, where it can often be difficult to track. In 2018, CoFacts helped users verify messages about LGBTQI+ rights prior to a divisive vote on same-sex marriage. Factmata is another general-purpose tool that can be used to identify harmful online content. It uses AI models, regularly retrained by a community of experts, to detect propaganda, hate speech and misinformation in near real time.
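
The submit-and-verify loop behind such platforms can be sketched in a few lines. This is an illustrative toy, not CoFacts' actual implementation: the function names (`submit_claim`, `add_verdict`, `consensus`) and the majority-vote rule are assumptions for the sake of the example.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    verdicts: list = field(default_factory=list)  # labels from volunteer fact checkers

def submit_claim(queue, text):
    # A user forwards a suspicious message to the chatbot; it joins the review queue.
    claim = Claim(text=text)
    queue.append(claim)
    return claim

def add_verdict(claim, label):
    # A volunteer reviews the claim and records a label,
    # e.g. "misinformation" or "accurate".
    claim.verdicts.append(label)

def consensus(claim, min_verdicts=3):
    # Publish the majority label once enough volunteers have weighed in.
    if len(claim.verdicts) < min_verdicts:
        return "pending"
    label, _ = Counter(claim.verdicts).most_common(1)[0]
    return label

queue = []
claim = submit_claim(queue, "Contrails are secretly spraying chemicals.")
for label in ["misinformation", "misinformation", "accurate"]:
    add_verdict(claim, label)
print(consensus(claim))  # → misinformation
```

The `min_verdicts` threshold reflects a common design choice in crowdsourced moderation: no single volunteer's judgment is published on its own.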

Another promising collective intelligence approach to curtail the amplification of inflammatory and false content in online spaces is crowdsourced community moderation. For example, the r/Science community on Reddit showed that actions taken by existing members, such as regular reposting of community principles, helped to reduce the spread of fake news and increased rule compliance for posted content by eight percent. Crowdsourcing can also help with early detection of misinformation offline. Médecins Sans Frontières (MSF), the International Federation of Red Cross and Red Crescent Societies and UN Global Pulse are already developing tools that use social media analysis, crowdsourcing or community reporting to identify and verify rumors that might interfere with response operations during crises. For example, the Wikirumours platform, developed by the Sentinel Project to crowdsource reports of damaging rumors in conflict-affected regions, has been adapted by MSF to identify disinformation. Spotting new rumors at an early stage allows frontline organizations to adapt so that rumors do not interfere with active programmes in the field.
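
One simple way to surface emerging rumors early is to flag keywords whose mention rate in recent community reports spikes relative to a baseline period. The sketch below is a minimal illustration of that idea, not the Wikirumours or MSF implementation; the function name, thresholds and sample data are assumptions.

```python
from collections import Counter

def detect_spikes(baseline_msgs, recent_msgs, keywords, ratio=3.0, min_count=5):
    # Flag keywords whose mention count in the recent window jumps well
    # above the baseline window -- a crude early-warning signal for rumors.
    base, recent = Counter(), Counter()
    for msg in baseline_msgs:
        for kw in keywords:
            if kw in msg.lower():
                base[kw] += 1
    for msg in recent_msgs:
        for kw in keywords:
            if kw in msg.lower():
                recent[kw] += 1
    # +1 smoothing keeps the ratio defined for previously unseen rumors.
    return [kw for kw in keywords
            if recent[kw] >= min_count and recent[kw] / (base[kw] + 1) >= ratio]

baseline = ["market prices stable today", "rain expected this week"] * 10
recent = ["heard the cholera outbreak is fake news"] * 6 + ["rain expected"] * 4
print(detect_spikes(baseline, recent, ["cholera", "locusts"]))  # → ['cholera']
```

In practice such a signal would only trigger human verification, in line with the human-in-the-loop pattern the tools above rely on.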

Online discussions about climate are highly reactive to real-world attitudes and policies around climate. Analyzing how social media narratives change over time could provide valuable insights into policy interventions and agreements that are most successful at shifting societal norms around climate. For example, sentiment analysis of public social media discourse showed an increase of 30-40 percent in negative sentiments such as “fear” and “sadness” following the publication of high-profile IPCC reports. Social media can also be used to understand the spread of health-related misinformation. In the wake of the Zika outbreak in 2016, researchers demonstrated the potential of using machine learning and crowdsourcing of social media data to tailor the containment actions of health officials.
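
The kind of before-and-after comparison described above can be illustrated with a toy lexicon-based sentiment measure. Real studies use trained sentiment models rather than a fixed word list; the word list, function names and sample posts below are invented for illustration, and the numbers do not reproduce the 30-40 percent figure cited in the text.

```python
# Hypothetical negative-sentiment lexicon (an assumption for this sketch).
NEGATIVE = {"fear", "afraid", "sad", "sadness", "worried", "catastrophe"}

def negative_share(posts):
    # Fraction of posts containing at least one negative-sentiment word.
    hits = sum(1 for p in posts if NEGATIVE & set(p.lower().split()))
    return hits / len(posts)

def sentiment_shift(before, after):
    # Percent change in the negative-sentiment share between two time
    # windows, e.g. before and after a report's publication.
    b, a = negative_share(before), negative_share(after)
    return (a - b) / b * 100

before = ["climate policy debate today", "new solar farm opens",
          "worried about emissions", "report on sea levels"]
after = ["fear for our coastal towns", "sadness over glacier loss",
         "worried about emissions", "new solar farm opens"]
print(round(sentiment_shift(before, after)))  # → 200
```

Tracking this share over successive windows is what turns a one-off snapshot into the narrative-change signal the paragraph describes.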

1 A conspiracy theory from the 1990s alleging that condensation trails (contrails) from aircraft spread chemical or biological compounds for purposes including weather and climate modification.

2 The Line chat app is very popular in Taiwan.