Increase the utility of citizen data for climate issues

This report has demonstrated that citizen-generated datasets are currently underused by decision makers, despite their potential to fill several known evidence gaps in the Global South. Addressing some of the common concerns associated with these datasets, including their size, messiness and potentially “patchy” nature, could narrow the gap between official and traditional knowledge, as well as increase their uptake by decision makers. But addressing quality concerns alone may not be enough. Traditional gatekeepers of global evidence synthesis need to adapt their processes to incorporate these types of insights, and to engage decision makers so they can make best use of the value of these data.

Apply methods from citizen-led experiments in agriculture to other climate issues

Large field experiments with citizen scientists, known as n-trials, are a route to scaling up data collection about which adaptation approaches are most viable in a given location. Trials typically involve large numbers of participants and the random allocation of different treatments in real-world settings, allowing more robust conclusions about the impact of different interventions. This approach is already being used in agricultural experiments with smallholders to test seed varieties, crop management practices and pest control, as in the Seeds for Needs initiative. These initiatives could be enhanced by building on local, traditional and indigenous agricultural practices. Field experiments could also be applied to test interventions in other key areas of climate adaptation, like biodiversity management or health surveillance.
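To illustrate the core of this design, the sketch below randomly allocates each participant a small subset of treatments to compare, so results can be pooled across many plots. This is a generic illustration, not the protocol of any specific initiative; the farmer and variety names are hypothetical.

```python
import random

def allocate_trial(participants, treatments, per_participant=3, seed=42):
    """Randomly assign each participant a small subset of treatments.

    Each participant tests `per_participant` treatments drawn at random,
    so that comparisons can be aggregated across the whole trial.
    """
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    return {p: rng.sample(treatments, per_participant) for p in participants}

# Hypothetical example: 5 farmers each comparing 3 of 6 seed varieties.
farmers = [f"farmer_{i}" for i in range(1, 6)]
varieties = ["V1", "V2", "V3", "V4", "V5", "V6"]
plan = allocate_trial(farmers, varieties)
```

Because each farmer receives only a manageable package of treatments, the design stays feasible for individual participants while the trial as a whole covers every treatment many times.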


Enhance the evidentiary value of crowdsourced data in climate adaptation

Citizen-generated data can help investigate and monitor compliance with regulations. Existing applications include tracking human rights abuses, monitoring wildlife crimes and measuring the impact of policies to reduce waste by industry. But the authentication of citizen-generated data about climate issues or violations of climate legislation is lagging behind. Questions about the provenance and accuracy of crowdsourced data can limit their uptake by legal experts and policy makers. In court cases, for example, it’s important to demonstrate that evidence hasn’t been tampered with. The Digital Evidence Vault, developed by researchers at Carnegie Mellon, is a rare example of a process for logging and authenticating crowdsourced digital data that can be used by human rights investigators. Investing in new standards and tools for verification could enhance the evidentiary value of data crowdsourced from the front lines for formal decision making and regulatory processes.
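One common building block for this kind of verification is a tamper-evident log, where each record stores a hash of its predecessor so that any later alteration breaks the chain. The sketch below is a generic illustration of that idea, not the actual mechanism used by the Digital Evidence Vault; the file names are hypothetical.

```python
import hashlib
import json

def log_evidence(chain, item, timestamp):
    """Append a crowdsourced item to a tamper-evident hash chain.

    Each record embeds the SHA-256 hash of the previous record, so
    editing any earlier entry invalidates everything after it.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"item": item, "timestamp": timestamp, "prev_hash": prev_hash}
    # Canonical serialisation (sorted keys) so the hash is deterministic.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain):
    """Recompute every hash and check each link to its predecessor."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev_hash"] != prev_hash:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
log_evidence(chain, "flood_photo_2024.jpg", "2024-05-01T10:00:00Z")
log_evidence(chain, "witness_statement.txt", "2024-05-01T11:30:00Z")
```

A log built this way lets an investigator demonstrate that records have not been altered since collection, which is exactly the property courts and regulators ask of crowdsourced evidence.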

Develop new approaches to compensate for sparse data for disaster risk and biodiversity management

Hyperlocal collective intelligence initiatives typically collect relatively small but rich datasets. Investing in the development of new statistical techniques that can cope with sparse, unlabelled datasets is key to unlocking the full value of this type of data. This could be particularly useful for crowdmapping initiatives in disaster response that lack data for hard-to-reach locations. RapiD is an existing mapping tool that applies weakly supervised learning to validate geographic predictions made using a limited number of data points labelled by volunteers on OpenStreetMap. Another key application area is biodiversity management, where locally granular but sparse on-the-ground observations could be coupled with remote sensing data to allow better modelling of species’ distributions. This approach could shift the dial on monitoring species listed as Data Deficient on the International Union for Conservation of Nature (IUCN) Red List. In the long term, these investments could help governments to better target their interventions for the 30x30 biodiversity target and the Loss and Damage Fund.
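As a much simpler illustration of working with sparse point data than the weakly supervised learning mentioned above, the sketch below uses inverse distance weighting to estimate a value at an unsurveyed location from a handful of nearby observations. The coordinates and species counts are hypothetical.

```python
import math

def idw_estimate(observations, query, power=2):
    """Estimate a value at `query` = (lat, lon) from sparse point data.

    Inverse distance weighting: nearer observations get larger weights,
    a basic way to interpolate between scattered citizen records.
    `observations` maps (lat, lon) tuples to observed values.
    """
    num, den = 0.0, 0.0
    for (lat, lon), value in observations.items():
        d = math.hypot(lat - query[0], lon - query[1])
        if d == 0:
            return value  # query coincides with an observation point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical species counts at four survey points on a unit grid.
obs = {(0.0, 0.0): 10.0, (1.0, 0.0): 20.0,
       (0.0, 1.0): 20.0, (1.0, 1.0): 30.0}
estimate = idw_estimate(obs, (0.5, 0.5))
```

Richer approaches would replace this with models that also draw on remote sensing covariates, but the principle is the same: borrowing strength from surrounding data to say something about locations where no one has surveyed.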