UNDP's participation at the Nobel Prize Summit 2023

May 26, 2023

 

UNDP had a significant presence this year at the second Nobel Prize Summit, which gathered Nobel laureates, policymakers, members of the public, and global leaders in the arts and sciences from 24-26 May in Washington DC. This year's summit, co-hosted by the Nobel Foundation and the National Academies of Sciences, Engineering, and Medicine, addressed "Truth, Trust and Hope": how we can tackle mis- and disinformation, restore trust in science, and build a better, more hopeful, and more resilient future.

 

UNDP Administrator's Speech on Disinformation  

 

Hope in Action: Open-Source Innovations for Information Integrity  

Highlighting how open-source solutions are helping to address mis- and disinformation across the world

Information pollution distorts facts, erodes trust, and harms democracy and human rights. In countries with weak governance systems, conflicts, or crises, misinformation can have even more devastating effects. That’s why, in February of this year, the United Nations Development Programme (UNDP) and the Digital Public Goods Alliance (DPGA) launched a joint call for open-source innovations, with the goal of discovering and highlighting innovative open-source solutions that can help promote information integrity. It received an incredible response from individuals and organisations worldwide, with 99 technologists, innovators and change-makers from diverse backgrounds and sectors submitting their open-source solutions and concepts. Showcased at the Nobel Prize Summit in Washington, the nine selected solutions empower users to combat disinformation and foster a more informed, resilient information ecosystem. 

 

Congratulations to the following solutions for being selected:   

Incubated at NYU, Ad Observatory is a powerful tool that enhances transparency in digital political advertising on Facebook during elections, providing valuable insights into ad topics and advertiser intent. By shedding light on political advertising practices and fostering public awareness and accountability, it can play a crucial role in combating disinformation. Ad Observatory was used by reporters from major US news outlets in 2020 and 2022, helping to inform readers about digital advertising in elections and uncovering spending patterns and misleading advertisements. It has also helped researchers in Australia explore the role of algorithms and advertising in society. Ad Observatory's advanced language capabilities support analysis in both Spanish and English. Moreover, the team has made the back end open source and is developing an open-source front-end toolkit, expected to be completed by the end of 2023. In 2024, Ad Observatory plans to provide the public with practical transparency about digital advertising in the US presidential election, and to partner with European organisations on a European version that provides transparency for the European Parliamentary elections.

“We built Ad Observatory, a site for finding and visualizing spending on political ads during elections on Facebook. Users can see spend by topic, candidate, or state for every congressional and presidential race… We open-source the code for the website and the data collection and analysis pipeline… Now we're working with teams in Australia and Europe to help them adapt Ad Observatory for elections in their own countries. We hope that Ad Observatory will become a tool that's used in every democracy to give the public more functional transparency into digital political advertising.”
Laura Edelson, New York University
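
For a sense of what this kind of transparency analysis involves, here is a minimal sketch of aggregating political-ad spend by topic and state; the file and column names are illustrative assumptions, not Ad Observatory's actual schema:

```python
# Illustrative sketch: aggregate political-ad spend the way Ad Observatory's
# dashboards do, from a hypothetical CSV export (file name and column names
# are assumptions, not Ad Observatory's real format).
import pandas as pd

ads = pd.read_csv("ad_spend_export.csv")  # assumed columns: advertiser, topic, state, spend_usd

# Total spend per topic, largest first.
by_topic = ads.groupby("topic")["spend_usd"].sum().sort_values(ascending=False)

# Spend per state for a single topic, e.g. "elections".
by_state = (
    ads[ads["topic"] == "elections"]
    .groupby("state")["spend_usd"]
    .sum()
    .sort_values(ascending=False)
)

print(by_topic.head(10))
print(by_state.head(10))
```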

The proliferation of deepfake videos and audio is raising concerns about the spread of purposely deceptive, fabricated content. Selected in the concept category, and led by researchers at UC Berkeley, the DeepFake Fingerprint is being developed as a research methodology that learns a unique identity fingerprint for any given person from only a short snippet of audio-visual content. The goal of this work is to create a distinctive mark for individuals that can ultimately be used to authenticate audio-visual content. Once developed, the technology could be used in conjunction with social media to prove authenticity, which could be particularly valuable for world leaders and public figures. This includes adding an authenticity badge to official content and other verified accounts vulnerable to deepfaking.

“Recently, we've watched how misinformation has evolved from text campaigns on social media into the visual domain. Most concerningly, the speed and accessibility of modern AI methods mean that convincing fake videos and audio can be generated at the click of a button by anyone with an Internet connection. We've seen the devastating consequences of deepfake videos circulating online, from the faces of world leaders being used to spread dangerous warfare misinformation to CEOs of global companies being defrauded of hundreds of thousands of dollars. Now seeing is truly no longer believing, and trust is being redefined… We have developed a research method to ascertain whether a person is the true subject of audio-visual content. Not only can we tell if the person is real, but we can detect the source architecture by which the deepfake video was created. So far, our methodologies have shown impressive accuracy in detecting audio deepfakes, although these tend to be perceptually very hard for humans to differentiate. Ultimately, our methods can be used to prove the authenticity of audio-visual content for everyday internet citizens and rebuild trust in what we see on the Internet.”
Romit Barua and Sarah Barrington, UC Berkeley
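
As an illustration of the fingerprinting idea, the sketch below verifies a clip by comparing an identity embedding against a stored reference; the encoder, the stand-in vectors, and the threshold are placeholders, not the Berkeley team's actual method:

```python
# Illustrative sketch of fingerprint-style verification: compare an identity
# embedding from new footage against a stored reference fingerprint. The
# embedding model is a placeholder; the real research method is not public here.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(reference_fingerprint: np.ndarray,
           candidate_embedding: np.ndarray,
           threshold: float = 0.9) -> bool:
    """Accept the clip as authentic if it is close enough to the reference."""
    return cosine_similarity(reference_fingerprint, candidate_embedding) >= threshold

# Stand-in vectors; in practice these would come from an audio-visual encoder.
reference = np.random.default_rng(0).normal(size=512)
candidate = reference + np.random.default_rng(1).normal(scale=0.1, size=512)
print(verify(reference, candidate))
```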

Feluda, developed by Tattle, addresses the challenge of combating mis- and disinformation on social media platforms that rely heavily on images and short videos, particularly in regional languages in countries like India. Traditional methods of analysing content through URLs fall short because URLs are often absent from photos and messages, making such content hard for researchers to study. Feluda makes multimodal analysis possible by using visual and semantic similarity across modalities. It is an engine designed to understand native multimedia content in Indian languages: it can run image and video searches on Indian fact-checking sites, generate analytical reports on content circulating on platforms like WhatsApp, and answer questions such as which themes are present in a multimodal dataset. The engine is a valuable resource for any team seeking to understand large volumes of multilingual and multimodal content, aiding the fight against information pollution, particularly in fact-checking, research, and data science. Feluda was used intensively in a case study of the information chaos on WhatsApp during India's second COVID-19 wave. It needs sustained investment of time and resources to keep up with the shifting challenges of information disorder, which would allow Tattle to bring it to newsrooms and research labs. Collaborating with journalists and fact-checkers to help them find important stories and track narratives can further enhance Feluda's impact, as can working with misinformation researchers to help them move beyond the study of text-only social media.

“We build tools and datasets to understand and respond to harmful and inaccurate content in India. We began work… after noticing an increasing amount of misinformation amongst the messages consumed by family members on WhatsApp. In India, as in most of the Global South, people communicate on chat apps and social media using not just text but also images, videos, and audio. Feluda is a tool that makes it easy to understand very multimodal datasets. It makes it easy to find similar images and videos, which makes fact-checking more efficient as well as accessible. It also helps you find insights about what themes are popular at the moment, as well as detect coordinated campaigns. Misinformation research and response will require a multi-pronged approach, and Feluda being open source enables independent groups to use it and customize it without being tied to a particular use case or a company… It can help you tackle climate misinformation, health misinformation, as well as political misinformation. It can also help you analyze large numbers of media items and find trends, and this can greatly optimize workflows for fact-checking, content moderation, as well as hate speech detection.”
Denny George, co-founder of Tattle Civic Technologies
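
The core of the multimodal analysis described above is similarity search over embeddings. Here is a minimal sketch of that idea, with a placeholder embed() standing in for Feluda's real, pluggable operators:

```python
# A minimal sketch of embedding-based similarity search: find near-duplicates
# of a query item among precomputed vectors. embed() is a random placeholder
# (deterministic within one run), not Feluda's actual encoder.
import numpy as np

def embed(item: str) -> np.ndarray:
    """Placeholder for a real image/video/text encoder; returns a unit vector."""
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)

corpus = ["post_001.jpg", "post_002.mp4", "post_003.jpg"]
corpus_vecs = np.stack([embed(item) for item in corpus])

def most_similar(query: str, k: int = 3) -> list[tuple[str, float]]:
    scores = corpus_vecs @ embed(query)  # cosine similarity, since vectors are unit length
    top = np.argsort(scores)[::-1][:k]
    return [(corpus[i], float(scores[i])) for i in top]

print(most_similar("post_001.jpg"))
```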

Media Cloud empowers researchers, journalists, and organisations to combat disinformation by providing access to a vast repository of online news media content from around the world. The system offers free and open tools for research and analysis, enabling the study of digital news and online information flows on a global scale. By understanding and tracing how stories are amplified across online news media, it becomes possible to analyse and mitigate pollution within the information ecosystem. Media Cloud also facilitates the examination of reliable media sources and their narratives, contributing to the study of information systems. The platform's user base consists of academics, journalists, independent researchers, and non-profit organisations and foundations. It has recently developed a search interface that lets users explore content from social media platforms such as Twitter, Reddit, and YouTube, further enhancing research capabilities. Media Cloud is growing to accommodate the increasing volume of information and building easier-to-use interfaces so that non-experts can take advantage of its features.

“Media Cloud was developed over 10 years ago, at first to answer a rather pointed question: what influence was the burgeoning blogosphere having on the mainstream media agenda and media discourse? That simple question soon became much more complex, and over the past 10 years we've added over 60,000 news media sources globally into our system. We crawl the open web, and we have APIs to connect to social media platforms. We are really trying to understand information flows on the internet widely, and the digital public sphere. This information is really critical for understanding and beginning to combat the problem of mis- and disinformation. When you have a clearer sense of the information circulating online, then you have a fact-based way of analyzing the problem and developing solutions. Openness is so key for our project, and for the ability to use it to combat those issues of misinformation. When we built our project, we made the entire code base open source. We're crawling the open web. Everything about our project aims to be open and transparent. The Digital Public Goods Alliance and UNDP's Oslo Governance Centre and Chief Digital Office are highlighting open-source solutions for a more trustworthy and informed future.”
Emily Boardman Ndulue, researcher
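
The kind of research Media Cloud supports often starts with a simple attention-over-time analysis. Here is an illustrative sketch against a hypothetical export of story metadata; the file and column names are assumptions, not Media Cloud's actual format:

```python
# Illustrative only: attention-over-time analysis of news coverage, run on a
# hypothetical CSV of story metadata (column names are assumptions).
import pandas as pd

stories = pd.read_csv("stories.csv", parse_dates=["publish_date"])  # assumed columns: publish_date, media_name, title

# Weekly volume of coverage, a simple proxy for how a narrative spreads.
weekly = stories.set_index("publish_date").resample("W").size()

# Which outlets amplified the story most.
top_sources = stories["media_name"].value_counts().head(10)

print(weekly.tail())
print(top_sources)
```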

Open Terms Archive is a digital common empowering journalists, regulators, lawmakers, researchers, and activists to understand, respond to, and influence the rules of online services. It publicly archives the terms and conditions of services in different languages and countries, making them easily readable and highlighting changes. This allows individuals to uncover unfair practices, ensure better protection against misinformation, hold big platforms accountable, and design effective regulations. Open Terms Archive connects and empowers individuals and organisations to collectively improve the transparency and fairness of online platforms and help foster a healthier digital experience. Building on its focus on information pollution and consumer protection, Open Terms Archive is now looking for funders to support crowdsourced tracking of the terms of generative AI services, an industry that is rapidly evolving, largely unregulated, and set to massively affect many sectors.

“The idea for Open Terms Archive emerged from my experience defending the European elections against disinformation. My team was demonstrating that the advertisement-based business model of large social media platforms provides an incentive to keep open the vulnerabilities that enable disinformation. We concluded that responding defensively to specific attacks would be a never-ending battle, and a losing one. Our goal is thus to shift the balance of power from large platforms towards the common good. Open Terms Archive reinforces actors who motivate platforms to close their vulnerabilities to information manipulation. We publicly track changes to terms of services and notify an ecosystem of researchers, regulators, lawmakers, consumer protection NGOs, and media outlets, who have improved reactiveness, precision, and scale in assessing loyalty and compliance and in drafting regulation… Anyone can audit our open-source code and everyone is encouraged to contribute.”
Matti Schneider, Director, Open Terms Archive
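
At its heart, the approach amounts to diffing successive snapshots of a service's terms to surface what changed. A minimal sketch with Python's standard difflib; the file names are hypothetical, and the real project versions documents in Git:

```python
# A minimal sketch of what Open Terms Archive automates: diffing two snapshots
# of a service's terms to highlight changes. File names are hypothetical.
import difflib
from pathlib import Path

old = Path("terms_2023-01-01.md").read_text().splitlines()
new = Path("terms_2023-05-01.md").read_text().splitlines()

diff = difflib.unified_diff(
    old, new,
    fromfile="terms_2023-01-01.md",
    tofile="terms_2023-05-01.md",
    lineterm="",
)
print("\n".join(diff))
```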

Phoenix by Build Up is a solution designed to address the negative impact of digital media on societal divides and conflicts. It aims to give peacebuilders a deep understanding of the divisions present in digital media and their real-world consequences. Phoenix achieves this by scraping and organising digital content, using artificial intelligence to automate classifications created in collaboration with peacebuilders. This enables detailed analysis of digital conflicts and generates actionable insights that humanitarians can use when facilitating decisions. By recognising the interconnectedness of online and offline events in conflict settings, Phoenix empowers peacebuilders to bridge the divide and foster constructive dialogue.

“Information pollution exacerbates societal divides in communities across the world. Patterns of digital content consumption and interaction are intertwined with polarization, dehumanization, and violence… It is a complex problem that paralyzes conflict responders with confusion…” according to Helena Puig Larrauri, a developer of one of the successful submissions, the peacebuilding application Phoenix. “We have to make tools that are usable by any local peace group and can support digital media analysis in the communities and languages they think are important. The open-source model is key to this because it will enable customization and community-driven development that may exceed our ambition.”
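
As a rough illustration of the classification step described above, the sketch below tags posts with categories; the keyword rules are invented stand-ins for the AI models Phoenix builds together with peacebuilders:

```python
# Illustrative sketch of content classification with collaboratively defined
# categories. The keyword rules here are invented stand-ins; Phoenix itself
# uses AI classifiers developed with its peacebuilding partners.
KEYWORD_RULES = {
    "dehumanising_language": ["vermin", "parasites"],
    "conflict_event": ["clashes", "checkpoint", "shelling"],
}

def classify(post: str) -> list[str]:
    """Return every category whose keywords appear in the post."""
    text = post.lower()
    return [label for label, words in KEYWORD_RULES.items()
            if any(w in text for w in words)]

posts = [
    "Clashes reported near the northern checkpoint this morning.",
    "Community dialogue session scheduled for Friday.",
]
for p in posts:
    print(classify(p), "-", p)
```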

Querido Diario, developed by Open Knowledge Brazil, addresses the challenge of accessing and analysing official decision-making acts across Brazil's cities. With no centralised platform available, the only reliable source of this information is the closed, unstructured PDF files of the official gazettes where the acts are published. To close this information gap, Querido Diario's robots collect, process, and openly share these acts. Launched over a year ago, it has grown into a comprehensive repository of more than 180,000 files, continuously updated with daily collections. Querido Diario helps combat information pollution by providing a transparent, reliable source of data that can be used to fact-check and counter false narratives, enabling informed analysis and promoting accountability. Its primary users are researchers, journalists, scientists, and public policy makers, and it benefits sectors ranging from environmental researchers and journalists to education NGOs and scientists working with public data. Querido Diario's coverage today reaches 67 cities, home to 47 million people. The next steps involve scaling up to include all 26 states and at least 250 cities. The project aspires to incorporate Natural Language Processing models and to integrate its data with other public datasets, helping users contextualise information even further.

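
A minimal sketch of how such a corpus can support fact-checking: a keyword search across locally downloaded gazette text files. The directory, file layout, and query term are illustrative assumptions; the project offers its own search over the real data:

```python
# Illustrative sketch: search locally downloaded, text-extracted gazettes for
# a keyword. Paths and the query are assumptions, not Querido Diario's layout.
from pathlib import Path

QUERY = "licitação"  # public-procurement notices, as an example topic

for gazette in Path("gazettes/").glob("*.txt"):
    text = gazette.read_text(encoding="utf-8")
    if QUERY in text.lower():
        print(f"{gazette.name}: mentions '{QUERY}'")
```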

RegretsReporter, a browser extension built by the Mozilla Foundation, powers the largest ongoing crowdsourced investigation into YouTube's recommendation algorithm. With participation from over 70,000 individuals from 191 countries, RegretsReporter allows people to donate data about harmful videos that they are recommended on YouTube. Mozilla's analysis of this data has uncovered dangerous patterns in the algorithm's behaviour, including the frequent promotion of content that violates the platform's own Community Guidelines, a concerning issue that is more prevalent in non-English speaking regions. YouTube's recommendation algorithm is notoriously opaque. RegretsReporter disrupts this lack of transparency by harnessing community participation, shedding light on how the algorithm may contribute to information pollution. Furthermore, the project's recent extension release introduces a unique feature that enables users to contest and challenge problematic recommendations, leading to a reduction in information pollution. RegretsReporter has powered research and investigations that have been cited in European regulations, in US Supreme Court cases, and in shareholder resolutions aimed at holding YouTube accountable. Now, the Mozilla team are releasing anonymised, public datasets from RegretsReporter to enable people from around the world to conduct their own impactful investigations into YouTube's algorithm.

“RegretsReporter is the world's largest crowdsourced investigation into YouTube's recommendation algorithm. We decided to develop RegretsReporter because YouTube's recommendation algorithm is one of the most impactful consumer AI systems that people interact with every single day. Their recommendation algorithm drives more than 700 million hours of watch time on the platform. It's incredibly opaque, and nobody outside of YouTube really has any way to understand how the algorithm works and, importantly, what kinds of content it's recommending to people. We decided to build RegretsReporter to get more information about what kinds of things are being recommended on the platform and to provide the research community with more access to data to be able to understand and investigate some of these things, and ultimately to hold YouTube accountable. And our research has found that the platform recommends videos that violate, for instance, its very own Community Guidelines and Terms of Service…”
Brandi Geurkink, Mozilla
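
To illustrate what the anonymised public datasets could enable, here is a sketch of a simple analysis against a hypothetical CSV; the file and column names are assumptions, not the dataset's actual schema:

```python
# Illustrative only: the kind of analysis crowdsourced recommendation data
# enables, sketched against a hypothetical CSV (column names are assumptions).
import pandas as pd

reports = pd.read_csv("regrets_reports.csv")  # assumed columns: country, category, violates_guidelines (0/1)

# Share of reported recommendations flagged as violating YouTube's own rules.
violation_rate = reports["violates_guidelines"].mean()

# Where reports come from, to compare English- and non-English-speaking regions.
by_country = reports["country"].value_counts().head(10)

print(f"Flagged as violating guidelines: {violation_rate:.1%}")
print(by_country)
```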

Ushahidi is a global not-for-profit technology company that gives citizens the tools to generate data on the ground, raising their voices, influencing change, and mobilising support. Ushahidi builds open-source software that helps people gather, analyse, and act on data, whether it's about climate change, elections, or any other issue that affects them. Its flagship product is an integrated data crowdsourcing and mapping platform that plays a crucial role in combating disinformation by safeguarding information integrity, particularly in critical situations like elections and humanitarian disasters. It uses criteria such as location and time to validate crowdsourced data for accuracy. Collaborating with partners, Ushahidi verifies urgent information and facilitates prompt responses. The platform serves a diverse user base, including the general public, first responders, and international organisations, enabling them to collect and analyse information, report incidents, and gain situational awareness. Ushahidi's tools empower users to address important issues, promote citizen participation, inform decisions, and drive systemic change. The premise of Ushahidi's work since its inception has been raising the voices of marginalised groups to ensure that their lived experiences influence the change they want and need to see in the world. Ushahidi aims to raise 20 million additional voices from marginalised groups by 2026, with the main goal of achieving meaningful, long-lasting change through knowledge based on inclusive and truthful data.

“Ushahidi was born out of the post-election violence that broke out in Kenya back in 2007-2008. The problem back then was that many of us were stuck in our houses, not knowing exactly what was happening in different parts of the country. And so Ushahidi's founders came together and set up an open-source platform that enabled ordinary citizens to share messages via SMS, email, tweets, or the web platform, and have that directly influence the kind of humanitarian response that was provided during that time. Since then, we've really focused on making sure that we are providing an open-source tool that is easily accessible and affordable, so that we are doing our part to make data and technology accessible to everyone. Initially, the focus was on making sure that people were able to go out and vote, but as time has gone by, we've noticed that there is an onslaught on our data ecosystem. It's being extremely corrupted by mis- and disinformation.”
Ushahidi
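
As an illustration of the criteria-based validation described above, the sketch below accepts a crowdsourced report only if its location and timestamp pass basic checks; the bounding box, time window, and report fields are assumptions, not Ushahidi's actual rules:

```python
# Illustrative sketch of criteria-based validation: accept a report only if
# its location falls inside the deployment area and its timestamp is recent.
# Thresholds and fields are assumptions for this example.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Report:
    lat: float
    lon: float
    timestamp: datetime
    text: str

# Hypothetical bounding box for a deployment (roughly around Nairobi).
LAT_RANGE = (-1.55, -1.10)
LON_RANGE = (36.60, 37.10)
MAX_AGE = timedelta(hours=24)

def passes_basic_checks(report: Report) -> bool:
    in_area = (LAT_RANGE[0] <= report.lat <= LAT_RANGE[1]
               and LON_RANGE[0] <= report.lon <= LON_RANGE[1])
    fresh = datetime.now(timezone.utc) - report.timestamp <= MAX_AGE
    return in_area and fresh

r = Report(-1.29, 36.82, datetime.now(timezone.utc), "Road blocked near the market.")
print(passes_basic_checks(r))
```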

We express our gratitude to our advisory panel of experts, whose extensive experience and insights were invaluable in assessing the submissions. An extended thank you goes to Craig Newmark Philanthropies, the Government of Norway, and Omidyar Network for their support of this initiative.

                     

                    Untold Stories: Youth initiatives to promote healthy information ecosystems in the Global South

On 26 May, the UNDP Oslo Governance Centre and UNDP's Washington Representation Office (WRO) hosted a hybrid panel, moderated by the Human Development Report Office (HDRO), with nine youth leaders from across the Global South on how information integrity, as a decisive driver of democracy, human rights, and development, can be better integrated into wider development agendas and programmes.

The young leaders raised issues such as dis- and misinformation targeting vulnerable groups, deepening social divides and polarisation, harming democratic processes, and delegitimising mediation. The gap between digital access and digital literacy was pointed out as an issue, as were gaps in access to information itself. Information getting corrupted in translation between languages was identified as a significant problem.

Suggested solutions focused on education and on promoting digital literacy, critical thinking, and behavioural change. There was agreement on the need for fact-checkers in more languages. There was also a focus on investing in fact-checking infrastructure, leveraging AI and digital content for positive outcomes, and promoting data transparency. Working with local communities, as well as creating cross-border collaboration, was highlighted. Finally, speakers stressed the importance of ensuring that the views of youth are meaningfully represented in decision-making.

                    These insights provide valuable input to HDRO’s research and to the growing discussion about information pollution and the challenges it presents. 

                    Panellists:

                    • Zawad Alam, Founder & Team Lead, Project WE, Bangladesh 
                    • Wani Geoffrey, Developer, AlertMe app, South Sudan
                    • Santosh Sigdel, Co-founder and Executive Director, Digital Rights Nepal, Nepal
                    • Marija Krstevska Taseva, President, National Youth Council of Macedonia (NYCM), North Macedonia 
                    • Luísa Franco Machado, Digital Rights Activist, Brazil 
                    • Gisselle Wolozny, Director, El Milenio, Honduras 
                    • Dickson Matulula, CRS Project Katoba Youth Climate Champions, Zambia
                    • Dania Al Nasser, Co-Founder, Wain Al Ghalt (‘Where is the mistake?’) initiative, Jordan  
                    • Alisson Ramirez, Journalist, Ojo Publico, Peru

                     

The summit website livestreamed much of the proceedings, along with additional content including a video message from the Administrator.