5. Do no harm

This standard is linked to the following Principles for Digital Development: Design With the User, and Address Privacy & Security.

Digital technology can be an incredible force for good. At the same time, however, digital technologies are often developed without adequate consideration of the negative impacts they can cause. Technology is not neutral: because it is built by humans, most technology carries inherent bias and can therefore have unforeseen negative impacts on people and the planet.

This standard comes in two parts: Part A - Ensure Human Rights, and Part B - Protect the Environment.

Part A: Ensure Human Rights

The list of potential harms from digital technology is daunting: suppression of speech, privacy violations, data leaks, algorithmic discrimination, and more. It is therefore our responsibility to integrate a human rights-based approach into all digital projects and to effectively manage the human rights risks associated with digital technologies.

It is important to keep in mind that harm may not be immediately obvious. For instance, AI models built by averaging data across populations have often sidelined marginalized communities and minorities, even as those communities are disproportionately subjected to the technology's impacts. Moreover, if your training dataset is biased against a certain population, your AI will absorb those biases. See, for example, what happened to Microsoft's Twitter bot Tay, which learned to post abusive content from user interactions within a day of launch.
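
To make that mechanism concrete, the sketch below (synthetic data and hypothetical numbers only, not any real dataset or UNDP tool) shows how under-representation in training data surfaces as a higher error rate for the under-represented group, even when overall accuracy looks acceptable:

```python
# Minimal sketch with synthetic data: a model trained on a dataset in
# which one group is under-represented reproduces that imbalance as a
# higher error rate for the minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One informative feature. The minority group's relationship between
    # feature and outcome is shifted, so a single decision boundary
    # fitted mostly to the majority cannot serve both groups equally.
    x = rng.normal(0.0, 1.0, n).reshape(-1, 1)
    y = (x[:, 0] + shift > 0).astype(int)
    return x, y

# 95% of the training samples come from the majority group.
x_maj, y_maj = make_group(950, shift=0.0)
x_min, y_min = make_group(50, shift=1.0)

model = LogisticRegression().fit(
    np.vstack([x_maj, x_min]), np.concatenate([y_maj, y_min])
)

# Aggregate accuracy hides the harm: the minority group bears the errors.
print("majority-group accuracy:", model.score(x_maj, y_maj))
print("minority-group accuracy:", model.score(x_min, y_min))
```

This is why per-group evaluation, not just aggregate accuracy, belongs in any bias check.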

It is also important to think about who is going to use and interact with the technology, and what their motivations and goals are. For example, helping a government create a health and education platform may be well-intentioned, but could the platform also be used to collect data that fuels discrimination? Similarly, a social media algorithm that optimizes for engagement can, as a by-product, promote fake news or extremist political content, because users react more strongly to, and share more widely, content that enrages them.
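
That by-product effect is easy to see in miniature. The sketch below (hypothetical post titles and made-up scores, not any real platform's system) ranks a feed on predicted engagement alone, then on a blend of objectives:

```python
# Minimal sketch with made-up numbers: a feed ranked purely on predicted
# engagement. If enraging content reliably earns more reactions and
# shares, a single-objective ranker surfaces it first as a by-product,
# without anyone having chosen to promote it.
posts = [
    # "integrity" is a hypothetical second signal, e.g. an estimate of
    # how likely the post is to be accurate and non-abusive.
    {"title": "Local library opens a new wing",         "engagement": 0.04, "integrity": 0.95},
    {"title": "Inflammatory claim about a rival group", "engagement": 0.31, "integrity": 0.10},
    {"title": "Community garden harvest tips",          "engagement": 0.06, "integrity": 0.95},
    {"title": "Viral conspiracy theory",                "engagement": 0.27, "integrity": 0.05},
]

def show(feed, label):
    print(label)
    for p in feed:
        print(f'  {p["title"]}')

# Optimizing for one goal: the enraging posts float to the top.
show(sorted(posts, key=lambda p: p["engagement"], reverse=True),
     "Ranked on engagement alone:")

# One mitigation: rank on a blend of objectives rather than just one.
show(sorted(posts, key=lambda p: p["engagement"] * p["integrity"], reverse=True),
     "Ranked on engagement x integrity:")
```

Real ranking systems are vastly more complex, but the failure mode is the same: whatever single metric you optimize is what you get, together with its side effects.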

Ask yourself:

  • How can we ensure we do not lock people into future harms?
  • Who could be harmed by this innovation, and when?
  • Which human rights does this project touch upon?
  • Who does this solution actually benefit?
  • Was appropriate consent obtained from the end user?
  • Is there any future good we might foreclose by implementing this solution?

You must map out the potential consequences and risks of implementing a digital solution, including first-, second-, third- and fourth-order consequences (example HERE), and plan mitigation measures.

Examples of human rights impacts at each stage of the digital technology value chain:

Design and development
  • Bias and discrimination
  • Indigenous peoples’ rights
  • Privacy

Raw materials
  • Children’s rights
  • Hazardous working conditions
  • Environmental impacts
  • Health and safety
  • Livelihoods

Manufacture, transport and logistics
  • Modern slavery, forced labour and trafficking
  • Working conditions
  • Housing
  • Health and safety
  • Freedom of association

End use
  • Privacy
  • Bias and discrimination
  • Life, safety and security

Part B: Protect the Environment

Technology can improve our ability to live in ways that regenerate ecosystems and mitigate climate change. Technology is helping us to use less energy, produce cleaner energy, produce food in more sustainable ways, shift to circular models of production, have cleaner transport options, better monitor our impacts on the planet, and easily share ways to do better.

However, technology can also cause harm to the planet, directly and indirectly. 

The biggest direct environmental impacts of technology are increased energy consumption, material mining and waste.

Negative impacts may include:

  • Algorithms that reinforce unsustainable behaviour.
  • E-waste accumulating in countries that have no facilities to process it.
  • Social media driving increased consumption of fast fashion and disposable goods.
  • Globalisation creating a marketplace in which goods are shipped or flown all over the planet in wasteful ways.
  • Companies remaining accountable to shareholders for constant growth under an ‘extract > make > waste’ model that is detrimental to the planet.
  • Technologies like cryptocurrencies, which save energy in the fast transfer of value across the planet but can consume huge amounts of computing energy to mine new coins.

To mitigate and prevent environmental harms caused by technology, it is first important to understand how the technology actually works. What are its sources of energy? What is the lifecycle of the technology product? How is waste managed? What might be the second-, third- and fourth-order consequences of introducing this technology into an environment or ecosystem?

Understand Greentech and how you could use it to support sustainability goals. The main goal of Greentech is to minimise the negative impact of new technologies on the planet; its specific goals include:

  • Sustainability - fulfilling our needs in a way that does not have a negative impact on our environment.
  • Innovation - finding new and more efficient ways to replace existing technologies with eco-friendly ones.
  • Viability - adopting new methods of green technology and creating new jobs that facilitate them.
  • Source reduction - reducing the consumption of resources, waste, and pollution.

 

Do:

  • Consider potential human rights impacts and take mitigation measures.
  • Consider potential harms to the planet and take mitigation measures.
  • Consider the long-term consequences of your digital projects.
  • Think carefully about how you present choices (especially default choices) in any technology solution and user interface.
  • Make sure that you follow the UNDP Data Principles when collecting data from users. 
  • Map out the potential first-, second-, third- and fourth-order impacts of implementing any solution, and look for ways the technology could harm different populations, environments or ecosystems. Make sure you consider the whole life-cycle of any solution, including energy use and material use and disposal.
  • Have a fresh pair of eyes (ideally someone not involved in the project) check for biases and assumptions, as well as for what could go wrong.
  • Be aware of biases in the datasets and technology that you feed into your projects. 
  • Understand the political environment and cultural context into which your product will be launched and use this understanding to map social risks.
  • Define a mitigation strategy and protection mechanisms for each identified potential harm.
  • Make sure you have a good understanding of the context your design will be part of, and the power dynamics at play within it.
  • Consider when it is necessary to incorporate UN Human Rights Due Diligence on Digital Technologies (guide to follow in 2023).
  • Read the Ledger of Harms by the Center for Humane Technology. It will give you an understanding that technology itself has a certain degree of agency, and that unintended consequences can often outweigh the original intended goals.

Don’t:

  • Assume technology is neutral. 
  • Make technology choices that would reinforce any stereotypes and discrimination.
  • Optimize for only one goal without understanding the potential knock-on implications for the ecosystem in which your solution will operate.
  • Assume the government or partners have the best interests of their populations front-of-mind.
  • Assume a technology is ‘green’ even if it is sold as such.

 

Tools

  • A set of questions that aims to help designers consider the values they are embedding in their products or services.
  • Decision Tree: use the decision tree template in the Ethical Technology Governance playbook (PDF) to think about risks.

 

UN Resources

To Watch

To Read

Case Studies