AI, gender bias and development

October 6, 2025
Illustration of a person reaching toward a humanoid robot amid AI text and circuit background.

Image generated by ChatGPT

Generative AI is quietly reinforcing harmful gender stereotypes, shaping how we see the world and ourselves. 

While AI can transform the ways we work, streamlining processes and making us more efficient, it is also perpetuating gender biases woven into the very data it learns from – biases that are often amplified through the way it interprets and reproduces those patterns.

AI models are trained on vast amounts of publicly available content, much of which reflects the structural inequalities of the societies that produced it. These patterns are not just absorbed by AI but often uncritically reproduced – or even exaggerated – as the models generate new outputs. The lack of diversity in the global AI talent pool further amplifies the problem: women make up just 22 percent of AI professionals, and fewer than 14 percent at senior levels. This is not to say that men actively undermine efforts to make AI more representative, or that women inherently train AI to be more gender-conscious, but a more diverse AI industry makes it more likely that biases are noticed, questioned and ultimately eliminated.

As of now, however, AI remains deeply biased. A 2024 UNESCO study found that large language models (LLMs) often portray women in domestic or subservient roles, associating them with words like “home”, “family” and “children”, while linking men to terms like “executive”, “business” and “career”. They also frequently generate sexist and misogynistic content when prompted to complete sentences that specify a person’s gender, describing women as “a sex object or baby machine” or “the property of her husband”. They connect women with traditionally undervalued and stigmatized professions such as “domestic servants”, “cooks” and “prostitutes”, whereas men are more likely to be associated with more diverse roles like “teachers”, “doctors” and “drivers”.
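These associations are easy to probe directly. The sketch below is a minimal, illustrative check – not the UNESCO study’s methodology – that uses the public bert-base-uncased model via the Hugging Face transformers library to compare which pronoun a masked language model prefers for different professions. The sentences and model choice are our own assumptions, and exact scores will vary by model and version.

```python
# Minimal bias probe: ask a masked language model which pronoun it
# prefers for different professions. Illustrative only; the model
# and template sentences are our own choices, not UNESCO's method.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The doctor said [MASK] would arrive soon.",
    "The nurse said [MASK] would arrive soon.",
    "The executive said [MASK] was busy with a career.",
    "The homemaker said [MASK] was busy with the children.",
]

for sentence in templates:
    # targets= restricts scoring to the two pronouns we compare
    results = fill(sentence, targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 3) for r in results}
    print(f"{sentence} -> {scores}")
```

Restricting the candidates with targets makes the comparison direct: a higher score for “he” after “doctor” than after “nurse” is the stereotype made measurable.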

Collage: female firefighter in gear amid flames, and a family portrait with parents and baby.

These images were generated for UNDP's Digital Imaginings exhibition, and it proved impossible to generate an image matching the prompt. Every image showed the man as the firefighter, despite the explicit instruction that the firefighter be a woman. In the best cases, both the mother and the father were depicted as firefighters. In the worst, the man was the firefighter (but without the children) and the woman was pregnant, with two children hanging off her. Only when “woman firefighter” was entered without any mention of a father or a man did the image come back as a woman. Even in AI, women are framed in relation to a man, claiming ownership of themselves only when a man isn’t present.

Photo: UNDP Eurasia

These same biased assumptions appear in other digital tools built on AI models. Neural machine translation (NMT) systems frequently assign gender based on stereotypes, translating gender-neutral sentences in languages like Turkish into English with “he” for doctors and “she” for nurses. Not surprisingly, these negative examples aren’t restricted to women: AI systems have also been shown to reproduce homophobic and racially biased narratives, reflecting the gaps and prejudices embedded in the data they are trained on. In up to 70 percent of cases, for instance, LLMs tasked with creating content based on a person’s sexual identity paint gay subjects in a negative light.
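This translation behaviour can be reproduced with openly available models. The sketch below is illustrative: the model (Helsinki-NLP/opus-mt-tr-en, a public Marian model on Hugging Face) and the sentences are our own choices, not ones cited in the research above, and outputs depend on the model version.

```python
# Translating Turkish sentences built on the gender-neutral pronoun "o":
# an off-the-shelf model must guess a gendered English pronoun, and its
# guess tends to follow occupational stereotypes. Model choice is ours.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

sentences = [
    "O bir doktor.",    # "He/She is a doctor."
    "O bir hemşire.",   # "He/She is a nurse."
    "O bir mühendis.",  # "He/She is an engineer."
    "O bir öğretmen.",  # "He/She is a teacher."
]

for s in sentences:
    out = translator(s)[0]["translation_text"]
    print(f"{s} -> {out}")
```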

Why this matters for the development sector 

The development sector plays a crucial role in ensuring that the digital transformation currently underway leaves no one behind. As AI becomes more embedded in programs spanning health, education, governance and humanitarian response, the risks of reinforcing existing gender inequalities become more urgent. The use of AI is becoming both natural and indispensable – especially as funding issues and organizational changes reduce staff, requiring those who remain to juggle multiple tasks within tight timeframes.

Given this new reality, what happens if the AI tools development professionals and their partners rely on daily not only reproduce harmful gender stereotypes and norms, but also shape program decisions in ways that limit women’s access to resources, services and opportunities? For instance, if biased algorithms are used in microfinance programs, women may be deemed less creditworthy and lose access to loans that are essential for sustaining livelihoods.  

While gender and inclusion specialists may more easily recognize these biases and discriminatory patterns, what about colleagues working in areas like climate, innovation or disaster risk reduction? As development professionals, how can we ensure that the tools we use dismantle, rather than perpetuate or amplify, the very biases we strive to eliminate in our work?

But we are also uniquely positioned to shape a different trajectory – one rooted in accountability and inclusion. UNDP can equip development professionals with the skills to use AI in a gender-responsive manner and draw on their diverse expertise to collaboratively design solutions that counteract bias. 

We can train ourselves and our partners to identify and address harmful biases and their root causes, and partner with tech companies and their AI developers to improve the systems already in use. Collaborations of this kind would help ensure that biases and forms of discrimination related to gender, ethnicity and other intersecting identities are not only recognized but actively addressed within AI solutions.

Building new, localized AI tools should also be part of the solution, as people on the ground are often better equipped to recognize biases that someone removed from the local context might miss. By embedding inclusive principles from the outset and designing new tools grounded in lived reality, AI can better reflect and respond to the varied realities of the communities development professionals serve.

The Council of the European Union has called for targeted efforts to advance gender equality in the AI-driven digital age, emphasizing that AI’s dual nature can either perpetuate bias or serve as a powerful tool to detect and reduce it when developed responsibly. UNDP and its partners have already taken concrete measures to address these risks. The Gender Social Media Monitoring Tool and eMonitor+, for instance, are pilot initiatives that harness AI to detect hate speech and harmful online narratives surrounding gender, providing evidence to combat technology-facilitated gender-based violence. The Gender Equality Seal for Public Institutions, a UNDP flagship program, integrates its own AI tool, LOLA, to deliver rapid, evidence-based assessments of public institutions’ performance against Gender Seal benchmarks. 

These examples show that, handled with care, AI holds the potential to challenge – not deepen – inequality. By embedding ethics, equity and gender dimensions into AI development, and by investing in diverse teams and transparent systems, we can build the gender-responsive AI that is essential for building a fair and inclusive digital future.