Building Safer AI for All Languages: A Collective Pathway to Inclusive Human Development
February 16, 2026
Authors: Dr. Kyungho Song & Dr. Jiyeon Cho, Korea AI Safety Institute; Alena Klatte, Jennifer Louie & Barbora Bromová, UNDP Digital, AI and Innovation Hub
The promise of Artificial Intelligence (AI) resonates across communities and countries: faster access to information, greater efficiency in service delivery and more economic opportunities to improve people’s lives. Our collective experiences and lessons from the technology that exists today present a massive opportunity to scale the benefits of AI to more than 1.2 billion people worldwide who speak languages other than the high-resource languages that AI systems are typically built around. Realizing this ambition requires coordinated action that strengthens community agency and enhances human security, shaping an AI-powered future that serves and empowers everyone, including linguistically diverse populations.
Addressing the existing performance gap in multilingual AI
The performance of multilingual AI technologies has improved over time, yet stark discrepancies remain between high- and low-resource languages that need to be addressed to ensure linguistically diverse populations can benefit equally. AI models tend to underperform for speakers of low-resource languages, which have a limited digital presence and training data that is scarce or non-existent. While some systems can receive and process queries in low-resource languages like Lingala in Central Africa, Quechua in South America, or Nepali in South Asia, a closer look at their outputs highlights where systematic improvements are needed to leverage them effectively in multilingual contexts. In low-resource languages, AI systems tend to be slower, less responsive to model safeguards, more difficult to evaluate through standard benchmarks and up to five times more expensive than equivalent use in English.
UNDP's work on inclusive Language AI and Trust & Safety in AI systems brings value in this area, focusing on building the frameworks, partnerships and enabling environment needed to bridge data gaps and ensure AI systems are safe and reliable across languages. Emerging insights from conversations and in-country implementation have revealed that the issue is not merely a technical limitation; it is a matter of global equity that, once addressed, can unlock transformative opportunities. Addressing the prevailing economic, infrastructural and connectivity gaps is important to continue expanding the gains of inclusive digital development for human development and to catalyze locally-driven innovation.
Unlocking economic opportunity with language-inclusive AI tech
AI systems continue to unlock profound economic opportunities that can transform both people’s lives and national economies. However, reducing the ‘double tax’ that speakers of low-resource languages often face when using AI systems will be key. In several low-resource languages, information is structured differently from the high-resource languages on which many of today’s AI models are developed. This increases processing costs, as models struggle to navigate unfamiliar syntax, token structures or alphabets. Technologies designed for only a handful of languages also exhibit unreliable performance and reasoning when used in linguistically diverse contexts. This combination of higher costs and lower performance places an additional burden on linguistically diverse communities in search of new economic opportunities.
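To make the ‘double tax’ concrete, here is a minimal, hypothetical Python sketch. Many tokenizers fall back to raw UTF-8 bytes for scripts that are rare in their training data, so a Devanagari character can cost roughly three times the bytes of a Latin one. The sample sentences and the byte-count proxy below are illustrative assumptions, not measurements from any specific model or tokenizer.

```python
# Illustrative sketch: why low-resource scripts can cost more tokens.
# Some tokenizers fall back to UTF-8 bytes for scripts that are rare in
# their training data, so each character may cost several "tokens".

def utf8_bytes(text: str) -> int:
    """Number of UTF-8 bytes, a rough proxy for worst-case token count."""
    return len(text.encode("utf-8"))

english = "Where is the nearest health clinic?"
nepali = "नजिकको स्वास्थ्य क्लिनिक कहाँ छ?"  # the same question in Nepali

# Each Devanagari character is 3 bytes in UTF-8, versus 1 byte for ASCII,
# so the Nepali query costs far more bytes despite being shorter to read.
print(utf8_bytes(english))
print(utf8_bytes(nepali))
```

Under byte-fallback tokenization, this gap translates directly into higher per-query cost and slower responses for the Nepali speaker, which is one mechanism behind the pricing disparity described above.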
Improvements in these areas can extend tangible benefits to linguistically diverse groups, such as refugees and migrants who speak low-resource languages and are navigating unfamiliar linguistic contexts. Whether for healthcare, legal or employment purposes, AI translation tools are common go-to resources for these groups, but the extent of their benefits is limited if outputs are unreliable. Organizations like Migrasia have documented how this double burden compounds vulnerability in moments when accuracy and reliability matter most, tracing it to a higher incidence of worker exploitation and other potential human rights abuses. Turning challenges into opportunities, UNDP, through its Trust and Safety Re-imagination Lab, is supporting Migrasia alongside 16 other leading AI trust and safety organizations and teams in translating their research insights into advocacy and community engagement action plans that inform safe and inclusive system design and deployment.
Safeguarding information integrity and community agency
Besides promoting equal access to economic opportunities, the importance of ensuring agency, self-determination and information integrity in the advancement of language AI systems should not be overlooked. Unaddressed linguistic biases in AI systems could reinforce dominant cultural perspectives and marginalize alternative viewpoints. For instance, research suggests that Large Language Models (LLMs) display systematic bias towards retrieving information available in the same language used in the query. When prompted in a low-resource language, in many cases these systems will preferentially retrieve and generate answers from training documents in that low-resource language. While this choice may seem reasonable on the surface, in reality it often results in unreliable outputs, as the training data is limited and tends to include only a few low-resource language documents. In instances where no information is available in the query language, LLMs generally default to sources written in high-resource languages, particularly English.
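The retrieval bias described above can be sketched with a toy example. Everything here, including the corpus, the language tags and the preference rule, is an illustrative assumption rather than the behaviour of any particular LLM:

```python
# Toy model of same-language retrieval bias: the retriever prefers
# documents in the query's language, however thin that pool is, and
# falls back to English only when no in-language document exists.

DOCS = [
    {"lang": "en", "text": "Vaccines are safe and effective."},
    {"lang": "en", "text": "Clinic hours are 9am to 5pm."},
    {"lang": "qu", "text": "Hampina wasi kichasqa kachkan."},  # one Quechua doc
]

def retrieve(query_lang: str) -> list:
    """Return candidate documents for a query in the given language."""
    same_lang = [d for d in DOCS if d["lang"] == query_lang]
    # Bias: if anything exists in the query language, only that is used,
    # mirroring the unreliability noted above when the pool is tiny.
    return same_lang if same_lang else [d for d in DOCS if d["lang"] == "en"]

print(len(retrieve("qu")))  # 1 -- answers drawn from a single document
print(len(retrieve("ne")))  # 2 -- no Nepali docs, falls back to English
```

The Quechua query is answered from a single document, while a language with no documents at all silently inherits the English corpus and its perspectives, which is exactly the dual failure mode the research points to.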
When developed responsibly, multilingual AI systems can strengthen communities' agency over the technologies that increasingly shape their lives. This includes taking a whole-of-society approach to ensure low-resource language speakers can benefit from: chatbots that produce context-relevant responses, automated customer service lines that recognize diverse accents, automated monitoring systems that can detect cybercrime reliably and government platforms that can process names and addresses spelled using different characters. Ushahidi has been documenting how low-resource language communities are often excluded from key development decisions, with the aim of improving processes for recourse, correction and reporting when AI systems fail. To this end, UNDP has taken a community-centred approach throughout its network of Local Language Accelerators, ensuring they remain locally-owned and globally-enabled. From Mexico and Namibia to Serbia, strategic choices – including decisions about data ownership and revenue- and data-sharing – are made by local language communities and key stakeholders. This ensures that the project reflects the needs and ambitions of local communities, countering the data extractivism that otherwise threatens development objectives.
Strengthening human security with inclusive AI safety frameworks and practices
There’s an opportunity at hand to close AI’s language gap, which is currently slowing adoption in critical domains of development impact. This challenge is compounded by the fact that most low-resource languages still cannot be systematically tested for safety. For AI systems to be meaningfully evaluated, the underlying language layer must contain enough information for prompts, red-teaming and safeguard assessments to function. Building safer multilingual AI therefore requires a phased approach: from basic script recognition and comprehension to instruction-following, contextual reasoning, understanding uncertainty and eventually developing full alignment and monitoring systems. Each step ensures that safety evaluation becomes possible in-language, rather than being inferred through translation or proxies.
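The phased approach above can be sketched as an evaluation ladder in which each stage only becomes testable once the previous one holds. The stage names, scores and threshold below are illustrative assumptions, not an established UNDP or Korea AI Safety Institute methodology:

```python
# Illustrative ladder for in-language safety evaluation: a language must
# pass each stage before the next stage's tests become meaningful.

STAGES = [
    "script_recognition",
    "comprehension",
    "instruction_following",
    "contextual_reasoning",
    "uncertainty_handling",
    "alignment_monitoring",
]

def highest_testable_stage(scores, threshold=0.7):
    """Return the last stage passed in order; later stages are skipped
    once an earlier one falls below the threshold."""
    passed = None
    for stage in STAGES:
        if scores.get(stage, 0.0) < threshold:
            break
        passed = stage
    return passed

# Example: a language with solid script support and comprehension but
# weak instruction-following cannot yet be red-teamed in-language.
scores = {"script_recognition": 0.95, "comprehension": 0.8,
          "instruction_following": 0.4}
print(highest_testable_stage(scores))  # prints "comprehension"
```

The design choice here mirrors the argument in the text: safeguard assessments and red-teaming presuppose the lower rungs, so evaluation results inferred through translation or proxies can overstate what is actually testable in-language.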
Improving testing and evaluation practices becomes crucial as AI systems intersect with sectors such as healthcare, financial inclusion and public administration. The risks posed by AI misuse for misinformation, disinformation, scams, fraud and gender-based violence are disproportionately amplified in low-resource languages. Organizations such as Tattle, Duco, Gender Rights in Tech, Ushahidi, TrustWeave and Migrasia are working within UNDP’s Trust and Safety Re-imagination Lab to mitigate these harms, leveraging emerging lessons and insights from real-world experience. As the technology evolves, AI assurance and alignment remain critical for AI applications across public and personal health, civic participation, crisis preparedness, law and other areas of human development and governance.
Charting the path to safer multilingual AI that benefits everyone
AI is a key accelerator of prosperity. Meaningful access to (and proactive agency over) new AI technologies will define opportunity for the coming generation, both individually and collectively. The question before us is clear: how can we ensure AI brings opportunity for everyone, everywhere – no matter the language people speak?
Equitable progress requires the whole of society, including meaningful action from AI developers, policymakers, researchers, international organizations and others who can bend the AI arc towards equity. One practical step towards this goal is to embed functional usability and multilingual safety evaluation into AI development cycles. This involves defining measurable thresholds for language support, building on promising work on multilingual benchmarks, mapping missing linguistic forms and investing in community-driven data pipelines, from civic text and dialogue to technical manuals. This would move languages from symbolic inclusion towards genuine functional parity.
AI safety is not merely a technical matter. Beyond mitigating technical failures and preventing harmful outputs, contextual safety is key: ensuring that systems respect local meaning, cultural nuance and social norms is essential to making sure they serve local populations. Focusing on technical improvements alone will not close the language gap; building safer multilingual AI sustainably requires well-coordinated, measurable and collaborative action that transforms both technology and society.
UNDP’s global network of Local Language Accelerators is seizing the opportunity to shape meaningful outcomes that benefit everyone, by addressing the risks associated with prevailing language gaps in data infrastructures and AI technologies, as well as strengthening local, inclusive, context-aware alternatives that advance human development. Working jointly with global, regional and local teams and partners, the projects support grassroots innovators working on public interest AI use-cases, address key challenges along the data to AI value chain, and help governments create the conditions required to nurture sustainable local AI ecosystems.
This article is the first in the blog series Closing the Language Gap in AI for Prosperity, which will highlight how UNDP and its partners are tackling language and safety disparities in AI and carving out opportunities to advance human development, building on the agenda of the AI Impact Summit 2026 in moving from pilots to sustainable, responsible adoption at scale. Get in touch with us if you have any feedback or would like to collaborate on this topic: digital@undp.org.