Where the AI arc bends
February 18, 2026
With proper governance, digital technologies, including artificial intelligence, can be powerful accelerators of human development on a healthy planet.
The arc of AI’s development is not defined by how quickly the technology advances or how capable it becomes—it is defined by who benefits as it does. That arc does not bend at the time of invention, but through deliberate choices about how, where and for whom artificial intelligence is used. It bends as AI diffuses into real systems and begins to shape how opportunities are distributed in practice.
UNDP believes digital technologies, including AI, can be powerful accelerators of human development on a healthy planet. AI can strengthen public service delivery, support faster crisis response, improve accessibility for people with disabilities, help governments use limited resources more effectively and create millions of new jobs through AI-enabled innovations.
But acceleration is not neutral.
In practice, we see that AI often reinforces existing conditions. Where institutions, infrastructure, data, skills and trust are strong, benefits can compound and spread. Where they are weak or fragmented, gaps can widen and gains can concentrate in the hands of a few.
This dynamic is what we increasingly understand as the AI Equity Gap. Earlier digital divides centred on disparities in access to connectivity, devices and digital skills, and those gaps remain significant across income levels, with 2.2 billion people still offline globally. But the AI Equity Gap reflects a further layer: the distance between the expanding availability and capability of these technologies and the equitable distribution of their benefits. It is shaped not only by who can access or use AI, but by whose needs are prioritized, whose risks are surfaced and whose voices can influence outcomes.
This perspective is reflected in UNDP’s 2025 Human Development Report, A Matter of Choice: People and Possibilities in the Age of AI, which emphasizes that AI outcomes are shaped by human agency, expressed through institutions, incentives and investment choices, rather than by technology alone.
What has changed is not the nature of those choices, but where they are now being made and how quickly they are taking effect.
Today, AI tools are not confined to specialists or well-resourced institutions. Large language models and other applications can now be deployed quickly with relatively low technical barriers, moving into everyday use faster than policy and governance frameworks can keep pace. As a result, decisions about who benefits from AI are increasingly being made within real institutional settings and systems, often ahead of formal oversight.
The AI equity question, therefore, is no longer only whether countries adopt AI. It is about how AI becomes embedded in practice, who shapes it, whose priorities it reflects and on what terms it scales. In many contexts, the risk is not exclusion from AI systems, but inclusion on unequal terms.
Understanding where these decisions sit is now central to determining who benefits from AI at scale.
On the ground, these choices are already taking shape in the practical questions countries are navigating as AI is integrated into real systems.
What we are seeing on the ground
Over the past year, UNDP has responded to growing demand for this kind of support across more than 50 countries, from Ghana and Liberia to Colombia, Vietnam, and Bosnia and Herzegovina, through a global AI Sprint bringing together governments, technical partners and local stakeholders to prioritize AI use cases, assess readiness and identify practical pathways for adoption. This has included:
AI Landscape Assessments that take a whole-of-society view to inform national strategies, governance choices and investment priorities;
Dedicated trust and safety support to anticipate and mitigate emerging risks as AI systems diffuse in and across countries;
Initiatives on low-resource language data to ensure AI systems and use cases reflect the linguistic and cultural diversity of local communities and empower, rather than hinder, collective agency;
Capacity-building for governments to strengthen readiness; and
Targeted support to small and vulnerable states facing distinct capacity and market constraints.
Across these varied contexts, we see not a single model of AI adoption, but a shared pattern in how AI diffuses.
Much of this adoption is embedded rather than bespoke. AI rarely arrives as a standalone system, clearly identified as such. It diffuses through platforms, vendor services and digital tools that public institutions already use (systems for benefits processing, service triage, case review and everyday decision-making), often without being recognized as "AI" at all. As AI systems are absorbed into existing platforms, they often inherit dominant languages, data and design assumptions, resulting in inclusion on unequal terms unless deliberate adaptation occurs.
This is how AI diffuses in practice. Not primarily through national AI strategies, but through everyday procurement, platform and operational decisions made across public and private institutions. As a result, AI adoption is rarely a single, deliberate national decision. Instead, it is an accumulation of choices made across people, institutions and partnerships. The result is a process that is incremental, uneven and often difficult to see, yet its effects compound quietly, shaping how AI interacts with public systems and how people experience them.
How these accumulated choices shape AI outcomes varies widely.
AI adoption risks exacerbating existing inequalities. More than two billion people still lack access to the internet, most of them in low- and middle-income countries.
In some contexts, AI use is guided by clear priorities and coordination. It is supported by skills development, aligned with reforms and embedded within existing accountability structures. In these settings, AI can complement institutions and systems, improving access, effectiveness and inclusion.
For example, AI has been integrated into existing public advisory systems to support high-stakes decision-making where demand, data, strong coordination, and infrastructure were already in place. In one widely documented case, AI-enabled forecasting was incorporated into long-standing agricultural advisory platforms, allowing updated guidance to be delivered through widely trusted public channels. The impact was not from the model alone, but from coordination across scientific institutions, government agencies, delivery platforms and human-centred communication systems that had been built over time.
In other contexts, AI use is more fragmented because adoption is unfolding within weaker systems. Where coordination is limited, data governance is weak, or public services are already under significant pressure, AI often enters through disconnected pathways: vendor platforms or tools brought in to cope with workload pressures, small pilots that never connect to core systems, or informal experimentation by individuals that struggles to achieve broader impact.
In these settings, AI can spread faster than institutions can adapt. Experience from UNDP’s trust and safety programming shows how gaps in capacity and accountability become visible at this point. AI-enabled screening or prioritization tools—for example, systems used to triage public service requests, reduce case backlogs, or assess financial risk—may deliver short-term efficiency gains, but without full transparency or oversight, people often have little understanding of how decisions are made or how to challenge them. Responsibility becomes unclear, ways to correct mistakes are limited, and trust is harder to maintain.
When AI is introduced into weaker or fragmented systems, it does not remain neutral. It absorbs and can amplify those weaknesses. Where and how AI is introduced is therefore not incidental. It is a development choice.
Where impact is actually decided: AI adoption as a development choice
While much of the global AI conversation remains focused on the rapid pace of innovation, in many of the countries UNDP partners with, the more consequential questions lie elsewhere: in how AI is implemented, used and integrated into real systems that can scale beyond pilots.
In practice, UNDP sees AI adoption not as a single decision, but as a set of development choices made over time. These include how services are designed and delivered; whether AI supports human judgement or substitutes for it; who can override automated decisions; how errors are identified and corrected; and who is responsible when systems cause harm. Together, these choices shape who is served, who is excluded and whose voices count when systems fail.
Control over data and infrastructure is central to these choices. Decisions about how data is generated, shared, retained and reused determine whether institutions can understand how AI systems deliver impact, intervene when problems arise and improve performance over time. While reliance on external providers can bring speed and technical expertise, it also shapes who can adapt systems, on what terms, and with what accountability and accuracy.
Practical safeguards further determine whether AI-enabled systems are trusted in everyday use. Testing, transparency, escalation pathways and recall mechanisms are not peripheral concerns; they are the conditions that allow institutions to deploy AI responsibly at scale.
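To make these safeguards concrete, the sketch below shows one way they can surface in software: a hypothetical triage service in which low-confidence cases escalate to a human reviewer, every outcome is written to an audit log, overrides are recorded rather than hidden, and decisions made by a faulty model version can be recalled for re-review. The names, thresholds and structure are illustrative assumptions for this post, not a description of any system UNDP deploys.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    case_id: str
    outcome: str          # e.g. "approve", "deny", "needs_review"
    score: float          # model confidence in the outcome
    model_version: str
    decided_by: str       # "model", "pending-human" or a reviewer ID
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class TriageService:
    """Illustrative human-in-the-loop triage: the model proposes,
    people retain the final word, and nothing happens silently."""

    def __init__(self, escalation_threshold: float = 0.85):
        self.escalation_threshold = escalation_threshold
        self.audit_log: list[Decision] = []

    def decide(self, case_id: str, outcome: str, score: float,
               model_version: str) -> Decision:
        # Escalation pathway: below the threshold, a person decides.
        if score < self.escalation_threshold:
            outcome, decided_by = "needs_review", "pending-human"
        else:
            decided_by = "model"
        decision = Decision(case_id, outcome, score, model_version, decided_by)
        self.audit_log.append(decision)  # transparency: every outcome is logged
        return decision

    def override(self, case_id: str, new_outcome: str, reviewer: str) -> Decision:
        # Human override: appended as a new record, never an edit in place,
        # so the history of who decided what remains reconstructable.
        decision = Decision(case_id, new_outcome, 1.0, "human-override", reviewer)
        self.audit_log.append(decision)
        return decision

    def recall(self, model_version: str) -> list[Decision]:
        # Recall mechanism: list every automated decision a flawed model
        # version produced, so each case can be re-reviewed.
        return [d for d in self.audit_log
                if d.model_version == model_version and d.decided_by == "model"]
```

The specifics would differ in any real deployment. The point is that escalation, override and recall are ordinary engineering artifacts once they are designed in from the start, rather than governance ideals bolted on afterwards.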
"UNDP sees AI adoption not as a single decision, but as a set of development choices made over time."
Across public service delivery, social protection, health, education, crisis response and environmental management, from small island developing states to large middle-income economies, UNDP sees similar AI technologies produce very different outcomes. The difference tends to lie not in the tools themselves, but in how they are embedded within institutions, workflows and accountability structures, and in whether adopters have the capacity and confidence to use them over time.
This middle layer, between innovation and delivery, is where AI adoption often slows, risks accumulate, and impact is either consolidated or lost. It is also where the AI Equity Gap is widened or narrowed in practice.
This focus on adoption as a development choice aligns closely with a growing body of practitioner and policy thinking, including the work emerging from the India AI Impact Summit 2026. Building on earlier summits' focus on AI safety and action, the 2026 Summit turns attention toward what ultimately determines impact: the pathways that allow AI to move from pilots into safe and scalable use within real systems.
That same logic underpins our call to action with partner People + ai at the EkStep Foundation for AI Diffusion Infrastructure (ADI): 100 Pathways by 2035 as an approach for safe AI impact at scale for the next billion users. These pathways draw on lessons from more than 25 country experiences and are operationalized through the Use Case Adoption Framework, which links vertical sectors where value is created, such as health, agriculture and education, with horizontal enablers including data, talent, compute, safety and multilingual capabilities. The framework focuses on sequencing efforts, reducing risk and developing reusable assets, such as playbooks, data commons, standards and assurance practices, that can be applied across contexts.
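As a rough sketch of how such a framework can be made operational, the snippet below represents the vertical/horizontal structure as data: each use-case pathway names the sector where value is created and the horizontal enablers it depends on, so gaps become visible before efforts are sequenced. Only the sector and enabler names come from the framework as described above; the pathway, the country context and the code itself are hypothetical illustrations.

```python
# Vertical sectors where value is created, and horizontal enablers,
# as named in the Use Case Adoption Framework described above.
VERTICAL_SECTORS = {"health", "agriculture", "education"}
HORIZONTAL_ENABLERS = {"data", "talent", "compute", "safety", "multilingual"}

def missing_enablers(pathway: dict, available: set[str]) -> set[str]:
    """Return the enablers a pathway needs but the context does not yet have."""
    return set(pathway["needs"]) - available

# A hypothetical pathway: AI-assisted agricultural advisories.
crop_advisory = {
    "sector": "agriculture",
    "needs": {"data", "multilingual", "safety"},
}

# A hypothetical country context with uneven foundations in place.
in_place = {"data", "compute"}

print(missing_enablers(crop_advisory, in_place))
# -> {'multilingual', 'safety'}: sequencing work would start here.
```

Reusable assets such as playbooks, data commons and standards play the same role at institutional scale: they make the crossing points between sectors and enablers explicit enough to act on.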
Across this work, the emphasis is on building repeatable pathways that move AI from pilots into sustained, population-scale use. Impact depends less on isolated interventions and more on the shared institutional and operational capacities that allow AI to function reliably and responsibly in everyday settings. This reflects a broader shift from asking what AI can do to understanding the conditions that enable safe and trusted use over time.
The conditions that turn AI use into impact
As AI becomes embedded within public and development systems, UNDP’s experience across countries points to a small set of practical conditions that shape whether it delivers sustained improvements in people’s lives.
First, foundations matter. AI delivers lasting benefits when it is built on digital and data infrastructure that allows systems to function reliably and improve over time. Where these foundations are weak or fragmented, AI systems are harder to manage, harder to correct and more likely to entrench exclusion once deployed.
This is particularly visible in the interaction between AI and digital public infrastructure (DPI), where AI is increasingly layered onto systems that already underpin public service delivery. Countries are not sequencing DPI first and AI later; they are often confronting both in parallel. Core DPI components such as digital identity, data exchange, payments and registries are already woven into everyday public services. As of 2025, national DPI systems are operational in nearly half of the world's countries. And as AI becomes embedded within these systems, governance, safeguards, institutional capacity, and core DPI principles such as openness and interoperability become increasingly important.
Second, integration into real services matters more than isolated pilots. AI changes outcomes when it is absorbed into how institutions actually operate—when workflows are redesigned, roles and responsibilities are clarified, and staff are supported to use AI as part of day-to-day decision-making. Where AI is added on without changes to processes, skills, or incentives, it often remains marginal, parallel, or underused, regardless of technical performance.
Third, trust matters in practice. AI-enabled systems are used and relied upon when people can understand how decisions affect them and when there are clear ways to raise concerns and address mistakes. Insights from the 2025 UNDP Global Survey on AI and Human Development—one of the broadest surveys of public attitudes on AI, covering over 21,000 respondents in 21 countries and representing 63 percent of the global population—show that confidence in government use of AI varies widely across contexts, with people’s views closely tied to whether AI systems are perceived as fair, transparent and accountable in practice. Without mechanisms for explanation and recourse, even technically sound systems struggle to deliver results or sustain adoption over time.
Finally, how governments and the private sector work together shapes outcomes. Most AI systems are developed, maintained, and updated across borders and sectors, from cloud infrastructure to foundation models and sector-specific applications. Development outcomes depend on how governments, private actors and other partners work together to adapt AI to local contexts and embed it within public systems in ways that align innovation with public purpose. When incentives are aligned, private-sector innovation can accelerate delivery, scale and learning. When they are not, dependency and fragmentation can deepen.
These conditions help explain why AI impact is not automatic. It emerges through development choices that shape how AI is introduced and sustained within the systems people rely on. This, in turn, raises a practical question for development actors: how to support countries in making those choices.
The decisions governments and industry leaders make will determine who benefits from artificial intelligence.
What this means for UNDP—and why this moment matters
AI’s development impact is being shaped far from the frontier of innovation. It is taking form in the systems where AI is absorbed into everyday use—through decisions about infrastructure, skills, safeguards, accountability, and who has the ability to influence how systems evolve once adopted or deployed.
This is where the AI Equity Gap is now widening or narrowing. Not only through access to AI, but through how benefits and risks are distributed as AI becomes embedded in public services, markets and decision-making systems.
For UNDP, this sits squarely within our core mandate under the Strategic Plan 2026-2029: expanding opportunity and choice across prosperity, effective governance, crisis resilience and a healthy planet. AI contributes to these goals when it is absorbed into institutions and systems in ways people can understand, trust and influence in practice. When AI is introduced without those conditions, it can just as easily reduce agency, deepen dependency, or concentrate gains away from the people it is meant to serve.
This is why UNDP’s AI programming concentrates on the foundations that determine how AI is actually used. Through the AI Hub for Sustainable Development, UNDP works with governments and private-sector partners to strengthen data, compute, talent and enabling ecosystems, particularly where these foundations remain uneven. Through a focus on digital public infrastructure, data governance, trust and safety, and capacity building, UNDP supports systems that can adapt, correct and remain accountable as AI scales.
Extending this work, UNDP and partners are advancing a commitment to help co-create 100 diffusion pathways by 2035. AI does not scale through a single breakthrough or in one place. It scales through many context-specific pathways across sectors and countries, where adoption succeeds or fails based on real conditions on the ground. These pathways focus less on models or pilots and more on the practical and often difficult work of embedding AI into real systems. The emphasis is on using AI in ways that are governed, trusted and able to endure over time, so innovation serves public needs and institutions can sustain it.
UNDP’s role is not to advance AI as an end in itself, but to ensure that as AI becomes embedded in development systems, it expands human possibilities rather than narrowing them. The arc of AI’s development and impact is shaped not at the moment a model is built, but through the sustained institutional and operational work that determines how AI functions in practice, how problems are identified and addressed over time and who ultimately benefits. That is where the AI equity question now sits, and where UNDP is focused.