When AI governs, who guards our rights?
April 1, 2026
AI is moving fast. Governance is not.
Across the region, digital transformation is advancing at speed. Governments are pursuing “digital first” agendas, rolling out e-government services and embedding AI into core public functions—from Albania’s creation of a dedicated Minister for AI to “Smart City” surveillance networks in Serbia and Kazakhstan.
The same AI boom is visible in healthcare, credit scoring, welfare delivery, policing, border management and education. These initiatives promise efficiency, economic growth and modernised governance. Yet they unfold in a fragile human rights environment, where violations of civic space, privacy, equality and non‑discrimination persist, and where regulation and oversight have not kept pace.
Added to this are structural vulnerabilities: dependence on foreign technologies, limited local capacity to develop or audit AI systems and poor representation of local languages and cultures in global training data. Large language models trained on worldwide content often barely reflect the realities of many communities in the region, yet they increasingly shape decisions that affect their lives. These are not merely technical shortcomings—they are human rights challenges.
The stakes are concrete — and the harms are already here.
AI is reshaping how consequential decisions get made—not in the abstract, but in the concrete systems that govern people's daily lives: who receives social benefits, who gets flagged by predictive policing, whose rental application is rejected before a human ever sees it.
The risks are real and already unfolding. In the Netherlands, tax authorities deployed an AI-powered fraud detection system that quietly targeted families in low-income neighborhoods, wrongly branding tens of thousands — many of them immigrants — as benefit cheats. The human cost was devastating: lost homes, crushing debt, broken families. Courts ultimately struck the system down for violating fundamental human rights. In Serbia, the government's Social Card system runs welfare applicants through an automated algorithm that processes 130 data points to decide who qualifies for support — and who doesn't. Roma communities and other marginalized groups have been hit hardest, with thousands losing benefits they depend on to survive.
These are not isolated incidents. They are early warnings of what happens when powerful automated systems operate without adequate transparency, accountability, or human oversight.
Reactive frameworks are not enough.
Human rights risks are harder to manage than many AI security risks. Existing national and international frameworks tend to protect rights reactively—through courts and complaints after harm occurs—or at a very general policy level. What is needed instead is a proactive, system-specific approach that examines each AI system on its own terms and places potentially affected rights at the core of design and deployment from the outset; in other words, “human rights by design”.
Potential negative consequences for rights should be anticipated and prevented through system architecture, data governance and institutional safeguards—not managed only after violations occur. We already have guides outlining how businesses should integrate human rights into their operations through human rights due diligence, but in practice these are usually implemented at the enterprise or sector level, not at the level of specific AI tools. Public bodies, now extensive users of AI for critical decisions, must also ensure that their systems comply with their human rights obligations. Rights-based AI governance therefore requires more concrete, operational tools.
A growing regulatory consensus.
This is precisely the direction taken by the first major regulatory initiatives on AI. The EU Artificial Intelligence Act and the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law both introduce explicit requirements for prior assessment of potential impacts on human and fundamental rights. The UN AI Advisory Body’s final report likewise emphasises prior, systematic assessment of rights impacts as a core element of AI governance.
These developments point to a growing convergence: Human Rights Impact Assessment (HRIA) is emerging as a necessary instrument for managing human rights risks in AI, complementing and operationalising broader due diligence frameworks. The recent landmark verdicts finding Meta and YouTube liable for their addictive algorithmic design demonstrate precisely why human rights impact assessments are a necessary safeguard—not a retrospective remedy.
What good practice looks like.
Several models for AI-specific HRIA have already been proposed and piloted in Croatia and Spain, including in projects related to credit scoring, social protection targeting, policing technologies and smart city infrastructures. These exercises have helped surface concrete risks—discrimination, exclusion or disproportionate surveillance—before systems become entrenched.
For example, the Catalan Data Protection Authority recently evaluated an AI system that uses medical images to predict how patients with cancer, a disease associated with high rates of overtreatment and serious side effects, will respond to chemotherapy. After a methodical assessment, all patients were identified as vulnerable by definition and three rights were flagged: health, non-discrimination and privacy.
The most urgent risk emerged from the training data itself: drawn predominantly from European healthcare facilities, the algorithm risked performing less accurately for non-European patients, since differences in medical imaging across ethnic groups can affect diagnostic precision. Mitigation translated these findings into concrete action: expand the training dataset to adequately represent non-European patients, clearly communicate the tool’s limitations to healthcare professionals, and disclose how training data was obtained and processed.
This is what responsible AI in healthcare looks like—not the absence of risk, but an honest reckoning with it, and a commitment to keep asking questions even after deployment begins.
In Croatia, the Personal Data Protection Agency, appointed as a fundamental rights body under the EU AI Act, was approached by a school wanting to deploy an LLM-based system to personalize learning. The system would draw on student grades, behaviour, learning styles, and, for students with disabilities, health data. Six fundamental rights were identified as being at stake — including human dignity, non-discrimination and the right to education. Risk analysis flagged that inaccurate student profiles could implicitly label children as less capable, reduce them to data points and amplify bias related to disability or socio-economic background. Mitigations were considered, but after weighing the full picture, the school concluded the potential harm outweighed the benefits and chose not to proceed. Sometimes an impact assessment doesn’t ask how to make this work. It asks whether we should do this at all.
A practical resource: the Human Rights Impact of AI Assessment Toolkit
Good practice exists — but it remains fragmented. Methodologies vary, and many institutions still lack a clear, tested template to follow. That is the gap UNDP's Human Rights Impact of AI Assessment (HRIA) Toolkit is designed to fill.
Developed by the team at UNDP's Istanbul Regional Hub, with inputs from country offices and regional partners across Europe and Central Asia, the Toolkit draws on the lessons of existing models to give anyone involved in AI — whether in government, business or civil society — a clear, structured way to ask: who could be harmed by this system, and what can we do about it?
The Toolkit walks users through five practical steps: planning and scoping; impact identification; risk analysis and prioritisation; risk management and mitigation; and monitoring and reporting. A readiness questionnaire, templates and targeted learning resources support users throughout.
It does not require a legal or technical background. It is designed to work as well for a small municipality piloting a social services algorithm as for a large tech company deploying AI at scale. No expensive consultants. No compliance jargon. Just a lean, rigorous framework that places the people most likely to be affected by AI at the centre of how it is designed and deployed.
UNDP is open to consultation requests from teams using the Toolkit. Reach the team at www.hria.eu/contact
Download the toolkit | Interactive digital version (currently in beta)