Responsible AI: Enhancing Tools and Skills
August 4, 2025
Shaping AI Development for Collective Human Good
AI technology and its use cases continue to evolve at an extraordinary pace. It is no longer a question of if AI will shape our world, but how we shape AI to advance sustainable development for collective human well-being.
This vision is echoed in the Global Digital Compact, which calls for inclusive and rights-based governance of emerging technologies, as well as in the more recent Africa Declaration on AI and the Hamburg Declaration on Responsible AI, both reaffirming commitments to shaping AI for sustainable development. More than 70 countries and territories have also put forward national AI strategies, including Singapore’s National AI Strategy 2.0 championing “AI for the Public Good, for Singapore and the World.”
Human agency in shaping AI was the central theme of UNDP’s Human Development Report (HDR) 2025, which culminated from extensive global consultations, particularly in three key areas:
- Building a complementarity economy that promotes AI–human collaboration;
- Driving innovation with intent, focusing AI development on public and social goals, not just commercial ones; and
- Investing in capabilities that count, ensuring digital skills and AI benefits are equitably shared across societies.
It also underpins UNDP’s AI Trust and Safety Re-imagination Programme inviting experts and innovators across public and private sectors to collaborate in addressing AI’s rising challenges.
UNDP Global Centre, Singapore co-hosted the Asia Pacific consultation together with the HDR Office, convening regional experts on AI and human development.
Deploying Tools for Trusted AI Adoption
As governments seek to unlock the vast potential for public sector transformation, especially with generative AI, to deliver public services faster, better, and in more personalised ways, the complexity of responsible AI deployment grows alongside exponentially.
Each stage of the AI application lifecycle poses distinct governance challenges that public agencies must confront. The concerns are not abstract. From civic chatbots to policy assistants, real-world applications increasingly interact with citizens in sensitive contexts, thereby raising importance to ensure accountability, safety, and public trust.
As the lead agency driving Singapore’s Smart Nation and Digital Government efforts, the Government Technology Agency of Singapore (GovTech Singapore) is committed to ensuring that digital transformation is underpinned by trust, ethics, and public interest. To this end, GovTech Singapore has developed two complementary tools under its AI Guardian initiative to help public agencies adopt AI in a responsible and scalable way:
- Litmus serves as a testing-as-a-service platform, allowing teams to assess safety and security risks before deployment.
- Sentinel provides guardrails-as-a-service, enabling real-time content moderation and output filtering once applications go live.
Together, these tools offer public agencies a practical approach to embed AI governance across the entire lifecycle as a continuous, integrated process.
Illustration of how Litmus and Sentinel help Singapore’s public sector operationalise responsible AI from design to deployment to maintenance.
As part of GovTech Singapore’s mission to develop resilient and citizen-centric digital services, Litmus helps public agencies evaluate the safety and reliability of AI applications before they go live. Just like we stress-test bridges before opening them to traffic, Litmus stress-tests generative AI systems, such as virtual assistants or citizen-facing chatbots. It checks for issues like reliability, bias, or unsafe responses. Testing is integrated into development pipelines and generates actionable reports so teams can identify and resolve risks early in the process.
What makes Litmus especially useful is that it evaluates both the base AI models and the applications built on top of them. This allows public service teams to identify risks that may only surface during specific service interactions or conversational flows. Litmus fits directly into existing software development workflows and supports teams in translating complex model behaviour into governance-ready insights, helping both engineers and decision-makers determine whether an AI tool is safe for real-world deployment.
Simple representation of Litmus’ process of enabling teams to conduct safety and performance testing across AI models and downstream applications before deployment.
Complementing Litmus is Sentinel that provides real-time content moderation and output filtering when AI applications are live. Sentinel integrates directly into public sector tools such as chatbots and digital assistants to flag, block, or modify harmful responses before they reach users. It uses a configurable set of filters to evaluate both incoming prompts and outgoing model responses, supporting safe and trustworthy use of AI in sensitive environments.
Litmus and Sentinel are part of GovTech Singapore’s broader commitment to build a trusted, inclusive, and effective digital government. These tools are currently being piloted across a range of public sector use cases, helping agencies manage AI risk without slowing down innovation.
They offer a useful reference for other countries developing similar tools that are adapted to their own contexts and needs. To learn more, please reach out to the GovTech Singapore team at AIGuardian@tech.gov.sg, and the team welcomes opportunities for knowledge exchange and collaboration with international governments and public sector innovators.
As part of Singapore’s contribution to the UN Global Digital Compact, Singapore will include Litmus and Sentinel, among a broader suite of AI resources, on an open platform for policymakers around the world to build on them, adapt them to their own contexts, and advance the responsible and effective adoption of AI around the world.
Raising AI Skills and Capacities for Shared Benefits
At the same time, disparities persist in the capacities across regions and countries, hindering those who stand to gain the most from harnessing the potential of AI. UNDP is working across 170 countries to help close these divides, including through partnerships like the AI Hub for Sustainable Development, co-designed with the Government of Italy and endorsed by the G7, which supports efforts to support partner countries in Africa in areas such as data ecosystems, green compute, regulatory readiness, and skilled talent.
The AI AskHub platform connects African innovators with the tools to build AI at scale, it guides users to resources, answers and the latest AI programmes. Eligible candidates will be guided by the AskHub to two different programmes, the Compute Accelerator Programme and the AI Infrastructure Builder Programme.
Programmes under the AI AskHub platform.
AI skills and capabilities are important to enable development opportunities, and to mitigate risks of deepening inequalities, biases, and exclusion of marginalised groups. UNDP’s Digital Capacity Lab has been raising digital capacities through hands-on trainings with governments worldwide to create impactful and citizen-centric digital solutions. This includes the inaugural “Leadership and Governance in this Era of Digital Technologies” executive programme between UNDP and the Singapore Ministry of Foreign Affairs (MFA). The Digital Capacity Lab has also developed a specialized AI for Government program that further supports officials to develop a comprehensive understanding of AI’s role in public administration, as a regulator, a user, and an enabler.
Inaugural “Leadership and Governance in this Era of Digital Technologies” executive programme in June 2025, co-organised by Singapore MFA and UNDP, under the FOSS for Good initiative of the Singapore Cooperation Programme, featured senior officials from 17 member states of the Forum of Small States.
UNDP has also recently concluded a Memorandum of Understanding (MOU) with AI Singapore to joint expertise and resources in advancing AI literacy in the Global South. The collaboration seeks to advance five key areas:
- AI Literacy Courses: Tailor courses to regional needs, with a focus on accessibility and local relevance.
- Train-the-Trainer Programs: Equip educators with the necessary skills and knowledge to teach AI, including the creation of teaching toolkits and curricula.
- Outreach Initiatives: Engage underrepresented groups through targeted outreach programs to ensure broad participation.
- AI Ethics Campaigns: Conduct workshops and roundtables to discuss and disseminate best practices and ethical guidelines for AI.
- Knowledge Centers: Collaborate with local institutions and governments to create centers of excellence for AI education and research.
We welcome partners interested to support this work in scaling AI literacy for collective benefits to reach out to the UNDP Global Centre, Singapore at registry.sg@undp.org.
MOU Signing Ceremony between AI Singapore and UNDP on 29 May 2025.