
Can AI Make Aid More Flexible in Sub-Saharan Africa, Or Just Automate Old Rigidities?

  • Zeynep (Azra) Koc
  • Dec 21, 2025
  • 4 min read
Source: Unsplash

There is a tendency to hail AI as the shiny new fix for most aspects of life, and aid delivery is not immune. In the not-so-distant past, aid agency staff in dusty offices in Lomé, Togo, and in bustling UNICEF data hubs in Nairobi, Kenya, pored over spreadsheets to decide which households should receive emergency cash transfers, until a new technological experiment came along with the potential to change all of this.


So, when the pandemic hit, an algorithm was helping them make those decisions within weeks. A Togolese government scheme called Novissi used mobile phone records and satellite imagery to identify individuals likely to be poor, with payments deposited directly into basic mobile wallets. Over half a million people received support in a matter of weeks, and policy briefs from research groups at Berkeley and MIT praised the programme as a model of data-driven crisis response.


Across sub-Saharan Africa, international donors now hope that Artificial Intelligence (AI) can fix a familiar complaint about contemporary aid: that it is too rigid, too slow, and too far removed from daily realities on the ground.


For instance, UNICEF’s U‑Report platform, featured in press releases by the African Union (AU) and UN news outlets, invites young people to text their views on education, climate and governance from even the most basic phones, with answers popping up on slick dashboards in ministry offices. Together, these experiments promise a future where humanitarian decisions are driven by real‑time data and beneficiaries' feedback, rather than patchy reports and tired project cycles.


The question is whether this really shifts power or simply automates an old way of doing things.

 

Listening at Scale and the Limits of the Digital Voice


In Uganda, Nigeria and other countries, U-Report polls have been used to gather youth views on school closures during COVID-19, vaccine confidence and violence, often at an impressive scale and speed. Internal reports and partnership documents highlight examples where poll results informed communication campaigns or helped officials adjust messaging and outreach.


Despite the immense potential benefits of harnessing the digital “voice”, the U-Report case highlights that deeper shifts, such as reorienting budgets, altering programme logics, and redistributing decision-making power, remain rare, constrained by pre-existing funding cycles, accountability requirements and political risk calculations. AI-enhanced listening, in other words, does not automatically translate into flexible action when incentives still point upwards to donors and headquarters, rather than downwards to communities.


The leap from listening to changing course remains limited. Poll findings tend to inform communication drives rather than hard decisions about where money goes or how programmes are designed. This kind of innovation risks sliding into humanitarian novelty-seeking rather than addressing the inequalities embedded in the system. U‑Report may widen the funnel of feedback, but the real bottleneck is higher up, where donors, technologists and senior officials still control what counts as meaningful input.

 

Faster Decisions, Uneven Reach


The promise of “real-time” AI aid runs through donor and industry narratives, where AI appears as a technical fix for long-criticised aid rigidities. In Togo, researchers and the government combined satellite imagery and mobile phone metadata to model poverty and target households for cash transfers. Evaluations found that this approach reduced exclusion errors by 4-21% compared with simple geographic targeting. Such successes present a tempting paradigm shift in which AI can finally allow aid to keep pace with fast‑moving crises and shifting local needs.
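To make the logic of this kind of targeting concrete, here is a minimal sketch of a proxy-based approach on synthetic data: phone-metadata features stand in for household consumption, the poorest share by predicted consumption is targeted, and the exclusion error is measured against the "true" poorest. Everything below (the feature names, the 30% budget cut-off, the simple ridge model) is an illustrative assumption, not the actual Novissi pipeline.

```python
# Hypothetical sketch of consumption-proxy targeting on synthetic data.
# Feature names, thresholds and the model choice are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "true" consumption and two stand-in phone-metadata features
# (e.g. call volume, mobile-money activity), noisily correlated with it.
consumption = rng.lognormal(mean=0.0, sigma=0.6, size=n)
calls = 0.5 * consumption + rng.normal(0, 0.3, n)
momo = 0.3 * consumption + rng.normal(0, 0.3, n)
X = np.column_stack([calls, momo])

# Fit a simple proxy model on a small "survey" subsample, then predict
# consumption for everyone else from their metadata alone.
survey = rng.choice(n, size=500, replace=False)
model = Ridge().fit(X[survey], consumption[survey])
predicted = model.predict(X)

# Target the poorest 30% by predicted consumption.
budget = int(0.3 * n)
targeted = set(np.argsort(predicted)[:budget])

# Exclusion error: share of the truly poorest 30% who are NOT reached.
truly_poor = set(np.argsort(consumption)[:budget])
exclusion_error = len(truly_poor - targeted) / budget
print(f"Exclusion error under the proxy model: {exclusion_error:.1%}")
```

The sketch also makes the limitation in the next paragraph visible: anyone who does not generate phone metadata at all simply has no row in the data, and no model, however accurate, can rank them.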


Innovations like Novissi were hailed as groundbreaking in aid delivery. However, the limits of their accuracy cannot go unmentioned. Phone sharing in Togo demonstrated how access to a handset determines who receives transfers and who does not: those without phones, typically women, the elderly and the poorest, were far more likely to be invisible to the system.


The architecture of data and decision-making is controlled largely by telecommunications operators, international research teams and external donors, rather than by the people whose lives are being modelled, so digital cash transfers reveal deeper problems with AI-driven policy decisions. Although the method appears more flexible on paper, it risks embedding old exclusions into code.

 

Innovation without Redistribution?


Both U‑Report and Novissi sit under the broader banner of “AI for Social Good” (AI4SG), a slogan now common in tech company announcements, UN forums and philanthropic initiatives. Nature has reported on AI4SG projects that use algorithms to spot crop disease, map poverty or improve disaster response, often showcasing Africa as a testing ground for these tools.


There is a warning embedded in this “innovation turn”. The sector’s enthusiasm for gadgets and prototypes can erode face‑to‑face relationships and consolidate power in expert networks. In this light, AI for social good looks less like a radical break and more like a new interface for an old ideology: efficiency and control first, politics and redistribution later.


If AI is to make aid genuinely adaptable, rather than merely more efficient, the most difficult adjustments are not technical. The true frontier these agencies must break through is governance: who designs these systems, who sets the rules for data usage, and who can contest the results when they go wrong.


African digital rights campaigners and policy voices emphasise that local actors must have a voice in identifying the challenges in the first place, from what constitutes ‘poverty’ in their culture to what kinds of solidarity are just and desirable.


This would require donors to treat tools like AI targeting models and U-Report as more than real-time dashboards. The data they generate should feed into rewriting logframes, altering funding envelopes, and transferring agenda-setting authority to those currently referred to as ‘beneficiaries’.


Whether AI loosens or tightens existing rigidities will depend less on how intelligent the models become and more on whether international institutions are prepared to devolve authority locally. Until then, the hum of the algorithm may be new, but the rhythm of the system will feel eerily familiar.


Written by Zeynep (Azra) Koc

Edited by Ruth Otim


