Buyer beware: how AI is infiltrating humanitarian aid operations
Access Now’s latest research unpacks how AI tools are dodging procurement vetting to infiltrate humanitarian organisations, creating new risks for them and the communities they serve.
When used in humanitarian aid operations, AI systems that carry prevalent biases, security risks, and consent issues can leave local communities even more vulnerable during a crisis. Yet even as debate rages over whether humanitarian organisations should be using AI in the first place, the sector’s use of this technology keeps growing. In our latest report, Reinventing humanitarian aid procurement for the age of AI, we examine the specific challenges that arise when humanitarian organisations adopt algorithmic tools in sometimes haphazard or unpremeditated ways, and suggest a roadmap for adjusting IT, cybersecurity, procurement, legal, and digital protection governance accordingly. To this end, we also provide, as an appendix to the report, a framework to help transform digital procurement from a transactional business function into a strategic one.
As we show in our research, much of the aid sector’s adoption of AI is being driven, on the one hand, by individual aid workers using large language models for their daily tasks or humanitarian NGOs turning to ‘smart’ chatbots to compensate for access restrictions and funding woes, and on the other, by tech companies pushing enhanced, AI-based features within existing products and services.
This is creating new, sometimes invisible, risks for which organisations are unprepared. When a U.S. nonprofit’s chatbot went off-script (following a product update that activated unexpected AI features), vulnerable users were suddenly barraged with misleading, even harmful responses. Now imagine, if you will, the consequences of the same thing happening to a humanitarian chatbot used by people seeking lifesaving information in a crisis or conflict setting.
In the algorithmic era, the same digital tools that are supposed to improve people’s lives may instead weave a web of autocratic surveillance, contribute to the unlawful targeting of civilians in conflict, and automate life-or-death decision-making with neither meaningful human oversight nor remedy. As with any emerging technology, the irresponsible adoption of AI risks undermining humanitarian actors’ ability to honour the two non-negotiable cornerstones of humanitarian action: the humanitarian principles and the rule of ‘do no harm’.
Mapping AI’s uptake in aid
Building on our previous work mapping the often-opaque partnerships between humanitarian actors and private tech corporations, our latest report shows how algorithmic systems are entering the humanitarian space mostly through the backdoor, and often without safeguards. Extensive desk research and more than 70 interviews with actors from across the humanitarian, public, academic, private, and social impact sectors reveal a tendency towards haphazard integration, rather than deliberate institutional adoption, of cloud-based, algorithm-enhanced functionalities and processes, which are slowly but progressively seeping into most internal systems.
This form of corporate capture is not new. Following the “cloud-first” playbook, it traps even the largest and wealthiest international organisations into digital and financial dependency. And while Global Majority aid workers and grassroots organisations are at the forefront of experimenting with AI adoption, a lack of adequate legal frameworks and safeguards, alongside lagging investments in humanitarian funding for ICT infrastructure, are thwarting widely touted claims that AI will democratise humanitarian efforts by empowering local actors.
On the contrary, our research shows how the progressive cloudification of most common digital systems and the fast-paced, distributed nature of modern digital development are only deepening the existing digital divides between large organisations with privileged, on-call access to Big Tech, and smaller frontline organisations and communities struggling to protect their principled approach, and their own rights, in the digital age.
What to do about it
While organisation-led digital development based on open source systems and co-development with trusted external providers appears to be the best way to minimise the risks associated with algorithmic systems, we know this isn’t financially or technically feasible for most humanitarian actors. To that end, the appendix to our report provides the foundations for a digital service framework in the algorithmic age, offering a suggested governance model to help humanitarian organisations acquire digital products and services, including algorithmic capabilities, with human rights in mind. Key recommendations featured in the report include:
- Asking donors to promote changes in tech governance processes and mechanisms aimed at reforming humanitarian tech procurement;
- Calling on regulators to strengthen transparency and due diligence requirements across the whole supply chain for all algorithmic tools;
- Inviting the humanitarian community to underpin their digital transformation efforts with a governance framework that bridges procurement, cybersecurity, protection, and human rights protections;
- Requesting that tech companies re-invest in internal human rights and humanitarian expertise, to help re-establish trust-building mechanisms with humanitarians and local communities;
- Inviting local actors and communities to champion local tech solutions and run algorithmic accountability audits on proposed systems; and
- Asking research groups and cyber experts to analyse tech adoption specifically through the lens of NGOs operating within low-income countries and/or in conflict-affected areas.
Our research findings, related recommendations, and the appendix provided (in Google Sheet and PDF format) will hopefully form a basis for all actors in the humanitarian sector seeking to break down silos, tackle the digital divide, and set digital transformation on a better path for the benefit of those they seek to serve. We extend our thanks to the German Federal Foreign Office, whose support made this research possible. Access Now remains committed to extending and defending the digital rights of at-risk people and communities, especially those living in dire conditions, through crises, or caught up in conflict, and we welcome feedback, questions, or comments on this latest report.