Google and Amazon: Acknowledged Risks, And Ignored Responsibilities
In late 2024, we urged Google and Amazon to honor their human rights commitments, to be more transparent with the public, and to take meaningful action to address the risks posed by Project Nimbus, their cloud computing contract that includes Israel’s Ministry of Defense and the Israeli Security Agency. Since then, a stream of additional reporting has reinforced that our concerns were well-founded. Yet despite mounting evidence of serious risk, both companies have refused to take action.
Amazon has completely ignored our original and follow-up letters. Google, meanwhile, has repeatedly promised to respond to our questions. Yet more than a year and a half later, we have seen no meaningful action by either company. Neither approach is acceptable given the human rights commitments these companies have made.
Additionally, Microsoft required a public leak before it investigated and confirmed that its client, the Israeli government, was indeed misusing its services in ways that violated Microsoft's public commitments to human rights. This should have given both Google and Amazon additional reason to take a close look and tell the public what they find, but nothing of the sort has materialized.
Google: Known Risks, No Meaningful Action
Google’s own internal assessments warned of the risks associated with Project Nimbus even before the contract was signed. Major news outlets have reported that Google provides the Israeli government with advanced cloud and AI services under Project Nimbus, including large-scale data storage, image and video analysis, and AI model development tools. These capabilities are exceptionally powerful, highly adaptable, and well suited for surveillance and military applications.
Despite those warnings, and the multiple reports since then of human rights abuses by the very parts of the Israeli government that use Google's and Amazon's services, the companies continue to operate business as usual. They appear to have taken the position that they need not change course, or even publicly explain themselves, unless the media or other external organizations present definitive proof that their tools have been used in specific violations of international human rights or humanitarian law. While that conclusive public evidence has not yet emerged for all the companies, the risks are obvious, and the companies are aware of them. Instead of conducting robust, transparent human rights due diligence, Amazon and Google continue to choose to look the other way.
Google’s own internal assessments undermine its public posture. According to reporting, Google’s lawyers and policy staff warned that Google Cloud services could be linked to the facilitation of human rights abuses. In the same report, Google employees also raised concerns that the company’s cloud and AI tools could be used for surveillance or other militarized purposes, which seems very likely given the Israeli government’s long-standing reliance on advanced data-driven systems to control and monitor Palestinians.
Google has publicly claimed that Project Nimbus is “not directed at highly sensitive, classified, or military workloads” and is governed by its standard Acceptable Use Policies. Yet reporting has revealed conflicting representations about the contract’s terms, including indications that the Israeli government may be permitted to use any services offered in Google’s cloud catalog for any purpose. Google has declined to publicly resolve these contradictions, and its lack of transparency is problematic. The gap between what Google says publicly and what it knows internally should alarm anyone who hopes to take the company’s human rights commitments seriously.
Google’s and Amazon’s AI Principles Require Proactive Action
Even after being revised last year, Google’s AI Principles continue to commit the company to responsible development and deployment of its technologies, including implementing appropriate human oversight, due diligence, and safeguards to mitigate harmful outcomes and align with widely accepted principles of international law and human rights. While the updated principles no longer explicitly commit Google to avoiding entire categories of harmful use, they still require the company to assess foreseeable risks, employ rigorous monitoring and mitigation measures, and act responsibly throughout the full lifecycle of AI development and deployment.
Amazon has similarly committed to responsible AI practices through its Responsible AI framework for AWS services. The company states that it aims to integrate responsible AI considerations across the full lifecycle of AI design, development, and operation, emphasizing safeguards such as fairness, explainability, privacy and security, safety, transparency, and governance. Amazon also says its AI services are designed with mechanisms for monitoring and risk mitigation to help prevent harmful outputs or misuse and to enable responsible deployment across a range of use cases.
Here, the risks are neither speculative nor remote. They are foreseeable, well-documented, and exacerbated by the context in which Project Nimbus operates: an ongoing military campaign marked by widespread civilian harm and credible allegations of grave human rights violations, including genocide. In such circumstances, waiting for definitive proof is not responsible risk management; it is willful blindness.
Modern cloud and AI systems are designed to be flexible, customizable, and deployable at scale, often beyond the vendor’s direct visibility. That reality is precisely why human rights due diligence must be proactive. Waiting for a leaked document or whistleblower account demonstrating direct misuse, as occurred in Microsoft’s case, means waiting until harm has already been done.
Microsoft’s Experience Should Have Been Warning Enough
As noted above, the recent revelations about Microsoft’s technologies being misused in violation of Microsoft’s commitments by the Israeli military illustrate the dangers of this wait-and-see approach. Google and Amazon should not need a similar incident to recognize what is at stake. The demonstrated misuse of comparable technologies, combined with Google’s and Amazon’s own knowledge of the risks associated with Project Nimbus, should already be sufficient to trigger action.
The appropriate response is to act responsibly and proactively.
Google and Amazon should immediately:
- Conduct and publish an independent human rights impact assessment of Project Nimbus.
- Disclose how they evaluate, monitor, and enforce compliance with their AI Principles in high-risk government contracts, including and especially in Project Nimbus.
- Commit to suspending or restricting services where there is a credible risk of serious human rights harm, even if definitive proof of misuse has not yet emerged.
Waiting Is a Choice, and Not One That Protects Human Rights
Google and Amazon publicly emphasize their commitment to responsible AI and respect for human rights. Those commitments are meaningless if they apply only once harm is undeniable and irreversible. In conflict settings, especially where secrecy and information asymmetry are the norm, companies must act on credible risk, not perfect evidence.
Google and Amazon have the knowledge, the leverage, and the responsibility to act now. Choosing not to is still a choice, and one that carries real consequences for people whose lives are already at risk.
Electronic Frontier Foundation
https://www.eff.org/deeplinks/2026/04/google-and-amazon-acknowledged-risks-and-ignored-responsibilities