MIRI Newsletter #124
December 2, 2025
Harlan Stewart
Fundraiser
For the first time in six years, MIRI is running a fundraiser! And it’s an ambitious one: we’re trying to raise $6M ($4.4M from donors plus 1:1 matching on the first $1.6M raised, thanks to an SFF grant).
The mission is going well. If Anyone Builds It, Everyone Dies is generating wide-reaching public discussion about the danger of superintelligence. The MIRI Technical Governance Team published a first-of-its-kind report with a drafted example of an international agreement to halt the development of superintelligence. We’re spending an increasing amount of time in conversation with policymakers in D.C. about this issue, and just a few days ago our CEO Malo Bourgon testified to the Canadian House of Commons.
But there is still a great deal to be done. MIRI currently has enough funds to continue doing our work for 15 months. Succeeding in this fundraiser would mean raising that to 24 months, which is a level where we can feel a lot more confident in making plans to expand our efforts, hire more people, and try a range of experiments to alert people to the danger of superintelligence and help them make a difference.
If you would like to support our work, you can donate here, and we thank you!
Other MIRI updates
- The September launch of Eliezer Yudkowsky and Nate Soares’ If Anyone Builds It, Everyone Dies was a success. The book is a New York Times bestseller and, according to Audible and the New Yorker, one of the best books of the year. It has been strongly praised by a wide variety of voices, from Whoopi Goldberg, to Steve Bannon, to Grimes, to Ben Bernanke, to too many others to name here.
- Eliezer and Nate are participating in interviews to discuss the book. These have included CNN, ABC, NPR, CBS, The Ezra Klein Show, Hank Green, Making Sense with Sam Harris, and many others, with more to come.
- In a new report, the MIRI Technical Governance Team proposes an illustrative international agreement to halt the development of superintelligence until it can be done safely. In a series of follow-up posts, the authors of the report explain their reasoning about compute thresholds, their optimism that the agreement could be enforced, and the historical precedent for defensive coalitions between states.
- The MIRI Technical Governance Team welcomed two new members: Brian Abeyta and Robi Rahman. Brian was previously the Pakistan Director at the White House National Security Council, and Robi is a data scientist who previously contributed to research at Epoch. We’re excited to have them on the team!
- MIRI researcher Max Harms, author of Crystal Society, recently published Red Heart, a speculative thriller about Chinese AGI. AI forecaster Daniel Kokotajlo says: “I overall recommend this book & am tickled by the idea that Situational Awareness, AI 2027, and Red Heart basically form a trio.”
- MIRI CEO Malo Bourgon spoke before the Canadian House of Commons ETHI committee. In his testimony, Malo explained the extreme risks posed by the race to build superintelligence.
Best,
Harlan Stewart
Machine Intelligence Research Institute