Examining popular arguments against AI existential risk: a philosophical analysis
Concerns about artificial intelligence (AI) and its potential existential risks have garnered significant attention, with figures like Geoffrey Hinton and Demis Hassabis advocating for robust safeguards against catastrophic outcomes. Prominent scholars, such as Nick Bostrom and Max Tegmark, have further advanced the discourse by exploring the long-term impacts of superintelligent AI. However, this existential risk narrative faces criticism, particularly in popular media, where scholars like Timnit Gebru, Melanie Mitchell, and Nick Clegg argue, among other things, that it distracts from pressing current issues. Despite extensive media coverage, skepticism toward the existential risk discourse has received limited rigorous treatment in the academic literature.
References
- Allyn, B. (2024, September 29). California Gov. Newsom vetoes AI safety bill that divided Silicon Valley. NPR. https://www.npr.org/2024/09/20/nx-s1-5119792/newsom-ai-bill-california-sb1047-tech
- Ambartsoumean, V. M., & Yampolskiy, R. V. (2023). AI risk skepticism, a comprehensive survey. arXiv. https://doi.org/10.48550/arXiv.2303.03885
- Bengio, Y. (2023). AI and catastrophic risk. Journal of Democracy, 34(4), 111–121. https://doi.org/10.1353/jod.2023.a907692
- Bengio, Y., Hinton, G., Yao, A., Song, D., Abbeel, P., Darrell, T., Harari, Y. N., Zhang, Y. Q., Xue, L., Shalev-Shwartz, S., Hadfield, G., Clune, J., Maharaj, T., Hutter, F., Baydin, A. G., McIlraith, S., Gao, Q., Acharya, A., Krueger, D., & Mindermann, S. (2024). Managing extreme AI risks amid rapid progress. Science, 384(6698), 842–845. https://doi.org/10.1126/science.adn0117
- Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
- Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., Ó hÉigeartaigh, S., Beard, S., Belfield, H., Farquhar, S., & Amodei, D. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv. https://doi.org/10.48550/arXiv.1802.07228
- Buolamwini, J. (2023). Unmasking AI: My mission to protect what is human in a world of machines. Random House.
- Carlsmith, J. (2024). Is power-seeking AI an existential risk? arXiv. https://doi.org/10.48550/arXiv.2206.13353
- Dung, L. (2024). The argument for near-term human disempowerment through AI. AI and Society. https://doi.org/10.1007/s00146-024-01930-2
- Elias, J. (2024, February 28). Google CEO tells employees Gemini AI blunder is ‘unacceptable’. NBC News. https://www.nbcnews.com/tech/tech-news/google-ceo-tells-employees-gemini-ai-blunder-unacceptable-rcna140926
- Future of Life Institute (2023, March 22). Pause Giant AI Experiments: An Open Letter. Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/
- Ganguli, D., Hernandez, D., Lovitt, L., Askell, A., Bai, Y., Chen, A., Conerly, T., Dassarma, N., Drain, D., Elhage, N., El Showk, S., Fort, S., Hatfield-Dodds, Z., Henighan, T., Johnston, S., Jones, A., Joseph, N., Kernian, J., Kravec, S., & Clark, J. (2022). Predictability and surprise in large generative models. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1747–1764. https://doi.org/10.1145/3531146.3533229
- Good, I. J. (1965). Speculations concerning the first ultraintelligent machine. In F. Alt & M. Rubinoff (Eds.), Advances in computers (Vol. 6). Academic Press.
- Grunewald, E. (2023, December 21). Attention on Existential Risk from AI Likely Hasn’t Distracted from Current Harms from AI. Erich Grunewald’s Blog. https://www.erichgrunewald.com/posts/attention-on-existential-risk-from-ai-likely-hasnt-distracted-from-current-harms-from-ai/
- Heaven, W. D. (2023, May 2). Geoffrey Hinton tells us why he’s now scared of the tech he helped build. MIT Technology Review. https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/
- Hendrycks, D., Mazeika, M., & Woodside, T. (2023). An overview of catastrophic AI risks. arXiv. https://doi.org/10.48550/arXiv.2306.12001
- Kasirzadeh, A., & Gabriel, I. (2023). In conversation with artificial intelligence: Aligning language models with human values. Philosophy & Technology, 36(2), Article 27. https://doi.org/10.1007/s13347-023-00606-x
- Lee, W. (2024, September 17). Gov. Newsom signs AI-related bills regulating Hollywood actor replicas and deep fakes. Los Angeles Times. https://www.latimes.com/entertainment-arts/business/story/2024-09-17/newsom-ai-bills-sag-aftra
- Levin, P. L. (2024, January 24). The real issue with artificial intelligence: The misalignment problem. The Hill. https://thehill.com/opinion/4427702-the-real-issue-with-artificial-intelligence-the-misalignment-problem/
- Manancourt, V., & Bristow, T. (2024, September 20). Meta’s Nick Clegg tears into Rishi Sunak’s AI doomerism. POLITICO. https://www.politico.eu/article/meta-nick-clegg-tears-rishi-sunak-ai-doomerism-ai-summit-national-security/
- Maslej, N., Fattorini, L., Perrault, R., Parli, V., Reuel, A., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Niebles, J. C., Shoham, Y., Wald, R., & Clark, J. (2024). The AI Index 2024 Annual Report. AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA. https://aiindex.stanford.edu/report/
- McLean, S., Read, G. J. M., Thompson, J., Baber, C., Stanton, N. A., & Salmon, P. M. (2023). The risks associated with artificial general intelligence: A systematic review. Journal of Experimental and Theoretical Artificial Intelligence, 35(5), 649–663. https://doi.org/10.1080/0952813X.2021.1964003
- Milmo, D. (2023a, October 24). AI risk must be treated as seriously as climate crisis, says Google DeepMind chief. The Guardian. https://www.theguardian.com/technology/2023/oct/24/ai-risk-climate-crisis-google-deepmind-chief-demis-hassabis-regulation
- Milmo, D. (2023b, October 29). AI doomsday warnings a distraction from the danger it already poses, warns expert. The Guardian. https://www.theguardian.com/technology/2023/oct/29/ai-doomsday-warnings-a-distraction-from-the-danger-it-already-poses-warns-expert
- Müller, V. C., & Cannon, M. (2022). Existential risk from AI and orthogonality: Can we have it both ways? Ratio, 35(1), 25–36. https://doi.org/10.1111/rati.12320
- Nolan, B. (2023, May 25). Ex-Google CEO Eric Schmidt says AI poses an ‘existential risk’ that could kill or harm ‘many, many people’. Business Insider. https://www.businessinsider.com/google-eric-schmidt-ai-poses-an-existential-risk-kill-people-2023-5
- Omohundro, S. M. (2008). The basic AI drives. Proceedings of the First AGI Conference, 483–492. https://doi.org/10.5555/1566174.1566226
- Ord, T. (2020). The precipice: Existential risk and the future of humanity. Hachette Books.
- Pascual, M. G. (2024, April 15). Melanie Mitchell: ‘The big leap in artificial intelligence will come when it is inserted into robots that experience the world like a child’. EL PAÍS English. https://english.elpais.com/technology/2024-04-14/melanie-mitchell-the-big-leap-in-artificial-intelligence-will-come-when-it-is-inserted-into-robots-that-experience-the-world-like-a-child.html
- Pinker, S. (2019). Enlightenment now: The case for reason, science, humanism and progress. Penguin Books.
- Richards, B., Agüera y Arcas, B., Lajoie, G., & Sridhar, D. (2023, July 18). The illusion of AI’s existential risk. NOEMA. https://www.noemamag.com/the-illusion-of-ais-existential-risk
- Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin.
- Ryan-Mosley, T. (2023, June 12). It’s time to talk about the real AI risks. MIT Technology Review. https://www.technologyreview.com/2023/06/12/1074449/real-ai-risks/
- Sandbrink, J. B. (2023). Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools. arXiv. https://arxiv.org/abs/2306.13952
- Science Media Centre (2023, May 30). Expert reaction to a statement on the existential threat of AI published on the Centre for AI Safety website. https://www.sciencemediacentre.org/expert-reaction-to-a-statement-on-the-existential-threat-of-ai-published-on-the-centre-for-ai-safety-website/
- Soice, E. H., Rocha, R., Cordova, K., Specter, M., & Esvelt, K. M. (2023). Can large language models democratize access to dual-use biotechnology? arXiv. https://doi.org/10.48550/arXiv.2306.03809
- Tegmark, M. (2018). Life 3.0: Being human in the age of artificial intelligence (First Vintage books edition). Vintage Books.
- The editorial board. (2023). Stop talking about tomorrow’s AI doomsday when AI poses risks today. Nature, 618(7967), 885–886. https://doi.org/10.1038/d41586-023-02094-7
- Thorstad, D. (2024, March 22). Harms (Part 1: Distraction). Reflective Altruism. https://reflectivealtruism.com/2024/03/22/harms-part-1-distraction/
- Tung, L. (2016). Google Alphabet’s Schmidt: Ignore Elon Musk’s AI fears—He’s no computer scientist. ZDNET. https://www.zdnet.com/article/google-alphabets-schmidt-ignore-elon-musks-ai-fears-hes-no-computer-scientist/
- Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022). Dual use of artificial-intelligence-powered drug discovery. Nature Machine Intelligence, 4(3), 189–191. https://doi.org/10.1038/s42256-022-00465-9
- Verma, P., Zakrzewski, C., & Tiku, N. (2024, July 13). OpenAI illegally barred staff from airing safety risks, whistleblowers say. Washington Post. https://www.washingtonpost.com/technology/2024/07/13/openai-safety-risks-whistleblower-sec/
- Vermeer, M. J. D., Lathrop, E., & Moon, A. (2025). On the extinction risk from artificial intelligence. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA3034-1.html
- Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E. H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., & Fedus, W. (2022). Emergent abilities of large language models. Transactions on Machine Learning Research. https://openreview.net/forum?id=yzkSU5zdwD
- Wilmoth, P. (2024, March 11). Is AI an Existential Risk? Q&A with RAND Experts. RAND Research & Commentary. https://www.rand.org/pubs/commentary/2024/03/is-ai-an-existential-risk-qa-with-rand-experts.html
- Yampolskiy, R. V. (2024). On monitorability of AI. AI and Ethics. https://doi.org/10.1007/s43681-024-00420-x
Ethics and Information Technology
https://link.springer.com/article/10.1007/s10676-025-09881-y