
5 architects of AI share the pros and cons of superintelligence

Business Insider · by Reem Makhoul, Robert Leslie, and Jessica Orwig · April 3, 2026 · 6 min read

AI architects discuss tech's growing autonomy and its impact on jobs, education, science, healthcare, and more.


  • Former AI insiders highlight AI's potential to transform jobs, research, healthcare, and more.

  • They also warn that competition over AI is increasing risk, and they call for stronger safety measures.

  • "The technologies are oftentimes neutral. It's what people do with them that makes the difference," Craig Mundie said.

Craig Mundie, a former Microsoft executive, said artificial intelligence is no longer just a tool — it's becoming something closer to an independent intelligence, with consequences that could reshape society.

Across a series of interviews with Business Insider, Mundie and four other former leaders from OpenAI, Google, DeepMind, and the White House described a future where AI systems grow more capable, more autonomous, and harder to control.

Their warnings converge on the idea that this technology is advancing faster than society can manage it.

Craig Mundie is a former Microsoft executive who runs Mundie & Associates, a firm that advises organizations on AI and other technologies.


Within the coming years, they said, AI could transform labor markets, concentrate power, and introduce new risks — from cyberattacks to autonomous weapons — while also offering breakthroughs in medicine and education.

Ultimately, the outcome — for better or worse — depends on how humans choose to deploy it, they say.

Jobs could disappear faster than systems can adapt

Mo Gawdat, a former chief business officer at Google, said AI could replace jobs in less than five years, starting with intellectual work and eventually extending to physical labor.

"You will have AI's agency in the real world, where they actually can carry things and move things and replace every job," Gawdat said. "The intellectual jobs and then the blue-collar jobs."

Mo Gawdat is a former chief business officer at Google X and currently advises on AI development and human happiness.


Camille Stewart Gloster, who served as the White House deputy national cyber director from 2022 to 2024, said companies are already restructuring work. Tasks like early research and document review are shrinking, reshaping what entry-level roles look like.

She described the labor market shifting from a pyramid shape with many entry-level roles to a diamond shape with more roles concentrated in the middle tiers.

That transition could leave workers behind. Stewart Gloster said some companies are cutting jobs before understanding what skills they will need next, calling the move premature.

Daniel Kokotajlo, a former OpenAI researcher, said the shift may not be gradual. Instead, AI systems could "come smashing through all at once," automating large parts of the economy in a short period.

A powerful technology with few guardrails

Ramana Kumar is a former DeepMind research scientist.


Ramana Kumar, a former DeepMind research scientist, said modern AI systems are designed to be convincing, not necessarily truthful. That makes them useful, but also risky.

Kokotajlo added that even today's systems lack reliable controls, noting they can produce misleading information despite being trained not to.

The concern grows as systems gain autonomy.

Future AI could integrate vast amounts of information in ways humans cannot, giving it a unique advantage in solving complex problems, Mundie said.

Daniel Kokotajlo is a former OpenAI researcher who is now the executive director of the AI Futures Project.


That capability could unlock major advances. Mundie pointed to healthcare as one of the most immediate areas of impact, where AI could determine the root cause of diseases by analyzing entire systems of the human body rather than narrow specialties.

"You don't have to ask the same question to 10 specialists," Mundie said. "The machine does that for you."

Still, the same power creates new vulnerabilities. Stewart Gloster warned that generative AI can scale cyber threats, making phishing attacks more convincing and easier to produce.

"They can also use generative AI to find out a lot about you and really tailor their communications very quickly to cause you harm and to get information from you," Stewart Gloster said.

The race to build smarter AI is accelerating risk

Camille Stewart Gloster is the former White House deputy national cyber director and is now the CEO and Principal of CAS Strategies, LLC.


Several experts pointed to competition between companies and countries as a key driver of risk.

AI labs and governments are racing to build more powerful systems quickly, which could lead to widespread deployment of superintelligent AI before safety measures are in place, Kokotajlo said.

"I think that this race pressure will cause the leaders of these countries and the leaders of these companies to aggressively deploy their superintelligences into the economy and also into the military," he said.

Another concern is concentration of power, Kumar said. The organizations that control data centers and models could hold disproportionate influence over how AI is used and deployed.

That dynamic makes regulation difficult. Stewart Gloster — who advises policymakers, industry leaders, and global organizations on matters of technology, cybersecurity, and national security — added that governments are still figuring out what rules are needed, while companies themselves are uncertain about the skills and systems they should prioritize.

A future shaped by human decisions

Despite the risks, the experts described potential benefits that could reshape daily life.

In particular, AI could act as a constant tutor, making education more accessible, Mundie said. It could also accelerate scientific discovery, climate research, and medical treatment.

Some envision a future where work becomes less central.

Kumar said AI could reduce the need for long hours, freeing people to spend time elsewhere — though that would require changes to how income and economic systems function.

Ultimately, Gawdat framed the outcome as a choice: Applied well, AI could lead to a more abundant society; applied poorly, it could deepen inequality, erode trust, and concentrate power.

"The technologies are oftentimes neutral," Mundie said. "It's what people do with them that makes the difference."

AI may become the most powerful system humans have built. What happens next depends less on the machines themselves and more on the people shaping them.

