
Just Because We Can: The Strategic Risks Of Automating Everything

Crunchbase News · by Itay Sagie · April 3, 2026 · 4 min read

While AI and automation can be powerful, many applications use complex global systems to solve simple problems that could be handled locally. Guest author Itay Sagie outlines three risks of automating everything without discipline, and urges more thoughtful use of technology.

Recently, I caught myself saying: “OK, Google, turn on the shower vent.”

Within seconds, my voice left my home in Haifa, traveled through submarine fiber networks to Europe, was processed in a Google data center, possibly routed through additional vendor clouds across continents, and then made its way back, only to activate a switch sitting 10 inches from my face. The techie nerd in me gets excited every time this happens. But … I could have just raised my hand and pressed it.

We live in times that are both incredible and absurd. Our growing tendency to deploy global systems, spanning multiple vendors and continuous compute, to solve problems that were already solved locally is something I feel we need to discuss.

To be clear, I am very much in favor of automation and agentic AI. I am taking agentic AI courses myself to keep up with the times and with the latest capabilities. In many cases, these tools are transformative for businesses and consumers. Especially at scale, in repetitive processes, in data-heavy environments, or where accessibility matters, AI agents do unlock real value.

But not every problem belongs in that category. And I feel an increasing number of AI-based applications and workflow automations tend to fall in the “shower vent” category.

You may think this isn’t an issue: What does it matter if we bring the tech revolution to solve ridiculous tasks, just because we can?

But there are drawbacks and risks to the automate-everything ethos.

Three risks of automating without discipline

Operational risk: more points of failure, less control: That simple command depends on multiple systems working in sync: your device, your network, Google’s infrastructure and potentially a third-party vendor cloud.

If any layer fails, the system fails. The same pattern is emerging in agentic AI workflows: multistep pipelines across LLMs, orchestration tools and external APIs. Every added layer is another dependency and another point of failure.
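The "every layer must work" point can be made concrete with a back-of-envelope reliability calculation. In a serial chain, end-to-end availability is the product of per-layer availabilities, so the chain is always less reliable than its weakest link. The uptime figures below are illustrative assumptions, not measured values:

```python
# Illustrative only: end-to-end availability of a serial chain of dependencies.
# Per-layer uptime figures are assumptions for the sake of the example.
layer_availability = {
    "device": 0.999,
    "home_network": 0.995,
    "cloud_llm_api": 0.999,
    "orchestration_layer": 0.998,
    "third_party_vendor": 0.995,
}

end_to_end = 1.0
for name, availability in layer_availability.items():
    end_to_end *= availability  # serial chain: every layer must be up at once

downtime_hours_per_year = (1 - end_to_end) * 24 * 365
print(f"End-to-end availability: {end_to_end:.4f}")
print(f"Expected downtime: {downtime_hours_per_year:.0f} hours/year")
```

Five individually solid layers (each at 99.5%+ uptime) still combine into roughly 98.6% end-to-end availability, i.e. on the order of five days of cumulative downtime a year. A wall switch has one layer.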

To give another example from my personal life: When my parents bought their current home, they outfitted it as a “smart home.” It worked great, until a smart light switch malfunctioned and the smart-home company asked for $1,500 to send a specialized “smart home engineer” to fix what would have been a $5 DIY repair. This is the equivalent of hiring AI engineers and automation experts to support a workflow that a junior, nontechnical person could have handled in 10 minutes.

And that brings me to the next point.

Economic risk: hidden and compounding costs: Voice commands and AI workflows feel inexpensive at small scale, but they rely on paid infrastructure: compute, API calls, tokens, orchestration layers and vendor integrations.

In many cases, especially at scale, when implemented for those “ridiculous” tasks, the cost of automation can approach, or exceed, the value of the task being automated. We must ensure we invest in AI and automation where it makes economic sense.
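The economic test above can be sketched as simple arithmetic: multiply per-invocation cost by volume and compare it to the value of the task. All figures below are illustrative assumptions, not real pricing:

```python
# Back-of-envelope check: does automating a trivial task pay for itself at scale?
# Every number here is an assumption for illustration, not real vendor pricing.
value_per_task_usd = 0.002       # assumed value of one trivial task (e.g. a switch flip)
llm_cost_per_call_usd = 0.003    # assumed token + API cost per invocation
orchestration_cost_usd = 0.001   # assumed share of pipeline/vendor overhead per call

cost_per_task = llm_cost_per_call_usd + orchestration_cost_usd
invocations_per_year = 1_000_000

annual_cost = cost_per_task * invocations_per_year
annual_value = value_per_task_usd * invocations_per_year

print(f"Annual cost:  ${annual_cost:,.0f}")
print(f"Annual value: ${annual_value:,.0f}")
print("Automation pays off" if annual_value > annual_cost else "Cost exceeds value")
```

With these assumed numbers the automation costs twice what the task is worth, and the gap widens with volume, which is exactly the "cost can exceed value" trap for the shower-vent category of tasks.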

Environmental and strategic risk: scaling inefficiency: Data centers already generate hundreds of millions of tons of CO₂ emissions annually, a figure estimated to grow to 2.5 billion tons by 2030, and AI accounts for a growing share of that total.

While each small agentic AI workflow may account for only a few grams of CO₂ emissions, at scale these inefficiencies compound into real environmental impact. More importantly, this reflects a strategic issue: optimizing for its own sake, a mindset that pulls focus away from solving meaningful problems.
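The grams-to-megatons compounding is easy to check. Taking a few grams per workflow run and an assumed global invocation volume (both numbers are illustrative assumptions, not measurements):

```python
# Illustrative scale-up: a few grams of CO2 per workflow run, compounded.
# Both inputs are assumptions chosen to show the order of magnitude, not data.
grams_co2_per_workflow = 4        # assumed footprint of one agentic workflow run
runs_per_day = 50_000_000         # assumed global invocations per day

grams_per_year = grams_co2_per_workflow * runs_per_day * 365
annual_tons = grams_per_year / 1_000_000  # grams -> metric tons

print(f"~{annual_tons:,.0f} metric tons of CO2 per year")
```

Under these assumptions, a "negligible" 4 grams per run becomes tens of thousands of tons annually, which is how individually trivial automations add up to the megaton scale described above.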

Itay Sagie is a strategic adviser to tech companies and investors, specializing in strategy, growth and M&A, a guest contributor to Crunchbase News, and a seasoned lecturer. Learn more about his advisory services, lectures and courses at SagieCapital.com. Connect with him on LinkedIn for further insights and discussions.

Illustration: Dom Guzman

Stay up to date with recent funding rounds, acquisitions, and more with the Crunchbase Daily.
