Am I the baddie?
I am a software engineer. I work for a company that makes software for road construction. Monday last week we were under a bad crunch, and we were told to start using agentic workflows. We had about 50 tickets to close by the following Tuesday. I've been experimenting with AI development for years now, but this was different. I had access to Opus/Sonnet 4.6 and GPT5.4, the latest models. Suddenly, they understood. I could talk in abstract concepts and analogies, and they got them. By the first day I was working through tickets in hours that would have taken me days. But we still had a ton of work and not enough time, and I was still bound to a single thread of work at a time. So, like any problem, I hacked around it. I started with a git worktree, which basically creates a whole other working copy of the project I was working in, and that meant multiple threads.
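A worktree gives each branch its own directory backed by the same repository, so several agents can edit in parallel without colliding. A minimal sketch of the idea, using a throwaway demo repo (the paths and ticket branch names are illustrative, not from my actual setup):

```shell
# Throwaway repo purely for demonstration
cd "$(mktemp -d)"
git init -q main-copy && cd main-copy
git config user.email demo@example.com
git config user.name demo
git commit --allow-empty -q -m "initial commit"

# One extra working copy per ticket, each on its own branch
git worktree add -q ../ticket-101 -b ticket-101
git worktree add -q ../ticket-102 -b ticket-102

# Each path is a full, independent checkout of the same repository
git worktree list
```

Each checkout shares the underlying object store, so the copies are cheap, and `git worktree remove` cleans one up when its ticket lands.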
Still, I was limited to my single service, and the system I work on has around 20 services. Wednesday comes, and I'm still cranking out tickets, when I realized I could create a repo with submodules for every service. The agent works best when it can find the context it needs without being overloaded.
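The umbrella-repo idea can be sketched like this. The service names here are made up, and the demo uses local stand-in repos; in practice each submodule URL would point at a real service remote, and the `protocol.file.allow` override would not be needed:

```shell
# Stand-in "service" repos; in reality these would be ~20 real remotes
cd "$(mktemp -d)"
base="$(pwd)"
for svc in paving grading signage; do
  git init -q "$svc"
  git -C "$svc" config user.email demo@example.com
  git -C "$svc" config user.name demo
  git -C "$svc" commit --allow-empty -q -m init
done

# Umbrella repo stitches every service together as a submodule,
# so one agent session can navigate the whole system on demand
git init -q umbrella && cd umbrella
git config user.email demo@example.com
git config user.name demo
git commit --allow-empty -q -m init
for svc in paving grading signage; do
  # protocol.file.allow is only needed for these local demo paths
  git -c protocol.file.allow=always submodule add -q "$base/$svc"
done
git submodule status
```

The point is that the agent only descends into a submodule when a task requires it, which keeps its context focused instead of flooded with 20 services at once.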
Thursday comes, and we're not going to make it. I've already put in about 40 hours. They said to lean in, so I did. After setting up MCP servers for our ticketing, documentation, communication, and calendar systems, I told the agent to pull ALL of the tickets for the big feature we were working on, go through our documentation and communications for mentions of the feature, and turn that into design requirements. Then, after a Q&A session, we made a plan to implement all the open tickets. My idea was that with the full context, it would be better able to perform.
It worked, or at least it seemed to. I was almost embarrassed about it. I was talking to our systems architect about how everything is different now, and I mentioned this branch of code. He said, "You know what? Let's try it." We brought it to the team, and they figured we should give it a shot. I hadn't actually run the code outside of tests, so our QA team dug into it live. The first ticket worked. The second. The third, and on and on. We went from not going to finish on time to mostly done. We found a few small bugs, but such is the way of software, especially something as complex as this. My side project expanded. I created a CLI, an extension for my IDE to manage local dev environments that could all run independently, and a dashboard that pulls all of my tickets and gives me a button that spins up an agent with special instructions. It pulls the ticket details, writes the code, and pushes it up for me to review. After that I added another button that fixes any issues that come up in review. My workflow became:
- Push button
- Code review
- Maybe push another button
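The button behind that loop could be sketched as a small dispatcher that gives each ticket its own worktree and agent process. Everything here is hypothetical: `agent_cli` is a placeholder for whatever agent command you actually run, the ticket IDs are invented, and the commands are echoed rather than executed so the sketch is safe to run:

```shell
# Hypothetical dispatcher; agent_cli is a placeholder, not a real tool.
# Commands are echoed, not executed.
work_ticket() {
  ticket="$1"
  dir="/tmp/agents/$ticket"
  echo "git worktree add $dir -b $ticket"
  echo "agent_cli --cwd $dir --prompt 'Implement $ticket; push the branch for review'"
}

# "Push button": fan out one agent per open ticket, in parallel
for t in ROAD-101 ROAD-102 ROAD-103; do
  work_ticket "$t" &
done
wait
```

Because each agent lives in its own worktree and branch, the fan-out stage needs no coordination; the only serialization point left is the human code review.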
My boss said I had gone plaid. Hahahaha. My dashboard became sophisticated and my process lean. Now I had a way to interact with the whole system, and I had it solve big problems: ones that would have taken months, solved in a day, two with QA. I had a system to unify our teams and to let business analysts contribute code. Today, a week after I started the project, I talked to two directors and blew their socks off. We're talking about doing something like this for the entire company, and I talked about automating the two buttons. It was a big win. I know I have a big raise coming, and it's likely not enough considering my impact.

I went out with friends, and AI came up. They're pretty sure it's going to lead to disaster. My general P(Doom) is about 60%. As I was leaving, I had the thought: am I profiting off of human suffering? I'm proliferating these systems in more places, and my project will mean we are over-staffed at work. It kind of overwhelmed me. Am I the baddie?