Does AI work feel a bit too habit-forming?
Article URL: https://thedayninja.substack.com/p/why-im-paying-attention-to-ai-overload
Most days I have multiple agents running. One finishes, another is already waiting. You check the result, adjust the prompt, spin up another thread, compare versions, keep going. Working with AI can feel a lot like pulling the handle on a slot machine.
That is the part I do not think we are talking about enough.
Most people talk about AI in terms of productivity. Fair enough. It clearly is productive. But I think there is another side to it that is easier to miss, especially if you are ambitious and already wired to keep going.
The thing that gets me is the reward pattern. Sometimes the output is bad. Sometimes it is nearly right. Sometimes it is surprisingly good. That seems to be enough to keep the loop going.
What worries me is not just the time spent, but the kind of mental state it creates.
The context switching is jarring. Code review, writing, strategy, creative work, then back again. You catch yourself queuing the next prompt during a call while trying to stay present with the person in front of you. You tell yourself it is efficient. Maybe it is. But it also does something to your attention: it makes it fragile.
When I first sensed it, I joked “time to serve the bots”, but I am not really joking anymore.
I pay attention to this because I have already had one serious health wake-up call from prolonged overload. A near-fatal heart attack changes how you interpret certain patterns. You stop romanticising intensity. You become more sensitive to the difference between genuine progress and stimulation that only feels like progress.
That is why this moment with AI has my attention.
Some of this feels familiar in a way I do not entirely like. The difficulty stopping. The compulsion to get to the keyboard the moment you wake. The sense that stepping away is wasteful.
What unsettles me is not just the power of these tools, but the loop they create over the course of a day. I find myself stuck in AI-coding and unable to pull away to do other equally important things.
You prompt the model.
It gives you something back.
You assess it.
You refine it.
You try again.
You open another thread.
You launch another agent.
One finishes and the other is already waiting, ready to be fed.
Building apps most of my life, I know a bit about habit loops. Trigger, action, reward, repetition.
You know these mechanics from social media, games, and other apps you come back to daily. What feels new here is that a similar pattern may now be showing up inside our daily work.
That is what makes it harder to spot.
The compulsion hides inside the output.
When somebody is doom scrolling, it is easier to tell something is off. When somebody is producing, shipping, solving, writing, coding, researching, and responding, the same warning signs are easier to miss. From the outside they look effective. From the inside they may be running hot in a way that is simply not sustainable.
I do not think this means AI itself is bad. I use it every day. I am not interested in anti-technology arguments. I am interested in understanding the human cost of a new kind of cognitive environment.
My instinct is that we are at the beginning of a new category of strain. Not just burnout from volume, but burnout from continuous AI-mediated stimulation. A workday made up of partial attention, variable rewards, unresolved loops, and very few natural stopping points.
That matters because many of the people adopting these tools most aggressively are exactly the people who already struggle to stop. Founders. Operators. Engineers. High-agency knowledge workers. People who can tolerate intensity for a long time and are used to mistaking endurance for invulnerability.
I know that pattern because I have lived too close to it.
That is one reason I care so much about this question, and one reason I am building in this space. I do not think the next generation of mental health support can focus only on crisis. It has to help people notice the pattern earlier, while they still look productive from the outside and before the cost becomes obvious.
We may need better language for this.
We may need better ways to measure it too. Not just output, but decision quality. Not just hours worked, but whether someone can still think clearly, stop cleanly, and return to centre.
AI is going to change how we work. That part is obvious.
What is less obvious is what it may be doing to our minds while we work.
That is the part I want to understand properly.
If you’ve felt this too, I’m collecting responses here.