Early Observations from Interviews with Engineering Teams Adopting AI
Article URL: https://jonathannen.com/observations-from-interviews/
Comments URL: https://news.ycombinator.com/item?id=47595563
The teams succeeding with AI coding tools aren't the ones with the best setups. They're the ones that changed how they work.
Since posting 100 PRs and my workflow, I've had the chance to sit down with a small sample of engineering organizations adopting AI. It's not a large enough sample to develop a rubric just yet; for now I'm documenting the clearest patterns I'm seeing.
To be clear -- I'm not talking about greenfield vibecoding. These are established organizations operating at scale, with multiple engineering teams, existing customers, functioning software, and real process[1].
Bimodal adoption
There's a genuine split. Some teams are thriving, others are drowning[2].
On one side, teams that treated AI as a catalyst for rethinking how they build software. They've restructured codebases, changed review processes, rebuilt deployment pipelines, and invested heavily in shared learning. These teams are getting genuine lift.
On the other side, teams that dropped AI into their existing workflow and expected acceleration. What they got was chaos. PRs of wildly varying quality flooding a review process that was already a bottleneck[3]. Engineers learning on the fly with no shared playbook. Senior engineers -- often learning these new tools on the fly themselves -- trying to hold it all together.
The difference isn't talent or tooling -- it's whether leadership treated this as a process transformation or a tool rollout.
We're all "Product Engineers" now
This is the observation that I think is the least discussed. You can already see the shift in job descriptions around "Product Engineer", "Founding Engineer", and "AI Engineer".
One team described it well. Their best engineer -- extremely technical, competent with AI tools -- was bottlenecked on getting new work both deployed and specified. Meanwhile a less senior engineer was talking directly to customers, identifying pain points, and fleshing out potential features. The senior engineer was faster. The other engineer was more productive. That's the shift and it was a source of friction.
This isn't the lane for a lot of engineers. Many became engineers precisely because they wanted to solve well-defined technical problems, not sit in ambiguity deciding what the customer needs. And the teams pushing "product engineering" are finding exactly that split -- some engineers thrive in it, others are genuinely struggling. It's not a training issue. It's a fundamental change in what the job asks of you.
I don't have the answer here, but I think it's the question that matters most for how engineering teams evolve over the next few years.
Tooling over process
Engineers are spending serious time turning Claude Code into their own customized hotrod. Custom CLI/MCP servers, elaborate prompt chains, multi-step agent orchestrations. I get the appeal -- but for established codebases you get most of the benefit from a (relatively) simple setup.
One team had spent weeks on an elaborate agent pipeline and was still shipping fewer PRs than before they started. In my case, I default to a simple single-prompt approach more often than not.
Part of the problem is where we're drawing "best practice" from. Social media is full of solo developers or small teams using supercharged setups to smash out greenfield projects. Impressive -- but it does not translate to established codebases, teams, and processes. In those environments, supercharging just breaks things.
The review bottleneck
The same pattern shows up with process. Instead of asking "how might we work differently?" teams are shoe-horning AI into their existing workflow unchanged.
Take PR reviews. When I started my career, PRs didn't exist. In many organizations we committed to trunk and relied on code review meetings, pair programming, and honestly, trust. The PR-for-everything orthodoxy emerged over the last fifteen years and it served us well when the bottleneck was code quality and awareness. But it was always a trade-off against throughput, and the dynamics of that trade-off have changed dramatically[4].
Do you need a senior engineer to review every AI-generated change? I keep hearing "hell yes." I don't buy it for a raft of reasons. A copy change, a dependency bump, a refactor with full test coverage -- these don't need the same scrutiny as a new authentication flow. Risk-tiered reviews aren't a new idea, but they become essential when you're generating code at five or ten times the previous rate.
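To make risk-tiered reviews concrete, here is a minimal sketch of what the routing logic could look like. The tier names, file rules, and thresholds are all hypothetical illustrations, not a description of any team's actual setup:

```python
# Hypothetical sketch of risk-tiered PR review routing.
# The tier names and rules below are illustrative, not from any real team.

def review_tier(changed_paths, touches_auth=False, has_full_test_coverage=False):
    """Return 'auto', 'peer', or 'senior' for a pull request."""
    LOW_RISK_SUFFIXES = (".md", ".txt")              # docs and copy changes
    DEPENDENCY_FILES = {"package-lock.json", "poetry.lock", "go.sum"}

    if touches_auth:
        return "senior"       # new auth flows always get senior scrutiny
    if all(p.endswith(LOW_RISK_SUFFIXES) for p in changed_paths):
        return "auto"         # copy changes: merge on green CI
    if all(p.split("/")[-1] in DEPENDENCY_FILES for p in changed_paths):
        return "auto"         # dependency bumps: trust the lockfile plus CI
    if has_full_test_coverage:
        return "peer"         # well-covered refactors: any peer can approve
    return "senior"           # default to the scarce resource


print(review_tier(["docs/intro.md"]))                          # auto
print(review_tier(["src/auth/login.py"], touches_auth=True))   # senior
```

The point is not this particular function, but that the routing decision becomes explicit and cheap, so senior attention is spent only where the blast radius justifies it.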
There are tools to make higher throughput safe. Feature flags. Progressive rollouts. Automated test coverage that gives you confidence. At Pyn we have Customer Success shipping PRs[5]. The time going into supercharged-code-tooling should be going into unlocking these process bottlenecks.
It's not just the engineering teams. Equally, you may have Customer Success teams stressed because features are shipping too fast for them to communicate or document effectively. That's another area where process improvements and adjusted expectations are essential. These changes do not exist in a silo; for a software organization, the shift touches almost every single employee.
Not committing to the change
Several teams are running hackathons or "AI weeks." Usage spikes, people get excited, and then it fades. Within a month they're back to their old workflow.
Part of the problem is that hackathon side projects on greenfield ideas don't build skills for the day-to-day of an established codebase. And if the hackathon is on the day-to-day, it can make things worse. It's easy to make a mess of an existing codebase. You need to imagine that Claude is a competent software engineer that has just joined your team -- it needs onboarding to be successful[6].
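The onboarding framing can be made concrete with a project memory file. Claude Code reads a CLAUDE.md at the repository root; the contents below are an invented example of the kind of onboarding notes a team might write, not a real project's file:

```markdown
# CLAUDE.md -- hypothetical onboarding notes for an agent joining this codebase

## Architecture
- Monolith in `app/`; background jobs in `workers/`. Do not add new top-level directories.

## Conventions
- Run `make test` before proposing a change; never skip or delete failing tests.
- Match the surrounding style; no new dependencies without asking first.

## Danger zones
- `app/billing/` and `app/auth/` require a human in the loop for any change.
```

Like a new hire's onboarding doc, it encodes the tribal knowledge that keeps a capable newcomer from confidently making a mess.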
Once the hackathon is over, "urgent get this done today" tickets come charging back. Under pressure people revert to what they know and what they can rely on.
Engineers doing AI adoption should be spending half their time right now on upskilling and improving process. That sounds like a lot. It isn't. The productivity gap between someone who's internalized these tools and someone still fighting them is enormous. And it compounds[7].
The other missing piece is systemic sharing. The person who figured out a great prompting pattern last Tuesday? Nobody else on the team knows about it. Individual experiments stay individual. Without a structure for sharing what works, adoption stalls[8].
The teams where leadership is mandating AI use but not carving out time for learning? Those are the stressed ones.
The craft question
I do not pretend to have a good answer on this, but I do know that ignoring it has the potential to be toxic.
In almost every interview, someone brings up the craft of coding[9]. Sometimes it's engineers themselves, sometimes a lead describing resistance. The framing is always some version of: "my engineers love coding and they feel like this is taking that away." This is the biggest adoption barrier I'm seeing. Leave it unresolved and you get a dysfunctional team -- some leaning in, others quietly resisting, some outright protesting.
The surface concern -- that AI replaces the satisfying parts of the job -- is the easier one to address. The teams doing well started with the tedious stuff. Dependency upgrades, boilerplate, test data generation. Not the features engineers care about. One lead told me resistance dropped significantly with that reframing alone.
The deeper concern is harder: that leaning on AI makes you shallow. That you generate solutions without truly understanding the system. This is legitimate and dismissing it is a mistake. I have views on how this can be framed, but there are no right answers. Whilst I'm a big proponent of these tools, this is a risk that is almost certain to materialize in the coming years. At the moment it just depends what timeframe you are optimizing for.
No surprise. The thriving teams all had high alignment on this -- not by accident, but by having the conversation explicitly.
Where this lands
I know it's a small sample so far, but the pattern from these interviews is consistent. The teams pulling ahead invested in process before tooling, created structures for shared learning, and gave their engineers time to adapt. The teams that skipped those steps bought the license and are still waiting for the magic.
The gap between these two groups will be wider in a year, not narrower. The compounding has barely started.
Get in touch if you're interested in being interviewed.