Agentic AI Patterns Reinforce Engineering Discipline
<img src="https://res.infoq.com/news/2026/03/agentic-engineering-patterns/en/headerimage/generatedHeaderImage-1774683224857.jpg"/><p>Paul Duvall recently discussed his library of engineering patterns for AI assisted development and practices that ground high quality delivery. Related discussions from Paul Stack and Gergely Orosz highlight a shift toward remixing and specification driven development.</p> <i>By Rafiq Gemmail</i>
On a recent AI DevOps Podcast, Paul Duvall discussed how agentic AI patterns are reinforcing core engineering discipline as the capability of modern models increases. He also shared his repository of agentic AI engineering patterns, where he is documenting and evolving practices for AI assisted software development.
Duvall, author of Continuous Integration: Improving Software Quality and Reducing Risk, positioned his collection of patterns as an exploration of how established engineering practices are being adapted through hands-on use of agentic AI in client work. He emphasised grounding AI generated output in shared patterns, stating that "engineering practices are becoming even more relevant when you have AI generating code."
Given the volume of code generated by AI, Duvall emphasised the continued importance of trunk based development, committing early and often, and automated testing, explaining that these become essential for maintaining quality as the rate of change increases.
Duvall also described a shift in how developers interact with code, observing that he is "not reviewing every line of code now" when working with AI generated output, as the volume of change makes this increasingly impractical. Instead, Duvall emphasised relying on automated validation and agentic guard rails, including codified skills that allow agents to review and refine their own output.
Duvall also discussed how approaches such as specification driven development are evolving existing engineering practice. Duvall's repository includes examples of agent readable specifications for an AWS IAM policy generation scenario, defining expected behaviour, constraints, and acceptance criteria up front, enabling agents to generate and validate output against a clear specification. Describing how familiar test first patterns are being adapted to guide AI assisted workflows, he said:
I'm literally... replicating what we did with Agile and XP... it literally says red, green, refactor... I go through that process.
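An agent-readable specification of the kind Duvall describes might pair declarative constraints with acceptance checks that any generated artefact must pass before it is accepted. The sketch below is a hypothetical illustration, not taken from Duvall's repository; it assumes the generated IAM policy arrives as a Python dict, and the spec fields are invented for the example.

```python
# Hypothetical sketch: validating an AI-generated IAM policy against an
# up-front specification (constraints + acceptance criteria).
# Illustrative only; not from Duvall's repository.

SPEC = {
    "allowed_actions": {"s3:GetObject", "s3:ListBucket"},
    "forbid_wildcards": True,        # constraint: no "*" in actions
    "required_effect": "Allow",      # acceptance criterion
}

def validate_policy(policy: dict, spec: dict) -> list[str]:
    """Return a list of violations; an empty list means the policy meets the spec."""
    violations = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]      # IAM allows a single action as a string
        for action in actions:
            if spec["forbid_wildcards"] and "*" in action:
                violations.append(f"wildcard action not allowed: {action}")
            elif action not in spec["allowed_actions"]:
                violations.append(f"action outside specification: {action}")
        if stmt.get("Effect") != spec["required_effect"]:
            violations.append(f"unexpected effect: {stmt.get('Effect')}")
    return violations

# A generated policy that violates the wildcard constraint:
generated = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": ["s3:*"], "Resource": "*"}],
}
print(validate_policy(generated, SPEC))
```

Run against a compliant policy, `validate_policy` returns an empty list, which an agent loop could use as its "green" signal before committing.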
Duvall also highlighted challenges earlier in the agentic lifecycle, particularly around defining intent. He noted that while AI tools can generate code quickly, vague or underspecified inputs often lead to inconsistent or unpredictable results. This has led to increased focus on driving agents with clearer specifications, including structured prompts that describe intent through role, context, and constraints, alongside specification driven development and acceptance tests derived from defined behaviour, noting that "if you don't fully describe what the intent is" you will "have random results."
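A structured prompt of this kind can be as simple as a template that forces the author to state role, context, and constraints explicitly rather than leaving intent implicit. This is a minimal sketch under assumed field names, not Duvall's own template:

```python
# Minimal sketch of a structured prompt template (role / context / constraints).
# The field names and wording are assumptions for illustration only.

PROMPT_TEMPLATE = """\
Role: {role}
Context: {context}
Constraints:
{constraints}
Task: {task}
"""

def build_prompt(role: str, context: str, constraints: list[str], task: str) -> str:
    """Render the template, formatting constraints as a bullet list."""
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return PROMPT_TEMPLATE.format(
        role=role, context=context, constraints=bullet_list, task=task
    )

print(build_prompt(
    role="Senior AWS security engineer",
    context="Generating IAM policies for a least-privilege S3 read workload",
    constraints=["No wildcard actions", "Scope resources to a single bucket"],
    task="Produce a JSON IAM policy and explain which constraints it satisfies",
))
```

Making each field mandatory is the point: a blank or vague `constraints` entry becomes visible before the agent runs, rather than surfacing later as the "random results" Duvall warns about.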
A similar focus on clearer specifications was recently discussed on the DevSecOps Talks podcast with Paul Stack, System Initiative's director of product, who works on SWAMP, an agentic open source platform for automating and validating infrastructure. Stack described how he was restructuring development processes around agents, even to the point of declining direct pull requests in favour of GitHub issue-based workflows that feed into specification-driven development. He said:
We do not accept pull requests... if you have a design... open an issue and we'll interactively walk through this and we'll design it together.
Appearing on Scott Hanselman's podcast, Gergely Orosz, author of The Pragmatic Engineer newsletter, discussed an open source project that refrains from merging pull requests in favour of "remixing", where contributed PRs are rebuilt by agents in line with project standards. Contrasting this with autonomous agents using fully automated "Ralph loops," where subagents iteratively refine solutions until requirements are met, Hanselman acknowledged that although architectural and design "taste" is important in critical systems, the mentality of an "infinitely patient junior engineer" can be well suited to toil.
Stack also emphasised the importance of providing accurate architectural patterns and practices so that agents can "produce the code in a way that was coherent with your codebase," alongside defining architecture, constraints, and testing expectations up front. Echoing Boris Cherny's agentic workflow, previously reported by InfoQ, Stack said that he also uses Claude's "plan mode" to review intent before execution, helping avoid "AI horror stories."
Duvall also pointed to the importance of shifting right and extending these feedback loops into production. He described how observability, telemetry, and even tests in production can be used to shorten feedback cycles, interpreting live signals and feeding them back into the development lifecycle. Looking ahead, he suggested that AI may result in smaller, more focused teams, describing a move towards a "one pizza team" as coordination overhead reduces and automation increases.
Duvall suggested that, as with earlier shifts in engineering, quality is increasingly achieved through automation rather than human inspection. He said:
You're putting in mechanisms... such that the code is reviewed... but it might not be reviewed literally by you every single time.
Duvall and Stack both highlighted that AI assisted development requires a mix of shift left practices and shift right feedback, where behavioural definitions and production state become part of the validation process. Duvall also noted the advantages of AI analysing production telemetry more expansively to identify patterns and surface issues earlier.
Duvall's repository is continuously updated and defines structured patterns with maturity levels across development, security, and operational scenarios. The patterns include specification driven development, codified rules and architectural constraints, atomic decomposition with parallel agents, and observable development for workflows with automated traceability.
Acknowledging the shift beyond code-centric development, Orosz reflected that engineering identity and practice will move up a level, beyond the code itself. He said:
I think there is something much more than coding that makes us special and I think we should cultivate that.