How AI is reshaping the way data practitioners work
What happens to data work when AI changes everything? The hosts of The View on Data podcast share what's shifting and what isn't.
In this episode of The View on Data, hosts Faith McKenna, Paige Berry, and Erica "Ric" Louie sit down with Sam Ferguson, staff product designer at dbt Labs, to talk about what it means to design for data practitioners and how AI is reshaping that work in real time. The conversation covers everything from embedding natural language directly in SQL, to the typist analogy you didn't know you needed, to why your code comments might be more valuable than you think.
🎧 Listen & subscribe: Spotify | Apple Podcasts | Amazon Music | YouTube
From Mode to dbt: designing for the people who write the queries
Sam came to dbt Labs after nearly a decade at Mode Analytics, where the product was built around a core belief: analysts who write SQL deserve great tooling. That philosophical overlap with dbt made the transition feel natural. Both companies were thinking hard about code-first data practitioners long before it was a mainstream conversation.
But the move also meant expanding her mental model. At Mode, the center of gravity was the analyst. At dbt, it shifted upstream to analytics engineers and data engineers, people whose job is less about ad hoc answers and more about building the infrastructure that makes those answers possible.
One of the projects that bridged those two worlds was an AI assist feature she worked on at Mode, back when text-to-SQL was just starting to emerge. The idea: instead of switching between a chat window and your SQL editor, you could embed natural language directly inside your query. Write the SQL you know, leave a placeholder in plain English for the parts you don't, and let the model fill in the gaps. The closer the prompt was to the actual output, the better the result.
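To make the idea concrete, here's a sketch of what that pattern looks like. The `{{ai: ...}}` placeholder syntax and the table and column names are invented for illustration; this is not Mode's actual feature syntax.

```sql
select
    orders.customer_id,
    -- The practitioner writes the SQL they already know...
    count(*) as order_count,
    sum(orders.amount) as total_revenue,
    -- ...and leaves a plain-English placeholder for the part they don't.
    -- The model fills this in using the surrounding query as context.
    {{ai: "days between each customer's first and most recent order"}}
from orders
group by orders.customer_id
```

Because the prompt sits right next to the columns and tables it refers to, the model reads the surrounding query as context instead of guessing from a detached chat window, which is exactly the proximity effect Sam described.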
That experiment is still shaping how Sam thinks about AI UX today, specifically whether inline context (like code comments) might outperform separate AI instruction directories. More on that in a second.
Sam noted that the closer a user's natural-language prompts sat to the code they described, the higher the quality of the output. Ric and Paige didn't need convincing.
Ric talked about writing comments not just for other humans, but for the logic itself: why this filter exists, what this variable is actually doing, what the weird edge case is and why it matters. Paige echoed it. She's learned more from reading well-commented code than from almost any other source.
The "feed two birds with one scone" framing stuck: good commenting practices already make your codebase easier for humans to navigate. They might also make it significantly easier for agents to work with. That's not a trade-off. That's just a bonus.
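The kind of comment Ric described explains the why, not just the what. A short sketch of what that looks like in practice, with table and column names invented for illustration:

```sql
select *
from stg_orders
-- Exclude internal test accounts. QA creates these under a reserved
-- email domain; leaving them in inflates conversion rates.
where customer_email not like '%@qa-internal.example.com'
  -- Orders before 2021-03-01 predate the currency migration and store
  -- amounts in cents rather than dollars, so normalize before filtering.
  and (case when ordered_at < '2021-03-01'
            then amount / 100.0
            else amount end) > 0
```

A human reading this knows why the filter exists and what the edge case is; an agent asked to modify the query gets the same context for free.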
The typist analogy: what happens when a constraint disappears
Sam referenced a book called Reshuffle that offers a useful frame for thinking about how roles evolve under technological pressure. The example: typists.
Typing used to be a discrete, specialized job because editing was expensive. If you made a mistake on a typewriter, you started over. Word processors didn't just make typists faster. They removed the constraint that made the role distinct. Typing got absorbed into everything else.
The parallel for product design: one of the biggest historical constraints has been the cost of building. Designers spent enormous time simulating products before they existed, through mockups, prototypes, and user tests, to de-risk development before a single line of production code was written.
Now, anyone at dbt Labs can come to a meeting with a working concept. The simulation phase is collapsing. Which means design needs to reorganize around different constraints.
Sam's candidate for the new constraint: human attention. In a world of infinite features and infinite answers, the scarce resource isn't information. It's cognitive bandwidth. What gets prioritized? What gets ignored? What does "good" look like when you can generate a hundred options before lunch?
For data teams, the parallel lands in a similar place. If agents can answer any question, the hard part becomes figuring out which questions actually matter.
Outcome-driven workflows: familiar territory, new stakes
A big theme in the episode was the shift from output-driven to outcome-driven AI workflows. In the early GitHub Copilot days, the goal was code completion: predict the next line, write faster. Then it became code validation: review what the agent wrote, approve or reject. Now, with tools like Claude Code, you might not look at the files at all. You describe what you want the end result to look like, and you evaluate from there.
That's a meaningful shift in how practitioners relate to their work. But as the hosts noted: this isn't a new conversation in data.
Faith put it directly: "Do you really need to spend 2,000 years optimizing this table if you're not actually getting to the answer the business cares about?" The outcome-vs.-process tension has always been there. AI just raises the stakes, and maybe forces teams to get better at navigating it.
Ric made a similar point through a different analogy: state-aware orchestration. The idea of saying "I want this table updated at this frequency, with these freshness guarantees" is structurally the same as saying "I want this outcome; figure out what needs to happen upstream." The framing is different. The principle isn't.
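The "freshness guarantees" framing isn't hypothetical; dbt already supports declaring it as configuration. A minimal sketch of a source freshness block (the source, table, and column names here are placeholders):

```yaml
sources:
  - name: payments
    loaded_at_field: _loaded_at
    freshness:
      warn_after: {count: 6, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: transactions
```

You declare the outcome you want (data no staler than a day, warnings after six hours) and let tooling figure out whether upstream loads are meeting it, which is structurally the same move as handing an agent an outcome instead of a task list.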
On hero mode, accountability, and bringing people along
One of the more honest stretches of the conversation: just because you can build something start to finish with AI assistance doesn't mean you should.
The concern isn't capability. It's isolation. If one person on a team can move twice as fast as everyone else but isn't documenting, sharing, or collaborating, the team doesn't go faster. It goes uneven. Ric's analogy was a boat with mismatched rowers. The strong rower doesn't win races alone.
Faith added an accountability angle that felt important: when AI produces bad output, "ChatGPT did that" isn't an acceptable response. You prompted it. You shipped it. You own it. That's true in training content, in data models, in dashboards, in anything that lands in front of a stakeholder who trusted you.
The flip side is worth naming too: AI makes it easier to create nonsense at scale. Overly complex queries, undocumented logic, architectures nobody can explain. The principle Paige has internalized is asking, at every step, what are you actually trying to accomplish? That question doesn't get less important when AI is in the loop. It gets more important.
Governance, context, and the onboarding problem
Something Sam is thinking about a lot right now: context management for agents isn't that different from onboarding a new teammate.
When someone joins your data team, you spend time helping them understand how you define things, where the data lives, which models are canonical and which ones are legacy noise. That knowledge transfer is slow and often informal.
Now imagine having to codify all of that for an agent, clearly enough that it can reason about your stack, your terminology, and your business logic without constant correction. That exercise forces you to ask: are these practices actually right? Do they still hold? Have we documented them anywhere, or do they just live in Paige's head?
The upshot: the work of building good AI context is also the work of building a better-documented, more navigable data environment for humans. Again, two birds.
Advice for staying level-headed
The episode closed with some grounded takeaways for data and tech professionals trying to navigate all of this without losing their minds.
Paige: Try stuff. Write it down. Track what's working and what isn't. The record you build over time becomes its own source of encouragement, proof that you're learning, even when it doesn't feel like it.
Sam: Experiment fast, because the switching cost has never been lower. And think about the constraints your role has historically worked around. Which ones are disappearing? Which ones are you glad to hand off? Start there.
Ric: Think in principles. Before adding AI to a workflow, ask what you're actually trying to get out of it and what guardrails you need to make sure it doesn't create more work than it saves. Move quickly, but bring people with you.
Faith: Use AI to enhance the experience, not to replace the thinking. The hard work of figuring out what learners actually need, what questions actually matter, what's just enough complexity and no more, that's still yours to do. AI can help you do the other parts faster so you have more time for it.
And maybe most importantly: good leadership, in AI or otherwise, raises the floor beneath everyone rather than enabling a few to touch the ceiling.