I gave Claude Code our entire codebase. Our customers noticed. | Al Chen (Galileo)
Al Chen (Field Engineer at Galileo) shows how he uses AI to query his entire codebase and deliver precise, real-time answers to enterprise customers without relying on docs or engineering support.
Al Chen is a field engineer at Galileo, an observability platform for AI applications, where he works on the front lines with enterprise customers who ask highly technical questions. Despite never having held an engineering role, Al has built a system using Claude Code that queries Galileo’s 15 separate repositories, combines that context with Confluence documentation and customer-specific quirks, and delivers hyper-personalized answers that would otherwise require constant engineering support.
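One common way to give a coding agent visibility across many repositories at once, matching the multi-repo setup Al describes, is a multi-root VS Code workspace. The sketch below is illustrative only; the folder names are hypothetical placeholders, not Galileo's actual repositories:

```json
{
  "folders": [
    { "path": "api-server" },
    { "path": "python-sdk" },
    { "path": "deployment-helm" },
    { "path": "internal-docs" }
  ]
}
```

Opening a workspace like this lets Claude Code (or any editor-integrated agent) search and reason across all the clones as a single project.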
What you’ll learn:
- How to use Claude Code to query multiple repositories simultaneously for customer support
- Why code is often a better source of truth than documentation
- How to combine repository context with Confluence and Slack using MCPs
- The “customer quirks” system that creates hyper-personalized deployment guides
- How to build virtuous loops that turn single customer questions into scalable knowledge
- Why information organization matters less in the AI era
- A simple 16-line script (written by Claude Code) that pulls the latest main branch across all your repositories to keep your context current
- How to reduce engineering interruptions to near-zero by empowering customer-facing teams to query the codebase directly
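The pull script mentioned above could take many forms; here is a minimal sketch of what a ~16-line version might look like, assuming all clones sit under a single parent directory (`REPOS_DIR` is a placeholder, not Al's actual path):

```shell
#!/usr/bin/env bash
# Pull the latest main branch in every git clone under a parent directory,
# so the local multi-repo context stays current for Claude Code queries.
REPOS_DIR="${REPOS_DIR:-$HOME/repos}"   # placeholder location -- adjust to taste

updated=0
for repo in "$REPOS_DIR"/*/; do
  [ -d "$repo/.git" ] || continue        # skip anything that isn't a git repo
  echo "Updating $(basename "$repo")..."
  git -C "$repo" checkout main --quiet   # ensure we're on main
  git -C "$repo" pull --ff-only --quiet  # fast-forward only; never create merge commits
  updated=$((updated + 1))
done
echo "Pulled latest main in $updated repositories."
```

Running this before a support session means answers come from today’s code, not last month’s docs.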
Brought to you by:
• Orkes—The enterprise platform for reliable applications and agentic workflows
• Tines—Start building intelligent workflows today
Timestamps:
(00:00) Introduction to Al Chen
(02:50) The problem: documentation wasn’t enough
(04:23) Pulling 15 repos into VS Code
(06:03) How Claude Code queries the entire codebase
(08:00) Why current code beats documentation
(08:31) The pull script that keeps everything updated
(09:54) Opening projects at the multi-repo level
(11:40) Live demo: answering deployment questions
(13:25) The customer quirks system
(15:00) Living in chaos: why organization matters less now
(17:03) Competing on customer experience, not just product
(18:20) Should customers be able to query the code directly?
(20:05) Where humans still add value
(25:46) Using AI for reactive Slack support
(29:16) The “and then” workflow discovery
(32:07) Scaling processes across the team
(34:07) Lightning round and final thoughts
Tools referenced:
• Claude Code: https://claude.ai/code
• VS Code: https://code.visualstudio.com/
• Pylon: https://usepylon.com/
• Confluence: https://www.atlassian.com/software/confluence
• Slack: https://slack.com/
• Kubernetes: https://kubernetes.io/
• Stack Overflow: https://stackoverflow.com/
• Intercom: https://www.intercom.com/
Where to find Al Chen:
• LinkedIn: https://www.linkedin.com/in/thealchen/
• Company: https://www.rungalileo.io
Where to find Claire Vo:
• ChatPRD: https://www.chatprd.ai/
• Website: https://clairevo.com/
• LinkedIn: https://www.linkedin.com/in/clairevo/
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].
lennysnewsletter.com
https://www.lennysnewsletter.com/p/i-gave-claude-code-our-entire-codebase
