What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what is actually required for AGI: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect.

Resources:
Follow Vishal Misra on X: https://x.com/vishalmisra
Follow Martin Casado on X: https://x.com/martin_casado
Find a16z on YouTube, X, and LinkedIn; listen to the a16z Show on Spotify and Apple Podcasts.
Follow our host: https://twitter.com/eriktorenberg …
Read on a16z Podcast → https://a16z.simplecast.com/episodes/whats-missing-between-llms-and-agi-vishal-misra-martin-casado-YRValfqT

Ask HN: Who is hiring? (April 2026)
Please state the location and include REMOTE for remote work, REMOTE (US) or similar if the country is restricted, and ONSITE when remote work is not an option.

Please only post if you personally are part of the hiring company—no recruiting firms or job boards. One post per company. If it isn't a household name, explain what your company does. Please only post if you are actively filling a position and are committed to replying to applicants.

Commenters: please don't reply to job posts to complain about something. It's off topic here.
Readers: please only email if you are personally interested in the job.
Searchers: try http://nchelluri.github.io/hnjobs/ , https://hnjobs.emilburzo.com , or this (unofficial) Chrome extension: https://chromewebstore.google.com/detail/hn-hiring-pro/mpfal... .
Unregulated chatbots are putting lives at risk | Letters
Readers respond to an article about people whose lives were wrecked by delusional thinking after they used AI tools.

Your coverage of AI-associated delusions exposes a gap that training-level guardrails cannot close (Marriage over, €100,000 down the drain: the AI users whose lives were wrecked by delusion, 26 March: https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion). As someone who has worked in health systems across fragile and low-income contexts, I find it striking that AI companies have failed to adopt a safeguard that even the most under-resourced clinic in the world already uses: screening patients before exposing them to risk.

The Patient Health Questionnaire (https://www.mdcalc.com/calc/1725/phq9-patient-health-questionnaire9) …

Building Production RAG Systems in .NET 10: The Complete Guide to Embeddings
The Hallucination Problem

Your company spent $50K building an internal chatbot. It tells customers "yes, we ship internationally" when you only ship to the US. Your support team is drowning in corrections.

Sound familiar?

This happens because traditional LLMs generate responses from training-data patterns, not your actual data. They hallucinate. They confidently state false information.

RAG (Retrieval-Augmented Generation) fixes this. Instead of hoping the LLM knows about your data, you explicitly feed it your documents first.

What Are Embeddings?

Think of embeddings as a way to convert text into mathematics.

The Simple Version …
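The excerpt's "text into mathematics" idea can be made concrete with a minimal sketch. The 4-dimensional vectors below are made-up illustrations (real embedding models emit hundreds to thousands of dimensions, and the article itself targets .NET); the point is only that cosine similarity over such vectors is how a RAG system decides which document to retrieve for a query.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, hand-picked so that the two shipping-related
# sentences point in a similar direction and the unrelated one does not.
emb = {
    "Do you ship internationally?": [0.9, 0.1, 0.3, 0.0],
    "We only ship within the US.":  [0.8, 0.2, 0.4, 0.1],
    "Our office dog is named Rex.": [0.0, 0.9, 0.0, 0.8],
}

query = "Do you ship internationally?"
for doc, vec in emb.items():
    if doc != query:
        print(f"{doc!r}: {cosine_similarity(emb[query], vec):.3f}")
```

Ranking documents by this score and feeding the top hits to the LLM as context is the retrieval step the article is building toward; the shipping-policy sentence scores far higher than the unrelated one.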

Raspberry Pi raises prices by $11.25 to $150 citing memory prices, after hikes in December and February, and unveils a 3GB Raspberry Pi 4 model for $83.75 (Stevie Bonifield/The Verge)
Stevie Bonifield / The Verge : Raspberry Pi raises prices by $11.25 to $150 citing memory prices, after hikes in December and February, and unveils a 3GB Raspberry Pi 4 model for $83.75 — Prices are going up by over $100 in some cases thanks to those AI fools. … As of today, the price of the 16GB version …

I built a Mac app after getting surprised by my Claude bill
A few months back I got my monthly API bill and felt sick.

I had been vibe-coding pretty hard with Claude, and I knew it wasn't going to be zero. But the number was way higher than I expected. Like, embarrassingly higher. I had been running Claude Code sessions back to back, long context windows, lots of tool calls, and I had no idea how fast it was adding up.

The worst part? I couldn't have known. There's no live feedback. You just work, and then you find out later.

So I did what most developers do when something annoys them enough: I built a tool to fix it.

What I made

TokenBar is a macOS menu bar app that tracks your AI token usage in real time. It sits in your menu bar the whole time you're working and shows you your spend as it happens, not af…
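The core arithmetic behind a running-spend tracker like this is simple: multiply streamed token counts by per-token prices and keep a running total. The sketch below is hypothetical, not TokenBar's actual implementation, and the prices are placeholder values, not Anthropic's real rates.

```python
# HYPOTHETICAL per-token prices for illustration only.
PRICE_PER_INPUT_TOKEN = 3.00 / 1_000_000    # e.g. $3 per 1M input tokens
PRICE_PER_OUTPUT_TOKEN = 15.00 / 1_000_000  # e.g. $15 per 1M output tokens

def session_cost(events):
    """Sum cost over (input_tokens, output_tokens) pairs as they stream in."""
    total = 0.0
    for input_tokens, output_tokens in events:
        total += input_tokens * PRICE_PER_INPUT_TOKEN
        total += output_tokens * PRICE_PER_OUTPUT_TOKEN
    return total

# Three hypothetical tool-call rounds in one long session. Long context
# windows mean the input side dominates, which is how bills creep up.
events = [(120_000, 2_000), (150_000, 4_000), (180_000, 3_500)]
print(f"running spend: ${session_cost(events):.2f}")
```

Even with made-up numbers, the shape of the problem is visible: re-sending a long context on every tool call multiplies the input-token term, which is exactly the "no live feedback" blind spot the post describes.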
Your AI Just Wrote 500 Lines of Code. Can You Prove Any of It Works?
Image disclaimer: the banner was conceptualized by the author and rendered using Gemini 3 Flash Image.

A framework for figuring out when AI-generated code can be formally verified — and when you're kidding yourself.

I've been thinking about a problem that's been bugging me for a while. We're all using AI to write code now. Copilot, Claude, ChatGPT, internal tools — whatever your flavor. And the code is… surprisingly good? It passes tests, it looks reasonable, it usually does what you asked for. But "usually" is doing a lot of heavy lifting in that sentence.

Here's the thing nobody talks about at the stand-up: testing can show you bugs exist. It cannot prove they don't. That's not a philosophical position. It's a mathematical fact, courtesy of Dijkstra, circa 1972. And it matters a lot mor…
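Dijkstra's point ("testing shows the presence of bugs, never their absence") can be illustrated with a toy sketch. The function and tests below are hypothetical, not from the article: a plausible-looking test suite passes while a bug hides in an input class the tests never probe.

```python
def is_leap_year(year):
    # Buggy: forgets that years divisible by 100 are NOT leap years
    # unless they are also divisible by 400.
    return year % 4 == 0

# A passing test suite that never exercises the century edge case:
assert is_leap_year(2024)        # passes
assert not is_leap_year(2023)    # passes
assert is_leap_year(2000)        # passes (right answer, wrong reason)
print("all tests pass")

# Yet the bug is real: 1900 was not a leap year, but
# is_leap_year(1900) returns True.
```

Green tests establish only that no tested input misbehaves; proving the absence of the 1900-style bug requires reasoning about every input, which is what formal verification is for.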