10 Years of Acquired (with Michael Lewis)
Why has Acquired — seemingly against all odds — "worked"? It's a puzzling question: episodes are four hours long, they come out infrequently, and they usually don't have guests or video. Hardly the standard-issue playbook for podcasting success! And yet well over a million smart, curious, and exceedingly busy humans share their (your!) valuable time with us every month. Why? This is the exact paradox that has been rolling around in the head of Michael Lewis (yes, that Michael Lewis) since he found the show earlier this year. So we asked Michael to be our guest "interlocutor" and share what he thinks is going on here, while we share ten lessons we've stolen (graciously) from companies we've studied and brought into Acquired itself. He takes us through the entire Acquired journey: how we star…

Conversation starters
Is TurboQuant really a game changer?
I'm currently using the Qwen3.5 and Gemma 4 models, and I realized Gemma 4 requires 2x the RAM for the same context length. As far as I understand, what TurboQuant gives you is a KV cache quantized to about 4 bits with minimal loss. But Q8 doesn't lose much context quality either, so isn't the KV-cache RAM for Qwen3.5 at Q8 and Gemma 4 with TurboQuant about the same? And is TurboQuant even applicable to Qwen's cache architecture? As far as I know, they didn't test it on a Qwen3.5-style KV cache in their paper. Just curious; I only recently started learning about local LLMs.

submitted by /u/Interesting-Print366
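For intuition on the RAM question, here's a minimal back-of-the-envelope sketch in Python. The architectural numbers (layers, KV heads, head dim) are made-up placeholders, not the real Qwen3.5 or Gemma 4 configs, and "TurboQuant ≈ 4 bits per element" is taken from the post rather than verified against the paper:

```python
# Back-of-the-envelope KV-cache sizing. All model parameters below are
# hypothetical placeholders, not the real Qwen3.5 / Gemma 4 configs.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bits_per_elem: float) -> float:
    """Total KV-cache size: 2 tensors (K and V) per layer."""
    elems = 2 * n_layers * n_kv_heads * head_dim * ctx_len
    return elems * bits_per_elem / 8

GIB = 1024 ** 3
ctx = 32_768  # example context length

# Hypothetical "model A": 48 layers, 8 KV heads, head_dim 128, Q8 KV cache.
a = kv_cache_bytes(48, 8, 128, ctx, bits_per_elem=8)

# Hypothetical "model B": twice the per-token KV footprint (e.g. more
# layers), but quantized to ~4 bits as the post describes TurboQuant doing.
b = kv_cache_bytes(96, 8, 128, ctx, bits_per_elem=4)

print(f"model A @ Q8:    {a / GIB:.2f} GiB")
print(f"model B @ ~4bit: {b / GIB:.2f} GiB")  # 2x footprint, halved by 4-bit
```

On these invented numbers the two caches come out identical, which matches the poster's intuition: a 2x per-token KV footprint at 4 bits costs the same as a 1x footprint at 8 bits. Whether TurboQuant actually nets out to ~4 bits per element once quantization scales and zero-points are stored is a detail only the paper or an implementation can confirm.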

Help running Qwen3-Coder-Next TurboQuant (TQ3) model
I found a TQ3-quantized version of Qwen3-Coder-Next here: https://huggingface.co/edwardyoon79/Qwen3-Coder-Next-TQ3_0

According to the page, this model requires a compatible inference engine that supports TurboQuant. It also provides a llama-server command, but it doesn't clearly specify which version or fork of llama.cpp should be used (or maybe I missed it).

I've tried the following llama.cpp forks that claim to support TQ3, but none of them worked for me:

- https://github.com/TheTom/llama-cpp-turboquant
- https://github.com/turbo-tan/llama.cpp-tq3
- https://github.com/drdotdot/llama.cpp-turbo3-tq3

If anyone has successfully run this model, I'd really appreciate it if you could share how you did it.

submitted by /u/UnluckyTeam3478
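If one of those forks does build, a generic sanity check is to point its llama-server binary at the downloaded GGUF using standard llama.cpp flags. The sketch below is speculative: the GGUF filename inside the repo is a guess (check the repo's file listing for the real one), and whether any given fork accepts a TQ3 file at all is exactly the open question in the post:

```python
# Sketch: fetch the GGUF, then launch a fork's llama-server against it.
# Assumptions: the fork builds a standard `llama-server` binary, and the
# filename below is hypothetical -- check the repo for the real one.
import subprocess

from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="edwardyoon79/Qwen3-Coder-Next-TQ3_0",
    filename="Qwen3-Coder-Next-TQ3_0.gguf",  # hypothetical filename
)

# Standard llama.cpp server flags; a fork with real TQ3 support should load
# the file, while an engine without it will typically fail at tensor load.
subprocess.run([
    "./llama-server",           # path to the fork's freshly built binary
    "-m", model_path,
    "--ctx-size", "8192",
    "--n-gpu-layers", "99",     # offload as many layers as fit on the GPU
    "--port", "8080",
], check=True)
```

If the server dies during model load complaining about an unknown tensor type, that fork's TQ3 support likely isn't wired up for this file, which at least narrows down which of the three is worth debugging further.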
More in Market News

Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption.
The AI landscape is experiencing unprecedented growth and transformation. This post delves into the key developments shaping the future of artificial intelligence, from massive industry investments to critical safety considerations and integration into core development processes.

Key Areas Explored:

- Record-Breaking Investments: Major tech firms are committing billions to AI infrastructure, signaling a significant acceleration in the field.
- AI in Software Development: We examine how companies are leveraging AI for code generation and the implications for engineering workflows.
- Safety and Responsibility: The increasing focus on ethical AI development and protecting vulnerable users, particularly minors.
- Market Dynamics: How AI is influencing stock performance, cloud computing strategies, and …


