Exclusive | The Sudden Fall of OpenAI’s Most Hyped Product Since ChatGPT - WSJ

Y Combinator’s CEO says he ships 37,000 lines of AI code per day. A developer looked under the hood
We love a good old social media roast, and Y Combinator CEO Garry Tan found himself on the business end of a doozy on Wednesday. Tan, who in a past life worked as an engineering manager at Palantir and has more recently been a vocal proponent of AI acceleration, bragged that he and his AI coding agents had been deploying 37,000 lines of code per day across five separate projects. "Absolutely insane week for agentic engineering," Tan wrote in an X post on Monday, adding in a follow-up post that he was on a 72-day shipping streak. "Absolutely insane week for agentic engineering. 37K LOC per day across 5 projects. Still speeding up. pic.twitter.com/VR3utsduYx" Garry Tan (@garrytan), March 30, 2026. Two days later, a Polish game developer and senior software engineer who goes by the username Gregorei
More in Models

The 'Running Doom' of AI: Qwen3.5-27B on a 512MB Raspberry Pi Zero 2W
Yes, seriously: no API calls or word tricks. I was wondering what the absolute lower bound is if you want a truly offline AI. Just like people trying to run Doom on everything, why can't we run a large language model purely on a $15 device with only 512MB of memory? I know it's incredibly slow (we're talking just a few tokens per hour), but the point is, it runs! You can literally watch the CPU compute each matrix, and boom, you have local inference. Maybe next we can make an AA battery-powered or solar-powered LLM, or hook it up to a hand-crank generator. Total wasteland punk style. Note: this isn't just relying on simple mmap and swap memory to load the model. Everything is custom-designed and implemented to stream the weights directly from the SD card to memory, do the calculation, an
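The post doesn't share its implementation, but the core trick it describes (streaming weights from storage instead of holding the whole model in RAM) can be sketched in a few lines. The function below is a hypothetical illustration, not the author's code: it computes W @ x by reading the weight matrix from disk one row-chunk at a time, so peak memory is bounded by the chunk size rather than the full matrix.

```python
import numpy as np

def stream_matvec(weight_file, x, rows, cols, chunk_rows=4, dtype=np.float32):
    """Compute W @ x while streaming W from disk chunk by chunk.

    W is assumed to be stored row-major as raw dtype values (as written
    by ndarray.tofile). Peak memory is ~chunk_rows * cols floats,
    instead of rows * cols for loading the whole matrix.
    """
    out = np.empty(rows, dtype=dtype)
    itemsize = np.dtype(dtype).itemsize
    with open(weight_file, "rb") as f:
        for start in range(0, rows, chunk_rows):
            n = min(chunk_rows, rows - start)
            # Read just this chunk's bytes from storage...
            buf = f.read(n * cols * itemsize)
            chunk = np.frombuffer(buf, dtype=dtype).reshape(n, cols)
            # ...compute its slice of the output, then let it be freed.
            out[start:start + n] = chunk @ x
    return out
```

A real on-device implementation would add quantized weights, double-buffered SD-card reads, and per-layer scheduling, but the memory/throughput trade-off (tiny RAM footprint, storage bandwidth as the bottleneck) is the same one the post describes.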

My first impression after testing Gemma 4 against Qwen 3.5
I have been doing some early comparisons between Gemma 4 and Qwen 3.5, including a frontend generation task and a broader look at the benchmark picture. My overall impression is that Gemma 4 is good. It feels clearly improved, and the frontend results were actually solid. The model can produce attractive layouts, follow the structure of the prompt well, and deliver usable output. So this is definitely not a case of Gemma being bad. That said, I still came away feeling that Qwen 3.5 was better in these preliminary tests. In the frontend task, both models did well, but Qwen seemed to have a more consistent edge in overall quality, especially in polish, coherence, and execution of the design requirements. The prompt was not trivial. It asked for a landing page in English for an advanc
