Exclusive | The Sudden Fall of OpenAI’s Most Hyped Product Since ChatGPT - WSJ

Can artificial intelligence be governed—or will it govern us?
On July 16, 1945, when the world’s first nuclear explosion shook the plains of New Mexico, J. Robert Oppenheimer, who led the project, quoted the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” And indeed, he had. The world was never truly the same after nuclear power became a reality. Today, however, we have lost that reverence for the power of technology. Instead of proceeding deliberately and with caution, we rush ahead. In his Techno-Optimist Manifesto, tech investor Marc Andreessen implied that AI regulation was a form of murder. Defense Secretary Pete Hegseth punished Anthropic when it tried to impose limits on its own technology. Clearly, we’ve been here before and shown that we can meet the challenge. We contained the nuclear threat and put useful limits on the

Frontend Engineering at Palantir: Building a Backend-less Cross-Application API
About this Series Frontend engineering at Palantir goes far beyond building standard web apps. Our engineers design interfaces for mission-critical decision-making, build operational applications that translate insight to action, and create systems that handle massive datasets — thinking not just about what the user needs, but what they need when the network is unreliable, the stakes are high, and the margin for error is zero. This series pulls back the curtain on what that work really looks like: the technical problems we solve, the impact we have, and the approaches we take. Whether you’re just curious or exploring opportunities to join us, these posts offer an authentic look at life on our Frontend teams. In this blog post, a frontend engineer based in CA shares an overview of several f
More in Models

OpenAI doesn’t expect to be profitable until at least 2030 as AI costs surge
As OpenAI and Anthropic move closer to their planned initial public offerings, more details about the finances of both artificial intelligence giants are starting to emerge. It was no secret these companies were bleeding cash, but seeing the actual numbers is still striking. Neither company has made its filings official. Both are in the process of recruiting investors and have recently closed funding rounds, which meant opening their books. The Wall Street Journal got a peek. According to internal estimates, OpenAI will not turn a profit until 2030, while Anthropic expects slight positive results this year, followed by another year of losses before staying in the green in 2028 and 2029. Spending on AI training will be staggering. In 2028, OpenAI projects spending $121 billion on computing

Qwen 4B/9B and Gemma E4B/26B A4B for multilingual entity extraction, summarisation and classification?
Hi, LLM newbie here. Has anyone benchmarked these smaller models on multilingual entity extraction, summarisation and classification? I'm particularly interested in your opinion when it comes to finetuning them to reach higher success rates and reliability. What is your general feeling of the performance and capabilities? I saw plenty of posts here, but rarely ones that mention multilingual entity extraction, summarisation or classification. submitted by /u/Creative-Fuel-2222
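If no published benchmark covers your exact languages and label set, it is cheap to score the models yourself. A minimal sketch of per-language entity-extraction scoring using only the standard library; `extract` is a hypothetical stand-in to be replaced with an actual call into a Qwen or Gemma model, and entities are represented as (text, label) pairs for simplicity:

```python
from collections import defaultdict

def entity_f1(gold: set, pred: set) -> float:
    """Exact-match F1 between gold and predicted (text, label) entity pairs."""
    tp = len(gold & pred)
    fp = len(pred - gold)
    fn = len(gold - pred)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def score_by_language(examples, extract):
    """Mean per-example F1, grouped by language.

    examples: list of (lang, text, gold_entities) triples.
    extract:  the model under test, text -> list of (text, label) pairs.
    """
    scores = defaultdict(list)
    for lang, text, gold in examples:
        scores[lang].append(entity_f1(set(gold), set(extract(text))))
    return {lang: sum(v) / len(v) for lang, v in scores.items()}
```

Running this on the same held-out set before and after finetuning, and comparing the per-language dicts, gives exactly the reliability signal the post asks about.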

What's the most optimized engine to run on an H100?
Hey guys, I was wondering what is the best/fastest engine to run LLMs on a single H100? I'm guessing vLLM is great but not the fastest. Thank you in advance. I'm running a Llama 3.1 8B model. submitted by /u/Obamos75
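Whatever the candidates (vLLM, SGLang and TensorRT-LLM are the usual ones), the fair answer is to measure tokens per second on your own prompts and hardware. A minimal, engine-agnostic harness as a sketch; `generate` is a stand-in you would replace with the engine's batch API, for example a thin wrapper around vLLM's `LLM.generate`:

```python
import time

def throughput(generate, prompts):
    """Run one batch through `generate` and return output tokens per second.

    `generate` is a stand-in for the engine under test; it takes a list of
    prompts and returns a list of completions. Token counting here is
    whitespace-based for simplicity; use the engine's tokenizer for exact
    numbers.
    """
    start = time.perf_counter()
    outputs = generate(prompts)  # one call, so the engine can batch internally
    elapsed = time.perf_counter() - start
    tokens = sum(len(o.split()) for o in outputs)
    return tokens / max(elapsed, 1e-9)

# Dummy engine showing the expected shape of `generate`.
def echo_engine(prompts):
    return [p + " some output tokens" for p in prompts]
```

Passing a single large batch rather than looping one prompt at a time matters: continuous-batching engines like vLLM only show their advantage when they get to schedule many requests at once.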

Good local models that can work locally on my system with tools support
So I have a gaming laptop, RTX 4070 (12 GB VRAM) + 32 GB RAM. I used llmfit to identify which models I can use on my rig, and almost all the runnable ones seem dumb when you ask them to read a file and execute something afterwards: some do nothing, some search the web, some understand that they need to read a file but can't seem to go beyond that. The ones suggested by Claude or Gemini are largely the same ones I am trying. I am using Ollama + Claude Code. I tried: qwen2.5-coder:7b, qwen3.5:9b, deepseek-r1:8b-0528-qwen3-q4_K_M, unsloth/qwen3-30B-A3B:Q4_K_M. For the last one, I need to disable thinking in Claude for it to actually start working, and it still fails! My plan is to plan using a frontier model, then execute said plan with a local model (not major projects or code base, just weekend ideat
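Tool use depends as much on the harness as on the model: the model only has to emit a well-formed tool call, and the surrounding code does the actual reading and executing. A minimal sketch of the dispatch side, assuming the JSON-Schema tool format that Ollama and OpenAI-style APIs share, and a model-emitted call of the shape `{"function": {"name": ..., "arguments": {...}}}` (the `read_file` tool here is a hypothetical example):

```python
# Tool schema advertised to the model (JSON-Schema style).
READ_FILE_TOOL = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a text file and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}

def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

TOOLS = {"read_file": read_file}

def dispatch(tool_call: dict) -> str:
    """Execute one model-emitted tool call and return its result as text."""
    fn = tool_call["function"]
    name, args = fn["name"], fn["arguments"]
    if name not in TOOLS:
        return f"error: unknown tool {name}"
    try:
        return str(TOOLS[name](**args))
    except Exception as e:  # feed errors back so the model can retry
        return f"error: {e}"
```

The result string goes back to the model as a tool-role message for the next turn. Small local models often manage the first hop (emitting the call) but fail the second (using the result), which matches the behaviour the post describes.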
