BREAKING: LLM “reasoning” continues to be deeply flawed - Marcus on AI | Substack
Read on Google News: https://news.google.com/rss/articles/CBMidEFVX3lxTFBvRjRDTnNHTFB6WHRkU3o5VzlKUER6ZGFibXB6VmlfanBtLUJYYnB5QjYtZXNaZTJQMnNYOFA0dkVraC1rMXMtT3dRZUo4Z2FJdktwZEVQY3k2RzVVT3pZc2hqQU0ya2J5NEx3MDVuOFhfMExV?oc=5 (Marcus on AI | Substack)
Could not retrieve the full article text.

More about reasoning
Found how to toggle reasoning mode for Gemma in LM-Studio!
I’ve figured out how to trigger the reasoning process by adding "/think" to the system prompt. Heads up: the thought tags have an unusual pipe ( | ) placement, which is why many LLM frontends fail to parse the reasoning section correctly. So the Start String is " thought" and the End String is " ". Here is the Jinja template: https://pastebin.com/MGmD8UiC. Tested and working with the 26B and 31B versions.

submitted by /u/Adventurous-Paper566
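A minimal parsing sketch (not from the post), assuming your frontend lets you post-process raw completions: it splits a response into a reasoning part and an answer part using configurable start/end strings. The delimiter values below are placeholders, not the actual tags the Gemma template emits.

```python
# Sketch: separate a thought block from the final answer using configurable
# delimiters. Replace THOUGHT_START/THOUGHT_END with the exact Start/End
# strings from your chat template; the values here are placeholders.
THOUGHT_START = "<|start_of_thought|>"  # placeholder tag
THOUGHT_END = "<|end_of_thought|>"      # placeholder tag

def split_reasoning(text: str, start: str = THOUGHT_START, end: str = THOUGHT_END):
    """Return (reasoning, answer); reasoning is None when no thought block is found."""
    i = text.find(start)
    if i == -1:
        return None, text
    j = text.find(end, i + len(start))
    if j == -1:  # unterminated thought block: treat the rest as reasoning
        return text[i + len(start):].strip(), ""
    reasoning = text[i + len(start):j].strip()
    answer = (text[:i] + text[j + len(end):]).strip()
    return reasoning, answer

if __name__ == "__main__":
    demo = f"{THOUGHT_START}Compare both options first.{THOUGHT_END}Option B wins."
    print(split_reasoning(demo))  # ('Compare both options first.', 'Option B wins.')
```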

Considerations for growing the pie
Recently some friends and I were comparing "growing the pie" interventions with interventions that increase our friends' share of the pie, and at first we mostly missed some general considerations against the latter type.

1. Decision-theoretic considerations

The world is full of people with different values working towards their own ends; each of them can choose to use their resources to increase the total size of the pie or to increase their share of it. All of them would significantly prefer a world in which resources were used to increase the size of the pie, and this leads to a number of compelling justifications for each individual to cooperate. [...] By increasing the size of the pie we create a world which is better for people on average, and from behind the veil of ignorance we [...]

Tutorial - How to Toggle On/Off the Thinking Mode Directly in LM Studio for Any Thinking Model
LM Studio is an exceptional tool for running local LLMs, but it has a specific quirk: the "Thinking" (reasoning) toggle often only appears for models downloaded directly through the LM Studio interface. If you use external GGUFs from providers like Unsloth or Bartowski, this capability is frequently hidden. Here is how to manually activate the Thinking switch for any reasoning model.

### Method 1: The Native Way (Easiest)

The simplest way to ensure the toggle appears is to download models directly within LM Studio. Before downloading, verify that the **Thinking Icon** (the green brain symbol) is present next to the model's name. If this icon is visible, the toggle will work automatically in your chat window.

### Method 2: The Manual Workaround (For External Models)

If you prefer to manage [...]
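Since the manual workaround is cut off above, here is only a small hedged heuristic (not the tutorial's method): before touching any settings, check whether an external GGUF's embedded chat template mentions a thinking/reasoning block at all. GGUF stores the chat template as plain text in the file's metadata near the start of the file, so a byte scan of the leading part is usually enough; the marker strings below are assumptions.

```python
# Heuristic sketch: look for thinking-related markers in a GGUF's metadata,
# which sits near the start of the file and includes the chat template text.
# The marker list is a guess; adjust it for the model family you are using.
THINKING_MARKERS = [b"<think>", b"thinking", b"thought", b"reasoning"]

def gguf_mentions_thinking(path: str, scan_bytes: int = 8 * 1024 * 1024) -> bool:
    """Return True if any marker appears in the first `scan_bytes` of the file."""
    with open(path, "rb") as f:
        head = f.read(scan_bytes)
    return any(marker in head for marker in THINKING_MARKERS)

# Example (hypothetical path):
# print(gguf_mentions_thinking("models/gemma-reasoning-26b-Q4_K_M.gguf"))
```

If this returns False, the template most likely never emits a thought block, and no amount of toggling in the UI will produce one.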
More in Models

Is Turboquant really a game changer?
I am currently using the qwen3.5 and Gemma 4 models, and I realized Gemma 4 requires 2x the RAM for the same context length. As far as I understand, what Turboquant gives you is quantizing the KV cache to roughly 4 bits while minimizing the losses. But Q8 still doesn't lose that much, so isn't the KV-cache RAM for qwen3.5 at Q8 and Gemma 4 with Turboquant about the same? Is Turboquant also applicable to Qwen's cache architecture? As far as I know, they didn't test it on the qwen3.5-style KV cache in their paper. Just curious, I started learning local LLM recently.

submitted by /u/Interesting-Print366
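A back-of-the-envelope sketch of the poster's intuition (all shapes below are made-up placeholders, not actual qwen3.5 or Gemma 4 configs): KV-cache memory scales linearly with bit width, so a 4-bit cache on a model whose cache is twice as large per token lands at roughly the same footprint as an 8-bit cache on the smaller model.

```python
# Rough KV-cache sizing: layers * kv_heads * head_dim * context * 2 (K and V)
# * bytes per element. Model shapes below are hypothetical placeholders.
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   context: int, bits: int) -> int:
    """Approximate KV-cache size in bytes for a single sequence."""
    return layers * kv_heads * head_dim * context * 2 * bits // 8

# "Model A" has half the per-token cache of "model B" (placeholder shapes).
a_q8 = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, context=32_768, bits=8)
b_q4 = kv_cache_bytes(layers=64, kv_heads=8, head_dim=128, context=32_768, bits=4)

print(f"model A, 8-bit KV cache: {a_q8 / 2**30:.2f} GiB")  # 2.00 GiB
print(f"model B, 4-bit KV cache: {b_q4 / 2**30:.2f} GiB")  # 2.00 GiB
```

Under these assumptions the two footprints do come out equal; whether Turboquant's 4-bit scheme actually carries over to Qwen-style caches is the separate question the poster raises.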



