AI News Hub by Eigenvector

Think Anywhere in Code Generation

arXiv cs.SE · Xue Jiang, Tianyu Zhang, Ge Li, Mengyang Liu, Taozhi Chen, Zhenhua Xu, Binhua Li, Wenpin Jiao, Zhi Jin, Yongbin Li, Yihong Dong · April 1, 2026 · 1 min read


Authors: Xue Jiang, Tianyu Zhang, Ge Li, Mengyang Liu, Taozhi Chen, Zhenhua Xu, Binhua Li, Wenpin Jiao, Zhi Jin, Yongbin Li, Yihong Dong


Abstract: Recent advances in reasoning Large Language Models (LLMs) have primarily relied on upfront thinking, where reasoning occurs before the final answer. However, this approach suffers from critical limitations in code generation, where upfront thinking is often insufficient because a problem's full complexity only reveals itself during code implementation. Moreover, it cannot adaptively allocate reasoning effort throughout the code generation process, where difficulty varies significantly. In this paper, we propose Think-Anywhere, a novel reasoning mechanism that enables LLMs to invoke thinking on demand at any token position during code generation. We achieve Think-Anywhere by first teaching LLMs to imitate the reasoning patterns through cold-start training, then leveraging outcome-based RL rewards to drive the model's autonomous exploration of when and where to invoke reasoning. Extensive experiments on four mainstream code generation benchmarks (i.e., LeetCode, LiveCodeBench, HumanEval, and MBPP) show that Think-Anywhere achieves state-of-the-art performance over both existing reasoning methods and recent post-training approaches, while demonstrating consistent generalization across diverse LLMs. Our analysis further reveals that Think-Anywhere enables the model to adaptively invoke reasoning at high-entropy positions, providing enhanced interpretability.
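The abstract describes two ideas that can be sketched concretely: decoding proceeds token by token, and a reasoning segment is spliced into the output stream mid-generation rather than only before the answer, with the analysis suggesting the model tends to do this at high-entropy positions. The paper does not publish its decoding loop here, so the following is a minimal, hypothetical Python sketch: the `steps` input (a list of `(token, next_token_probs)` pairs), the `<think>`/`</think>` markers, and the entropy threshold are all illustrative assumptions, with an entropy trigger standing in for the model's learned decision of when to think.

```python
import math

THINK_OPEN, THINK_CLOSE = "<think>", "</think>"

def entropy(probs):
    # Shannon entropy (nats) of a next-token probability distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def generate_with_think_anywhere(steps, threshold=1.0):
    """Decode token by token; when the next-token distribution is
    high-entropy (a proxy for the model choosing to invoke reasoning),
    splice an on-demand thinking segment into the stream before
    committing the token, instead of reasoning only upfront."""
    out = []
    for token, probs in steps:
        if entropy(probs) > threshold:
            # Mid-generation reasoning segment, emitted at this position.
            out += [THINK_OPEN, f"reason-about:{token}", THINK_CLOSE]
        out.append(token)
    return out
```

For example, a confident step like `("def", [0.99, 0.01])` passes through untouched, while a uniform four-way step (entropy ≈ 1.39 nats) triggers an inline thinking segment before the token is emitted. In the actual method this decision is learned via cold-start imitation followed by outcome-based RL, not hand-set by a threshold.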

Subjects: Software Engineering (cs.SE); Machine Learning (cs.LG)

Cite as: arXiv:2603.29957 [cs.SE]

(or arXiv:2603.29957v2 [cs.SE] for this version)

https://doi.org/10.48550/arXiv.2603.29957

arXiv-issued DOI via DataCite

Submission history

From: Xue Jiang [view email]
[v1] Tue, 31 Mar 2026 16:24:03 UTC (169 KB)
[v2] Thu, 2 Apr 2026 11:40:46 UTC (170 KB)
