
CL4SE: A Context Learning Benchmark For Software Engineering Tasks

arXiv cs.SE · by Haichuan Hu, Quanjun Zhang, Ye Shang, Guoqing Xie, Chunrong Fang, Zhenyu Chen, Liang Xiao · April 1, 2026



Abstract: Context engineering has emerged as a pivotal paradigm for unlocking the potential of Large Language Models (LLMs) in Software Engineering (SE) tasks, enabling performance gains at test time without model fine-tuning. Despite its success, existing research lacks a systematic taxonomy of SE-specific context types and a dedicated benchmark to quantify the heterogeneous effects of different contexts across core SE workflows. To address this gap, we propose CL4SE (Context Learning for Software Engineering), a comprehensive benchmark featuring a fine-grained taxonomy of four SE-oriented context types (interpretable examples, project-specific context, procedural decision-making context, and positive & negative context), each mapped to a representative task (code generation, code summarization, code review, and patch correctness assessment). We construct high-quality datasets comprising over 13,000 samples from more than 30 open-source projects and evaluate five mainstream LLMs across nine metrics. Extensive experiments demonstrate that context learning yields an average performance improvement of 24.7% across all tasks. Specifically, procedural context boosts code review performance by up to 33% (Qwen3-Max), mixed positive-negative context improves patch assessment by 30% (DeepSeek-V3), project-specific context increases code summarization BLEU by 14.78% (GPT-Oss-120B), and interpretable examples enhance code generation PASS@1 by 5.72% (DeepSeek-V3). CL4SE establishes the first standardized evaluation framework for SE context learning, provides actionable empirical insights into task-specific context design, and releases a large-scale dataset to facilitate reproducible research in this domain.
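The abstract's core design — four context types, each paired with a representative task, evaluated purely at test time through prompt construction — can be illustrated with a minimal sketch. This is not the CL4SE implementation; the function and mapping names below are assumptions for illustration only. It shows the general shape of such a harness: assemble a prompt from a task input plus one context type, then score outcomes with a metric such as PASS@1 (here in its simplest one-sample-per-problem form).

```python
# Hypothetical sketch of a context-learning evaluation harness in the
# spirit of CL4SE. Names (CONTEXT_TASK_MAP, build_prompt, pass_at_1)
# are illustrative, not from the paper's released code.

# The four context types from the taxonomy, mapped to their
# representative tasks as described in the abstract.
CONTEXT_TASK_MAP = {
    "interpretable_examples": "code_generation",
    "project_specific": "code_summarization",
    "procedural_decision_making": "code_review",
    "positive_negative": "patch_correctness_assessment",
}


def build_prompt(task_input: str, context_type: str, context_items: list) -> str:
    """Assemble a test-time prompt: the chosen context block followed by
    the task input. No fine-tuning is involved; any performance gain
    comes from the prompt content alone."""
    header = f"[{context_type} context]"
    block = "\n".join(f"- {item}" for item in context_items)
    return f"{header}\n{block}\n\n[task]\n{task_input}"


def pass_at_1(results: list) -> float:
    """PASS@1 with one sample per problem: the fraction of problems
    whose single generated solution passes all tests."""
    if not results:
        return 0.0
    return sum(results) / len(results)
```

For example, `pass_at_1([True, False, True, True])` yields `0.75`; a benchmark run would compare this score for prompts built with and without each context type to isolate its contribution per task.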

Comments: 23 pages, 4 figures

Subjects: Software Engineering (cs.SE)

Cite as: arXiv:2602.23047 [cs.SE]

(or arXiv:2602.23047v2 [cs.SE] for this version)

https://doi.org/10.48550/arXiv.2602.23047

arXiv-issued DOI via DataCite

Submission history

From: Shang Ye
[v1] Thu, 26 Feb 2026 14:28:57 UTC (235 KB)
[v2] Tue, 31 Mar 2026 14:31:22 UTC (249 KB)
