
LLM-Meta-SR: In-Context Learning for Evolving Selection Operators in Symbolic Regression

arXiv cs.NE · [Submitted on 24 May 2025 (v1), last revised 31 Mar 2026 (this version, v3)]


Abstract: Large language models (LLMs) have revolutionized algorithm development, yet their application in symbolic regression, where algorithms automatically discover symbolic expressions from data, remains limited. In this paper, we propose a meta-learning framework that enables LLMs to automatically design selection operators for evolutionary symbolic regression algorithms. We first identify two key limitations in existing LLM-based algorithm evolution techniques: lack of semantic guidance and code bloat. The absence of semantic awareness can lead to ineffective exchange of useful code components, while bloat results in unnecessarily complex components; both can hinder evolutionary learning progress or reduce the interpretability of the designed algorithm. To address these issues, we enhance the LLM-based evolution framework for meta-symbolic regression with two key innovations: a complementary, semantics-aware selection operator and bloat control. Additionally, we embed domain knowledge into the prompt, enabling the LLM to generate more effective and contextually relevant selection operators. Our experimental results on symbolic regression benchmarks show that LLMs can devise selection operators that outperform nine expert-designed baselines, achieving state-of-the-art performance. Moreover, the evolved operator can further improve a state-of-the-art symbolic regression algorithm, achieving the best performance among 28 symbolic regression and other machine learning algorithms across 116 regression datasets. This demonstrates that LLMs can exceed expert-level algorithm design for symbolic regression.
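To make the abstract's loop concrete, below is a minimal sketch, under stated assumptions, of what "evolving selection operators with an LLM" could look like: the LLM proposes source code for a selection operator, each candidate is scored by running a symbolic regression benchmark with it, and a size penalty approximates the paper's bloat control. This is not the authors' implementation; query_llm and evaluate_on_benchmarks are hypothetical stand-ins, and the semantics-aware pairing is only indicated in a comment.

```python
# Illustrative sketch (not the authors' code) of an LLM-driven meta-evolution
# loop for selection operators. query_llm and evaluate_on_benchmarks are
# stand-ins; swap in a real LLM call and a real SR benchmark harness.

import random
from dataclasses import dataclass


@dataclass
class Candidate:
    code: str        # LLM-generated source of a selection operator
    fitness: float   # benchmark score of an SR run that uses this operator
    size: int        # token count, used for bloat control


def query_llm(prompt: str) -> str:
    """Stand-in for an LLM call that returns selection-operator code."""
    return f"def select(population, k):  # variant {random.randint(0, 999)}\n    ..."


def evaluate_on_benchmarks(code: str) -> float:
    """Stand-in: run symbolic regression using `code` and return a mean test score."""
    return random.random()


def bloat_penalty(size: int, budget: int = 400, weight: float = 1e-3) -> float:
    # Penalize only code that exceeds a size budget, discouraging needless complexity.
    return weight * max(0, size - budget)


def score(c: Candidate) -> float:
    return c.fitness - bloat_penalty(c.size)


def meta_evolve(domain_prompt: str, generations: int = 20, pop_size: int = 10) -> Candidate:
    def make(code: str) -> Candidate:
        return Candidate(code, evaluate_on_benchmarks(code), len(code.split()))

    # Seed the population; the prompt embeds domain knowledge about selection in SR.
    population = [make(query_llm(domain_prompt)) for _ in range(pop_size)]

    for _ in range(generations):
        # Pick two strong parents. A semantics-aware variant would additionally check
        # that the two operators behave differently on probe populations before pairing.
        parents = sorted(random.sample(population, 4), key=score, reverse=True)[:2]
        child_prompt = (domain_prompt
                        + "\nCombine the strengths of these two operators, keep it short:\n"
                        + parents[0].code + "\n---\n" + parents[1].code)
        child = make(query_llm(child_prompt))

        # Replace the worst candidate if the child scores better after the bloat penalty.
        worst = min(population, key=score)
        if score(child) > score(worst):
            population[population.index(worst)] = child

    return max(population, key=score)
```

The bloat penalty is applied at selection time rather than by truncating code, so compact operators are favored without forbidding occasional larger candidates; the exact budget and weight here are arbitrary illustrative values.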

Subjects: Neural and Evolutionary Computing (cs.NE); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

Cite as: arXiv:2505.18602 [cs.NE] (or arXiv:2505.18602v3 [cs.NE] for this version)

DOI: https://doi.org/10.48550/arXiv.2505.18602 (arXiv-issued DOI via DataCite)

Submission history

From: Hengzhe Zhang
[v1] Sat, 24 May 2025 08:52:56 UTC (1,281 KB)
[v2] Fri, 8 Aug 2025 06:50:37 UTC (579 KB)
[v3] Tue, 31 Mar 2026 12:07:54 UTC (750 KB)
