
OptiMer: Optimal Distribution Vector Merging Is Better than Data Mixing for Continual Pre-Training

arXiv cs.CL · Haiyue Song, Masao Utiyama · April 1, 2026

Abstract: Continual pre-training (CPT) is widely used to adapt LLMs to target languages and domains, yet the mixture ratio of training data remains a sensitive hyperparameter that is expensive to tune: it must be fixed before training begins, and a suboptimal choice can waste weeks of compute. In this work, we propose OptiMer, which decouples ratio selection from training: we train one CPT model per dataset, extract each model's distribution vector, which represents the parameter shift induced by that dataset, and search for optimal composition weights post-hoc via Bayesian optimization. Experiments on Gemma 3 27B across languages (Japanese, Chinese) and domains (Math, Code) show that OptiMer consistently outperforms data mixture and model averaging baselines with 15-35 times lower search cost. Key findings reveal that 1) the optimized weights can be interpreted as data mixture ratios, and retraining with these ratios improves data-mixture CPT, and 2) the same vector pool can be re-optimized for a given objective without any retraining, producing target-tailored models on demand. Our work establishes that data mixture ratio selection, traditionally a pre-training decision, can be reformulated as a post-hoc optimization over distribution vectors, offering a more flexible paradigm for continual pre-training.
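Read as pseudocode, the pipeline in the abstract can be sketched as follows. This is a hedged illustration of the idea, not the authors' implementation: the model is mocked as a flat NumPy vector, the dataset names and the evaluation objective are placeholders, and plain random search over the weight simplex stands in for the Bayesian optimization the paper actually uses.

```python
# Illustrative sketch of the OptiMer idea described in the abstract (assumptions:
# mocked parameters, placeholder objective, random search instead of Bayesian opt).
import numpy as np

rng = np.random.default_rng(0)
DIM = 1_000  # stand-in for the flattened parameter count of the base model

theta_base = rng.normal(size=DIM)  # pre-trained base model parameters (mocked)

# Step 1: one continual-pre-training run per dataset (mocked checkpoints here).
# A dataset's "distribution vector" is the parameter shift it induces vs. the base.
datasets = ["japanese", "chinese", "math", "code"]  # illustrative names
theta_cpt = {d: theta_base + 0.1 * rng.normal(size=DIM) for d in datasets}
dist_vectors = {d: theta_cpt[d] - theta_base for d in datasets}

def compose(weights):
    """Merge the base model with a weighted sum of distribution vectors."""
    merged = theta_base.copy()
    for w, d in zip(weights, datasets):
        merged += w * dist_vectors[d]
    return merged

def evaluate(theta):
    """Placeholder objective; in the paper this would be a benchmark score
    of the merged model on the target language/domain."""
    target = theta_base + 0.05 * dist_vectors["japanese"] + 0.03 * dist_vectors["math"]
    return -np.linalg.norm(theta - target)  # higher is better

# Step 2: post-hoc search for composition weights over the already-trained
# vectors (random search here; the paper uses Bayesian optimization).
best_w, best_score = None, -np.inf
for _ in range(200):
    w = rng.dirichlet(np.ones(len(datasets)))  # candidate mixture weights
    score = evaluate(compose(w))
    if score > best_score:
        best_w, best_score = w, score

print("best weights (interpretable as data-mixture ratios):",
      dict(zip(datasets, np.round(best_w, 3))))
```

The sketch also makes the second key finding concrete: since the distribution vectors are computed once, evaluate() can be swapped for a different target objective and the weights re-searched without any retraining.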

Comments: Preprint, 20 pages, 10 tables, 12 figures

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)

Cite as: arXiv:2603.28858 [cs.CL]

(or arXiv:2603.28858v1 [cs.CL] for this version)

https://doi.org/10.48550/arXiv.2603.28858

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Haiyue Song [v1] Mon, 30 Mar 2026 18:00:02 UTC (1,202 KB)
