Hugging Face Releases SmolLM3: A 3B Parameter Model That Rivals 7B Models
SmolLM3 demonstrates that careful data curation and training techniques can achieve 7B-class performance with just 3B parameters, making powerful AI accessible on consumer hardware.
Hugging Face has released SmolLM3, a 3-billion-parameter language model that achieves performance comparable to models more than twice its size. The release marks a significant advance in model efficiency, with implications for edge deployment and consumer-hardware applications.
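For readers who want to try the checkpoint, here is a minimal loading sketch using the `transformers` library. The repo id `HuggingFaceTB/SmolLM3-3B` is an assumption based on Hugging Face's usual naming convention; check the model card for the exact identifier.

```python
# Minimal sketch: load the model and run a short generation.
# Assumes the repo id "HuggingFaceTB/SmolLM3-3B" (unverified) and that
# `transformers` and `accelerate` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place weights on the available GPU(s)
)

inputs = tokenizer("Explain tokenization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```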
The model was trained on a carefully curated dataset of 11 trillion tokens, with particular emphasis on code, mathematics, and scientific reasoning. A combination of improved tokenization, architectural refinements, and a novel training curriculum let the team extract 7B-class performance from a 3B architecture.
SmolLM3 runs comfortably on consumer GPUs with 8 GB of VRAM and can even be quantized to run on high-end smartphones. The model scores 68.4% on MMLU and 72.1% on HumanEval, results previously achievable only with much larger models.
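One common way to fit the 8 GB budget mentioned above is 4-bit weight quantization with bitsandbytes, which shrinks a 3B model's weights to roughly 2 GB. The sketch below is illustrative rather than the team's published recipe, and it again assumes the `HuggingFaceTB/SmolLM3-3B` repo id.

```python
# Sketch: load the model with 4-bit quantized weights via bitsandbytes so it
# fits comfortably on an 8 GB consumer GPU. Settings are illustrative, not an
# official recipe; the repo id is assumed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 weight format
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for speed
)

model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM3-3B",             # assumed repo id
    quantization_config=quant_config,
    device_map="auto",
)
```

Quantization trades a small amount of accuracy for a large drop in memory use, which is what makes smartphone-class deployment plausible for a model of this size.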
The full model weights, training code, and dataset composition details have been released under an Apache 2.0 license, continuing Hugging Face's commitment to open-source AI development.