allenai/OLMo-core
PyTorch building blocks for the OLMo ecosystem.
Building blocks for OLMo modeling and training
Installation
First install PyTorch according to the instructions specific to your operating system and hardware.
For development, we recommend installing from source:
git clone https://github.com/allenai/OLMo-core.git
cd OLMo-core
pip install -e .[all]

Or you can install from PyPI with:
pip install ai2-olmo-core
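Most of the optional features below assume a CUDA-enabled PyTorch build, so it can be worth confirming the PyTorch install before going further. A minimal sanity check (plain PyTorch, nothing OLMo-specific):

import torch

# Print the installed PyTorch version and whether a CUDA device is visible.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))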
There are also a number of optional dependencies that must be installed to use certain functionality, including the following (a quick availability check is sketched after the list):
- flash-attn, ring-flash-attn, and TransformerEngine for the corresponding attention backends.
- Liger-Kernel for a low-memory "fused-linear" loss implementation.
- torchao for float8 training.
- grouped_gemm for dropless mixture-of-experts (MoE) models. You may need to compile from source until PR #21 is released (post v0.1.6).
- QuACK for some CuTe-based kernels.
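If you are unsure which of these are already installed, a small standard-library script can report what Python can import. This is only a sketch; the module names used here (flash_attn, ring_flash_attn, transformer_engine, liger_kernel, torchao, grouped_gemm) are assumptions that may differ from the PyPI package names, so adjust them as needed:

import importlib.util

# Assumed import names for the optional dependencies listed above; verify
# against each package's own documentation if a result looks wrong.
optional_modules = [
    "flash_attn",
    "ring_flash_attn",
    "transformer_engine",
    "liger_kernel",
    "torchao",
    "grouped_gemm",
]

for name in optional_modules:
    status = "available" if importlib.util.find_spec(name) is not None else "missing"
    print(f"{name}: {status}")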
The published Docker images contain all core and optional dependencies, and are regularly tested on our in-house H100 clusters. But there are several things to keep in mind if you intend to use these images:
- They do not come with the OLMo-core package installed, only its dependencies, to accommodate regular code changes.
- They may not work on your own cluster if you have different hardware or driver/CUDA versions.
If the published images do not work for your use-case for any of the above reasons, you could adapt our Dockerfile to build your own images.
Official training scripts
Official training scripts for released models can be found in src/scripts/official/.
These scripts are meant to be launched with torchrun, or with OLMo-core's Beaker launch CLI if you have access to Beaker.
For example:
torchrun --nproc-per-node=8 src/scripts/official/OLMo2/OLMo-2-0325-32B-train.py \
  --save-folder=/path/to/save/checkpoints

You can override most configuration options from the command line. For example, to override the learning rate you could launch the script like this:
torchrun --nproc-per-node=8 src/scripts/official/OLMo2/OLMo-2-0325-32B-train.py \
  --save-folder=/path/to/save/checkpoints \
  --train_module.optim.lr=6e-3

To continue annealing from a checkpoint, we use a separate script, which can be launched like this:
torchrun --nproc-per-node=8 src/scripts/official/OLMo2/OLMo-2-0325-32B-anneal.py \
  --save-folder=/path/to/save/checkpoints \
  --checkpoint=https://olmo-checkpoints.org/ai2-llm/peteish32/step721901

Available Training Scripts
Model Family | Directory | Description
OLMo-2 | src/scripts/official/OLMo2/ | Training scripts and model card for OLMo-2 32B models
OLMo-3 | src/scripts/official/OLMo3/ | Training scripts and model cards for OLMo-3 7B and 32B models
Inference
With Hugging Face Transformers
You can use our Hugging Face transformers integration to run inference on the OLMo checkpoints:
pip install 'transformers>=4.57.0'
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-1125-32B")
tokenizer = AutoTokenizer.from_pretrained("allenai/Olmo-3-1125-32B")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
inputs = {k: v.to('cuda') for k, v in inputs.items()}  # optional: move inputs to CUDA
olmo = olmo.to('cuda')                                 # optional: move the model to CUDA
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
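A 32B checkpoint may not fit on a single GPU in full precision. One option is to load the weights in bfloat16 and let Accelerate spread the layers across the available devices; a minimal sketch (requires the accelerate package, and the dtype choice here is an assumption you may want to adjust for your hardware):

import torch
from transformers import AutoModelForCausalLM

# Sketch: load in bfloat16 and let Accelerate place layers across available
# GPUs (spilling to CPU if needed). Adjust dtype and device mapping as needed.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/Olmo-3-1125-32B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)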
Alternatively, with the Hugging Face pipeline abstraction:
from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/Olmo-3-1125-32B")
print(olmo_pipe("Language modeling is"))

With vLLM
vLLM provides high-throughput inference for OLMo models. You can use it for offline batched inference:
pip install 'vllm>=0.11.0'
from vllm import LLM, SamplingParams

llm = LLM(model="allenai/Olmo-3-1125-32B")
sampling_params = SamplingParams(temperature=1.0, top_p=0.7)
prompts = ["Language modeling is"]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

For more details, see the vLLM documentation.
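If the model does not fit on one GPU, vLLM can shard it across several devices with tensor parallelism. A minimal sketch, where the tensor_parallel_size and max_tokens values are illustrative assumptions rather than recommendations:

from vllm import LLM, SamplingParams

# Sketch: shard the model across 4 GPUs; set tensor_parallel_size to the
# number of GPUs actually available on your machine.
llm = LLM(model="allenai/Olmo-3-1125-32B", tensor_parallel_size=4)
sampling_params = SamplingParams(temperature=1.0, top_p=0.7, max_tokens=256)
outputs = llm.generate(["Language modeling is"], sampling_params)
print(outputs[0].outputs[0].text)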
With OLMo-core (beta)
Autoregressive generation is supported directly in OLMo-core. Building on this capability, we provide a chat-loop demo for interacting with models in an interactive session:
python -m olmo_core.generate.chat https://olmo-checkpoints.org/ai2-llm/Olmo-3-1025-7B/stage3/step11921/ --max-new-tokens 512
Evaluation
Additional tools for evaluating OLMo models are available at the OLMo Eval and olmes repositories.
Development
The Python library source code is located in src/olmo_core. The corresponding tests are located in src/test. The library docs are located in docs. You can build the docs locally with make docs.
Code checks:
- We use pytest to run tests. You can run all tests with pytest -v src/test. You can also point pytest at a specific test file to run it individually.
- We use isort and black for code formatting. Ideally you should integrate these into your editor, but you can also run them manually or configure them with a pre-commit hook. To validate that all files are formatted correctly, run make style-check.
- We use ruff as our primary linter. You can run it with make lint-check.
- We use mypy as our type checker. You can run it with make type-check.
Citing
@misc{olmo20242olmo2furious,
  title={{2 OLMo 2 Furious}},
  author={{Team OLMo} and Pete Walsh and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Shane Arora and Akshita Bhagia and Yuling Gu and Shengyi Huang and Matt Jordan and Nathan Lambert and Dustin Schwenk and Oyvind Tafjord and Taira Anderson and David Atkinson and Faeze Brahman and Christopher Clark and Pradeep Dasigi and Nouha Dziri and Michal Guerquin and Hamish Ivison and Pang Wei Koh and Jiacheng Liu and Saumya Malik and William Merrill and Lester James V. Miranda and Jacob Morrison and Tyler Murray and Crystal Nam and Valentina Pyatkin and Aman Rangapur and Michael Schmitz and Sam Skjonsberg and David Wadden and Christopher Wilhelm and Michael Wilson and Luke Zettlemoyer and Ali Farhadi and Noah A. Smith and Hannaneh Hajishirzi},
  year={2024},
  eprint={2501.00656},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2501.00656},
}
