🔥 google-research/timesfm
Hey there, little explorer! 🚀
Imagine you have a magic crystal ball that can guess what will happen next! 🔮
Google made a super-smart computer friend called TimesFM. It's like a very, very good guesser.
This friend is special because it looks at things that happen over time, like how many cookies you eat each day, or how tall you grow each year. 🍪🌱
Then, it tries to guess what will happen tomorrow! Will you eat more cookies? Will you grow taller?
It's like teaching a robot to see patterns in time, so it can help us guess the future for fun things! And lots of people think it's very cool! ✨
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting. — Trending on GitHub today with 366 new stars.
- Paper: A decoder-only foundation model for time-series forecasting, ICML 2024.
- All checkpoints: TimesFM Hugging Face Collection.
- Google Research blog.
- TimesFM in BigQuery: an official Google product.
This open version is not an officially supported Google product.
Latest Model Version: TimesFM 2.5
Archived Model Versions:
- 1.0 and 2.0: relevant code archived in the subdirectory v1. You can `pip install timesfm==1.3.0` to install an older version of this package that can load them.
Update - Oct. 29, 2025
Added back the covariate support through XReg for TimesFM 2.5.
Update - Sept. 15, 2025
TimesFM 2.5 is out!
Compared to TimesFM 2.0, this new 2.5 model:
- uses 200M parameters, down from 500M.
- supports up to 16k context length, up from 2048.
- supports continuous quantile forecasts up to a 1k horizon via an optional 30M-parameter quantile head.
- gets rid of the frequency indicator.
- has a couple of new forecasting flags.
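One of these flags, `fix_quantile_crossing`, addresses a common issue in quantile forecasting: predicted quantiles can "cross", e.g. the 30th percentile exceeding the 70th. TimesFM's internal implementation isn't shown in this repo's docs, but a minimal numpy sketch of the standard post-hoc remedy, sorting each time step along the quantile axis, illustrates the idea (the array values here are made up):

```python
import numpy as np

# Hypothetical quantile forecasts for one series: shape (horizon, num_quantiles).
# The quantiles "cross" at the second step (0.9 > 0.7 despite being a lower quantile).
q = np.array([
    [0.1, 0.4, 0.5],
    [0.2, 0.9, 0.7],
])

# Sorting along the quantile axis restores monotonicity at each time step
# without changing the set of predicted values.
q_fixed = np.sort(q, axis=-1)
```

After the fix, every row of `q_fixed` is non-decreasing across the quantile axis.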
Along with the model upgrade we have also upgraded the inference API. This repo will be under construction over the next few weeks to:
- add support for an upcoming Flax version of the model (faster inference).
- add back covariate support.
- populate more docstrings, docs and notebooks.
Install
- Clone the repository:

```shell
git clone https://github.com/google-research/timesfm.git
cd timesfm
```

- Create a virtual environment and install dependencies using uv:

```shell
# Create a virtual environment
uv venv

# Activate the environment
source .venv/bin/activate

# Install the package in editable mode with torch
uv pip install -e .[torch]

# Or with flax
uv pip install -e .[flax]

# Or if XReg is needed
uv pip install -e .[xreg]
```
- [Optional] Install your preferred torch / jax backend based on your OS and accelerators (CPU, GPU, TPU or Apple Silicon):
  - Install PyTorch.
  - Install JAX for Flax.
Code Example
```python
import numpy as np
import torch

import timesfm

torch.set_float32_matmul_precision("high")

model = timesfm.TimesFM_2p5_200M_torch.from_pretrained("google/timesfm-2.5-200m-pytorch")

model.compile(
    timesfm.ForecastConfig(
        max_context=1024,
        max_horizon=256,
        normalize_inputs=True,
        use_continuous_quantile_head=True,
        force_flip_invariance=True,
        infer_is_positive=True,
        fix_quantile_crossing=True,
    )
)
point_forecast, quantile_forecast = model.forecast(
    horizon=12,
    inputs=[
        np.linspace(0, 1, 100),
        np.sin(np.linspace(0, 20, 67)),
    ],  # Two dummy inputs
)
point_forecast.shape  # (2, 12)
quantile_forecast.shape  # (2, 12, 10): mean, then 10th to 90th quantiles.
```
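Per the shapes above, the last axis of `quantile_forecast` packs the mean first, then the 10th through 90th percentiles. A small sketch of slicing out the pieces you typically want, using a dummy array of the documented shape since running the model requires downloading the checkpoint (the index layout is inferred from the comment above, not from TimesFM's docs):

```python
import numpy as np

# Dummy stand-in with the documented shape (num_series=2, horizon=12, 10 outputs):
# index 0 is the mean, indices 1..9 are the 10th..90th percentiles.
quantile_forecast = np.random.rand(2, 12, 10)

mean = quantile_forecast[..., 0]     # (2, 12) mean forecast
median = quantile_forecast[..., 5]   # (2, 12) 50th percentile
# 10th and 90th percentiles together bound a central 80% interval.
lo, hi = quantile_forecast[..., 1], quantile_forecast[..., 9]
```

Each slice drops the last axis, leaving one `(num_series, horizon)` array per statistic.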