Google Research has released TimesFM-2.5, a 200M-parameter, decoder-only time-series foundation model with a 16K context length and native probabilistic forecasting support. The new checkpoint is live on Hugging Face. On GIFT-Eval, TimesFM-2.5 now tops the leaderboard across accuracy metrics (MASE, CRPS) among zero-shot foundation models.


What Is Time-Series Forecasting?
Time-series forecasting is the practice of analyzing sequential data points collected over time to identify patterns and predict future values. It underpins critical applications across industries, including forecasting product demand in retail, tracking weather and precipitation trends, and optimizing large-scale systems such as supply chains and energy grids. By capturing temporal dependencies and seasonal variations, time-series forecasting enables data-driven decision-making in dynamic environments.
What Changed in TimesFM-2.5 vs. v2.0?
- Parameters: 200M (down from 500M in 2.0).
- Max context: 16,384 points (up from 2,048).
- Quantiles: Optional 30M-parameter quantile head for continuous quantile forecasts up to a 1K horizon.
- Inputs: No "frequency" indicator required; new inference flags (flip invariance, positivity inference, quantile-crossing fix), shown in the sketch after this list.
- Roadmap: An upcoming Flax implementation for faster inference; covariates support slated to return; docs being expanded.
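These changes surface directly in the open-source `timesfm` package. Below is a minimal sketch of loading the 2.5 checkpoint and turning on the new inference flags, adapted from the repo's README at the time of writing; treat the class name `TimesFM_2p5_200M_torch`, the `ForecastConfig` fields, and the exact flag spellings as assumptions that may shift between releases.

```python
import numpy as np
import timesfm  # assumption: a 2.5-capable release of the `timesfm` package

# Load the 200M-parameter 2.5 checkpoint from Hugging Face.
model = timesfm.TimesFM_2p5_200M_torch.from_pretrained(
    "google/timesfm-2.5-200m-pytorch"
)

# Compile once with the inference-time options called out above.
model.compile(
    timesfm.ForecastConfig(
        max_context=1024,                   # can go up to 16,384 in 2.5
        max_horizon=256,
        normalize_inputs=True,
        use_continuous_quantile_head=True,  # optional 30M-param quantile head
        force_flip_invariance=True,         # flip-invariance flag
        infer_is_positive=True,             # positivity inference
        fix_quantile_crossing=True,         # quantile-crossing fix
    )
)

# Unlike v2.0, no per-series "frequency" indicator is passed at inference.
point_forecast, quantile_forecast = model.forecast(
    horizon=24,
    inputs=[np.sin(np.linspace(0, 20, 256))],  # list of 1-D histories
)
```

Note that the flags are set once at compile time rather than per forecast call, which keeps the per-request API down to a list of histories and a horizon.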
Why does a longer context matter?
16K historical points allow a single forward pass to capture multi-seasonal structure, regime breaks, and low-frequency components without tiling or hierarchical stitching. In practice, that reduces pre-processing heuristics and improves stability for domains where context >> horizon (e.g., energy load, retail demand); a sketch of a full-context forecast follows below. The longer context is a core design change explicitly called out for 2.5.
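To make that concrete, here is a hedged sketch of a one-pass forecast over a full 16,384-point synthetic history with daily and weekly seasonality, reusing the assumed 2.5 interface from the previous sketch:

```python
import numpy as np
import timesfm  # same assumed interface as the previous sketch

model = timesfm.TimesFM_2p5_200M_torch.from_pretrained(
    "google/timesfm-2.5-200m-pytorch"
)
# Assumption: max_context accepts the advertised 16,384-point cap.
model.compile(timesfm.ForecastConfig(max_context=16_384, max_horizon=256))

# Synthetic hourly series with daily (24) and weekly (168) cycles,
# spanning the full 16,384-point context window (~22 months of hours).
t = np.arange(16_384)
history = (
    10.0
    + 2.0 * np.sin(2 * np.pi * t / 24)    # daily cycle
    + 1.0 * np.sin(2 * np.pi * t / 168)   # weekly cycle
    + 0.3 * np.random.default_rng(0).normal(size=t.size)
)

# One forward pass over the entire history: no tiling, no stitching.
point, quantiles = model.forecast(horizon=168, inputs=[history])
print(point.shape)  # expected: (1, 168)
```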
What's the research context?
TimesFM's core thesis, a single decoder-only foundation model for forecasting, was introduced in the ICML 2024 paper and on Google's research blog. GIFT-Eval (Salesforce) emerged to standardize evaluation across domains, frequencies, horizon lengths, and univariate/multivariate regimes, with a public leaderboard hosted on Hugging Face.
Key Takeaways
- Smaller, Faster Model: TimesFM-2.5 runs with 200M parameters (half of 2.0's size) while improving accuracy.
- Longer Context: Supports 16K input length, enabling forecasts with deeper historical coverage.
- Benchmark Leader: Now ranks #1 among zero-shot foundation models on GIFT-Eval for both MASE (point accuracy) and CRPS (probabilistic accuracy); both metrics are sketched after this list.
- Production-Ready: Efficient design and quantile forecasting support make it suitable for real-world deployments across industries.
- Broad Availability: The model is live on Hugging Face.
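For readers new to the two leaderboard metrics, a minimal NumPy sketch of their standard definitions follows: MASE scales forecast error by a seasonal-naive baseline on the training history, and CRPS is commonly approximated as twice the mean pinball loss over a quantile grid when a model emits quantile forecasts. GIFT-Eval's own evaluation code may differ in normalization and aggregation details.

```python
import numpy as np

def mase(y_true, y_pred, y_train, season=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the MAE of a
    seasonal-naive forecast on the in-sample training history."""
    mae = np.mean(np.abs(y_true - y_pred))
    naive_mae = np.mean(np.abs(y_train[season:] - y_train[:-season]))
    return mae / naive_mae

def crps_from_quantiles(y_true, q_pred, q_levels):
    """Approximate CRPS as 2x the average pinball (quantile) loss over a
    grid of quantile levels -- the usual discretization when a model
    produces quantile forecasts instead of a full distribution.

    q_pred has shape (len(q_levels), horizon)."""
    y = np.asarray(y_true)[None, :]
    q = np.asarray(q_levels)[:, None]
    diff = y - np.asarray(q_pred)
    pinball = np.maximum(q * diff, (q - 1.0) * diff)
    return 2.0 * pinball.mean()

# A perfect point forecast scores MASE = 0 against any non-flat history.
print(mase(np.array([20.0, 21.0]), np.array([20.0, 21.0]), np.arange(20.0)))
```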
Summary
TimesFM-2.5 shows that foundation models for forecasting are moving past proof-of-concept into practical, production-ready tools. By cutting parameters in half while extending context length and leading GIFT-Eval across both point and probabilistic accuracy, it marks a step-change in efficiency and capability. With Hugging Face access already live and BigQuery/Model Garden integration on the way, the model is positioned to accelerate adoption of zero-shot time-series forecasting in real-world pipelines.
Check out the model card (Hugging Face), repo, benchmark, and paper.

Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.