A Foundation Model for Instruction-Conditioned In-Context Time Series Tasks

Anish Saha, Konstantin Shmakov

arXiv:2603.22586v1 Announce Type: new Abstract: In-context learning (ICL) allows a model to adapt at inference time by conditioning on examples rather than updating parameters. Existing time-series foundation models use implicit positional context, retrieval, or task-specific objectives, but rarely explicit instruction-conditioned demonstrations. We present a foundation model for instruction-conditioned in-context time-series tasks based on a quantile-regression T5 encoder-decoder. Historical examples and queries are encoded with a structured tokenization scheme that marks target series, covariates, context, and task-specific future information. A hierarchical Transformer with per-example encoding, example-level fusion, and cross-example attention conditions decoding on demonstration pairs, enabling forecasting and related tasks without task-specific fine-tuning. We train on large-scale real and synthetic time series using supervised forecasting plus self-supervised tasks, including imputation, reconstruction, classification, anomaly detection, and source demixing. This multi-task training learns a distribution over task mappings and improves adaptation to local structure at inference time. Across diverse datasets, frequencies, and horizons, our method outperforms strong foundation baselines on point and probabilistic forecasting benchmarks, including fev-bench and GIFT-Eval, while remaining competitive on classification and anomaly detection.

Executive Summary

This article presents a foundation model for instruction-conditioned in-context time-series tasks built on a quantile-regression T5 encoder-decoder. A structured tokenization scheme encodes historical examples and queries, marking target series, covariates, context, and task-specific future information, and a hierarchical Transformer conditions decoding on demonstration pairs. The model is trained on large-scale real and synthetic time series with a multi-task objective that combines supervised forecasting with self-supervised tasks such as imputation, reconstruction, classification, anomaly detection, and source demixing. On point and probabilistic forecasting benchmarks, including fev-bench and GIFT-Eval, it outperforms strong foundation baselines while remaining competitive on classification and anomaly detection. Its ability to adapt to local structure at inference time and to generalize across diverse datasets, frequencies, and horizons makes it a promising approach to time-series forecasting and related tasks.
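To make the structured tokenization concrete, here is a minimal sketch of how demonstration pairs and a query might be flattened into one marked token sequence. The marker token names (`<target>`, `<cov>`, `<future>`, `<sep>`) and the layout are illustrative assumptions, not the paper's actual vocabulary.

```python
# Hypothetical sketch of a structured tokenization scheme: marker tokens
# delimit the target series, covariates, and known future information, so
# demonstration examples and the query share one flat token sequence.
TARGET, COVARIATE, FUTURE, SEP = "<target>", "<cov>", "<future>", "<sep>"

def encode_example(target, covariates=None, future_info=None):
    """Flatten one (series, covariates, future-info) example into tokens."""
    tokens = [TARGET, *map(str, target)]
    for cov in covariates or []:
        tokens += [COVARIATE, *map(str, cov)]
    if future_info:
        tokens += [FUTURE, *map(str, future_info)]
    return tokens

def build_context(demonstrations, query):
    """Concatenate SEP-delimited demonstrations followed by the query."""
    tokens = []
    for demo in demonstrations:
        tokens += encode_example(**demo) + [SEP]
    return tokens + encode_example(**query)

ctx = build_context(
    demonstrations=[{"target": [1.0, 2.0], "covariates": [[0, 1]]}],
    query={"target": [3.0, 4.0], "future_info": [5]},
)
```

In the paper's hierarchical design, each demonstration would additionally be encoded on its own before example-level fusion and cross-example attention; the flat sequence above only illustrates the marking scheme.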

Key Points

  • Introduction of a novel foundation model for instruction-conditioned in-context time-series tasks
  • Use of a quantile-regression T5 encoder-decoder architecture for encoding and decoding
  • Employment of a structured tokenization scheme for encoding historical examples and queries
  • Multi-task training approach incorporating supervised forecasting and self-supervised tasks
  • Superior performance over strong foundation baselines on point and probabilistic forecasting benchmarks

Merits

Strengths of the Model

The model adapts to local structure at inference time by conditioning on demonstration pairs, and it generalizes across diverse datasets, frequencies, and horizons without task-specific fine-tuning. Its multi-task training regime teaches it a distribution over task mappings, which improves performance across forecasting, imputation, classification, and anomaly-detection tasks.
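A rough sketch of how such a multi-task training mix could be sampled is shown below. The task names come from the abstract; the sampling weights and the mask-based imputation target are illustrative assumptions, not the paper's actual recipe.

```python
import random

# Illustrative multi-task sampling: supervised forecasting plus the
# self-supervised tasks listed in the abstract. Weights are assumed.
TASKS = ["forecast", "imputation", "reconstruction",
         "classification", "anomaly_detection", "demixing"]

def sample_task(rng=random):
    """Draw one task per batch, weighted toward supervised forecasting."""
    weights = [0.5] + [0.1] * (len(TASKS) - 1)
    return rng.choices(TASKS, weights=weights, k=1)[0]

def make_imputation_target(series, mask_frac=0.2, rng=random):
    """Mask a random fraction of points; the model must reconstruct them."""
    masked, hidden = list(series), {}
    for i in range(len(series)):
        if rng.random() < mask_frac:
            hidden[i] = masked[i]
            masked[i] = None  # placeholder standing in for a mask token
    return masked, hidden
```

Training on a mixture like this is what lets the model learn a distribution over task mappings rather than a single input-output rule.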

Demerits

Limitations of the Model

The model's reliance on a quantile-regression T5 encoder-decoder architecture may limit its applicability to certain types of time-series data. Additionally, the model's performance may degrade in scenarios where the demonstration pairs are not representative of the underlying time-series data.

Expert Commentary

The proposed foundation model represents a meaningful advance in time-series forecasting: explicit instruction-conditioned demonstrations are a capability that prior time-series foundation models have largely lacked. Its inference-time adaptation and broad generalization across datasets, frequencies, and horizons suggest applicability in a range of practical and policy-relevant domains. That said, the architectural commitment to a quantile-regression T5 encoder-decoder may constrain some use cases, and further research is needed to probe the model's limits, particularly when demonstration pairs diverge from the query distribution.

Recommendations

  • Further research is needed to fully explore the model's potential and to address its limitations.
  • The impact of the quantile-regression T5 encoder-decoder backbone should be investigated, and alternative architectures explored, to broaden applicability across time-series data types.

Sources

Original: arXiv - cs.LG