Leveraging the power of images for time-series forecasting
What is the biggest challenge when building a pretrained time-series model?
Answer: Finding high-quality, diverse time-series data. We’ve discussed this in previous articles.
There are two main approaches to building a foundation forecasting model:
- “Bootstrap” an LLM: Repurpose a pretrained LLM like GPT-4 or Llama by applying fine-tuning or tokenization strategies tailored for time-series tasks.
- “From scratch”: Build a large-scale time-series dataset and pretrain a model from scratch, in the hope that it generalizes to unseen data.
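The first route hinges on representing real-valued series as text an LLM can consume. Here is a minimal, illustrative sketch of one such serialization scheme; the function names and the digit-level encoding are assumptions in the spirit of published approaches, not any specific model’s tokenizer:

```python
def serialize_series(values, decimals=2):
    """Render a numeric series as a plain-text string an LLM can ingest.

    Illustrative sketch only: each value is formatted to a fixed precision,
    and the digits are spaced out so common tokenizers tend to see one
    token per digit rather than arbitrary multi-digit chunks.
    """
    return " , ".join(" ".join(f"{v:.{decimals}f}") for v in values)


def deserialize_series(text, decimals=2):
    """Invert serialize_series: recover floats from the spaced digit string."""
    return [round(float(chunk.replace(" ", "")), decimals)
            for chunk in text.split(" , ")]
```

Forecasting then amounts to prompting the model with the serialized history and parsing its completion back into numbers, e.g. `serialize_series([1.5, 2.25])` yields `"1 . 5 0 , 2 . 2 5"`.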
While the first approach can work, since Transformers are general-purpose computation engines, it doesn’t yield the best results. The second approach has proven more successful, as models like MOIRAI, TimesFM, and TTM show.
However, these models appear to follow scaling laws: their performance depends on finding ever more extensive time-series data, which brings us back to the original challenge.
But what if we could leverage a different modality, like images? This might seem counterintuitive, but some researchers have explored this hypothesis and produced groundbreaking results. In…