Exposing Weaknesses in Time Series LLMs

Uncovering critical security vulnerabilities in forecasting models

This research reveals that Large Language Models used for time series forecasting are vulnerable to targeted adversarial attacks, potentially compromising their reliability in critical applications.

  • Black-box attack framework demonstrates how malicious actors can manipulate LLM forecasting outputs using only query access (a minimal sketch of this threat model follows the list)
  • Minimal data perturbations can cause significant prediction errors while remaining undetected
  • Security gaps identified across various time series forecasting models
  • Defense mechanisms are urgently needed before widespread deployment in sensitive domains
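
The paper's concrete attack algorithm is not reproduced here; the sketch below only illustrates the query-only threat model it describes. A random-search baseline perturbs the input series within a small budget and keeps whichever perturbation most shifts the returned forecast, never touching gradients or model internals. The `black_box_attack` function, the `toy_forecast` stand-in for an LLM forecasting endpoint, and all parameter values are illustrative assumptions, not the authors' method.

```python
import numpy as np

def black_box_attack(forecast, series, eps=0.02, n_queries=200, seed=0):
    """Random-search black-box attack sketch: find a bounded perturbation
    of the input series that maximally shifts the model's forecast,
    using only query access to `forecast` (no gradients or internals)."""
    rng = np.random.default_rng(seed)
    budget = eps * np.abs(series).max()   # L-inf perturbation budget
    clean_pred = forecast(series)         # baseline forecast to compare against
    best_delta = np.zeros_like(series)
    best_shift = 0.0
    for _ in range(n_queries):
        # Propose a random in-budget perturbation and query the model with it.
        delta = rng.uniform(-budget, budget, size=series.shape)
        shift = np.abs(forecast(series + delta) - clean_pred).mean()
        if shift > best_shift:            # keep the most damaging perturbation
            best_shift, best_delta = shift, delta
    return series + best_delta, best_shift

# Toy stand-in for a forecasting API: predicts the mean of the last
# 5 observations for the next 3 steps (hypothetical, for illustration).
toy_forecast = lambda x: np.full(3, x[-5:].mean())

series = np.sin(np.linspace(0.0, 6.0, 50))
adv_series, shift = black_box_attack(toy_forecast, series)
print(f"mean forecast shift under a 2% input perturbation: {shift:.4f}")
```

Stronger black-box attacks typically refine proposals iteratively rather than sampling them independently, but the interface is the same: the attacker needs only forecast outputs, which is why even API-served models are exposed.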

These findings highlight a critical security concern for organizations deploying LLM-based forecasting in finance, healthcare, and infrastructure monitoring, where prediction manipulation could have severe consequences.

Adversarial Vulnerabilities in Large Language Models for Time Series Forecasting
