
Stealing AI Art Prompts: A New Security Threat
How attackers can reverse-engineer valuable text-to-image prompts from only a handful of sample images
This research reveals a significant vulnerability in text-to-image models that threatens the commercial prompt marketplace ecosystem.
- Introduces EvoStealer, a differential evolution approach that can extract valuable prompt templates from just a few example images (a minimal sketch of the evolutionary loop follows this list)
- Creates Prism, a benchmark of 50 templates and 450 images to test prompt-stealing attacks
- Demonstrates that even complex, commercially valuable prompts can be reverse-engineered
- Highlights urgent security implications for AI artists, prompt engineers, and marketplace platforms
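To make the attack idea concrete, here is a minimal sketch of a differential-evolution loop for recovering a prompt-like representation. It is an illustration under stated assumptions, not the paper's EvoStealer pipeline: candidates are fixed-length numeric vectors standing in for prompt representations, and `fitness` is a hypothetical stand-in for "how closely images generated from the candidate match the few stolen samples" (e.g., a CLIP-style similarity). The paper's method evolves textual templates; only the mutation / crossover / selection skeleton is shown here.

```python
# Toy differential-evolution skeleton for prompt recovery (illustrative only).
# Assumed, not from the paper: vector-valued candidates and a synthetic
# fitness function; a real attack would score candidates by comparing
# generated images against the handful of target samples.
import numpy as np

rng = np.random.default_rng(0)

DIM = 32          # dimensionality of the candidate representation (assumed)
POP_SIZE = 20     # population size
F = 0.5           # differential weight
CR = 0.9          # crossover rate
GENERATIONS = 200

# Hypothetical hidden target; in practice the attacker never sees this
# directly and only observes similarity scores.
_target = rng.normal(size=DIM)

def fitness(candidate: np.ndarray) -> float:
    """Higher is better: proxy for image-set similarity to the target style."""
    return -float(np.linalg.norm(candidate - _target))

# Initialise a random population of candidate representations.
population = rng.normal(size=(POP_SIZE, DIM))
scores = np.array([fitness(ind) for ind in population])

for gen in range(GENERATIONS):
    for i in range(POP_SIZE):
        # Pick three distinct individuals (excluding i) for differential mutation.
        a, b, c = rng.choice([j for j in range(POP_SIZE) if j != i],
                             size=3, replace=False)
        mutant = population[a] + F * (population[b] - population[c])

        # Binomial crossover between the current individual and the mutant.
        cross = rng.random(DIM) < CR
        cross[rng.integers(DIM)] = True  # guarantee at least one mutant gene
        trial = np.where(cross, mutant, population[i])

        # Greedy selection: keep the trial if it scores at least as well.
        trial_score = fitness(trial)
        if trial_score >= scores[i]:
            population[i], scores[i] = trial, trial_score

best = population[int(np.argmax(scores))]
print("best score:", scores.max())
```

The design point the sketch captures is that differential evolution needs only black-box fitness scores, which is why a few target images and query access to a text-to-image model can be enough to converge on a close approximation of the original template.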
This work matters for security professionals because it exposes a novel IP theft vector in generative AI, one that could undermine emerging creative economies and underscores the need for technical countermeasures.
Vulnerability of Text-to-Image Models to Prompt Template Stealing: A Differential Evolution Approach