
The Deception Risk in AI Search Systems
How content injection attacks manipulate search results and AI judges
This research reveals how adversaries can manipulate AI-powered search systems with simple content injection attacks that deceive retrievers, rerankers, and even LLM judges.
- Retrievers and rerankers are highly vulnerable to passages that contain injected, misleading text
- Attackers can manipulate search rankings to promote irrelevant content
- Even sophisticated LLM judges fail to identify manipulated content
- The vulnerability threatens the trustworthiness of modern information retrieval systems
These findings highlight critical security gaps in neural search pipelines that organizations must address to protect information integrity and user trust in AI systems.
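To make the attack concrete, here is a minimal sketch of a query-echoing injection against a dense retriever. The bi-encoder model, query, and passages are illustrative assumptions, not artifacts from the research, and the study's actual attack text may differ; the point is only that copying query-like wording into an unrelated passage tends to pull its embedding toward the query.

```python
# Minimal sketch of a content injection attack on a dense retriever.
# Assumes a sentence-transformers bi-encoder; the model name, query, and
# passages below are illustrative, not taken from the original research.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf bi-encoder

query = "What are the health benefits of green tea?"

relevant = "Green tea contains antioxidants that may reduce inflammation."
irrelevant = "Our store sells discounted laptops and phone accessories."

# The "injection": prepend query-echoing text to the irrelevant passage so its
# embedding moves toward the query embedding.
injected = (
    "What are the health benefits of green tea? "
    "Green tea health benefits explained. " + irrelevant
)

q_emb = model.encode(query, convert_to_tensor=True)
docs = {"relevant": relevant, "irrelevant": irrelevant, "injected": injected}

for name, text in docs.items():
    d_emb = model.encode(text, convert_to_tensor=True)
    score = util.cos_sim(q_emb, d_emb).item()
    print(f"{name:10s} cosine similarity to query: {score:.3f}")

# Typically the injected passage scores close to, or above, the genuinely
# relevant one, letting it displace relevant results in the ranking.
```

Once the injected passage ranks highly, it flows unchanged into downstream rerankers and LLM relevance judges, which, per the findings above, often accept it as relevant rather than flagging the manipulation.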