Defending Recommender Systems from Attacks

A robust retrieval-augmented framework to combat vulnerabilities in LLM-empowered recommenders

This research introduces RETURN, a framework that improves the security of LLM-powered recommendation systems by purifying poisoned user profiles and defending against profile poisoning attacks.

  • Addresses critical security vulnerabilities in LLM-based recommenders that can be exploited through minor perturbations of user profiles
  • Leverages item-item co-occurrence patterns to detect and filter out poisoned user profiles (see the sketch after this list)
  • Demonstrates improved robustness against adversarial attacks while maintaining recommendation quality
  • Provides a practical solution for businesses to implement more secure AI-powered recommendation systems
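To make the co-occurrence idea concrete, here is a minimal sketch, not the paper's actual method: it assumes a corpus of clean user histories, builds item-item co-occurrence counts, and drops profile items whose co-occurrence support with the rest of the profile falls below a threshold. The names build_cooccurrence, purify_profile, and min_support are hypothetical and introduced only for illustration.

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(histories):
    """Count how often each unordered pair of items appears together
    in a corpus of (presumably clean) user histories."""
    counts = defaultdict(int)
    for items in histories:
        for a, b in combinations(set(items), 2):
            counts[frozenset((a, b))] += 1
    return counts

def purify_profile(profile, counts, min_support=1):
    """Drop items whose best co-occurrence count with the rest of the
    profile falls below min_support; such items are likely injected."""
    kept = []
    for item in profile:
        support = max(
            (counts.get(frozenset((item, other)), 0)
             for other in profile if other != item),
            default=0,
        )
        if support >= min_support:
            kept.append(item)
    return kept

# Toy example: "spam_item" never co-occurs with the user's real history.
histories = [
    ["inception", "interstellar", "tenet"],
    ["inception", "interstellar", "dunkirk"],
    ["interstellar", "tenet", "dunkirk"],
]
counts = build_cooccurrence(histories)
profile = ["inception", "interstellar", "tenet", "spam_item"]
print(purify_profile(profile, counts))  # ['inception', 'interstellar', 'tenet']
```

This toy threshold rule only conveys the intuition that injected items lack collaborative support; RETURN's retrieval-augmented purification is more involved.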

This work is particularly valuable for e-commerce platforms and content providers that rely on LLM-based recommendations, offering a practical defense against increasingly sophisticated attacks that could manipulate recommendation results or degrade the user experience.

Retrieval-Augmented Purifier for Robust LLM-Empowered Recommendation
