
Bypassing AI Defenses with No Prior Knowledge
Using CLIP as a surrogate model for no-box adversarial attacks
This research demonstrates how CLIP models can be leveraged to mount effective adversarial attacks against vision systems without any prior knowledge of the target model or its training data.
- Introduces MF-CLIP, a novel approach that uses multiple CLIP models as surrogates to generate transferable adversarial examples (see the sketch after this list)
- Achieves higher attack success rates than traditional methods in no-box attack scenarios
- Reveals critical security vulnerabilities in widely deployed vision systems, even when attackers have minimal information
- Demonstrates that diverse CLIP model ensembles significantly enhance attack transferability
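To make the surrogate idea concrete, below is a minimal sketch of an ensemble CLIP-surrogate attack in the no-box spirit described above. It is not the authors' exact MF-CLIP procedure: the HuggingFace `transformers` CLIP checkpoints, the similarity-based PGD loss, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumes HuggingFace `transformers` CLIP checkpoints) of an
# ensemble CLIP-surrogate attack; illustrative only, not the authors' MF-CLIP method.
import torch
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Two publicly available CLIP variants standing in for a diverse surrogate ensemble.
SURROGATE_NAMES = ["openai/clip-vit-base-patch32", "openai/clip-vit-base-patch16"]
surrogates = [CLIPModel.from_pretrained(n).to(device).eval() for n in SURROGATE_NAMES]
processor = CLIPProcessor.from_pretrained(SURROGATE_NAMES[0])  # both expect 224x224 inputs

def ensemble_clip_attack(pixel_values, label_text, eps=8 / 255, alpha=2 / 255, steps=10):
    """Untargeted L-inf PGD that lowers image-text similarity across all surrogates.

    pixel_values: preprocessed CLIP input of shape (1, 3, 224, 224).
    Note: eps/alpha are applied in the normalized pixel space for brevity; a real
    attack would project the perturbation in raw image space.
    """
    text_inputs = processor(text=[label_text], return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        text_embs = []
        for m in surrogates:
            t = m.get_text_features(**text_inputs)
            text_embs.append(t / t.norm(dim=-1, keepdim=True))

    x_orig = pixel_values.clone().to(device)
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = 0.0
        for m, t in zip(surrogates, text_embs):
            i = m.get_image_features(pixel_values=x_adv)
            i = i / i.norm(dim=-1, keepdim=True)
            loss = loss + (i * t).sum()  # cosine similarity to the true-label text embedding
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                 # step to reduce similarity
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project back into the eps-ball
    return x_adv.detach()
```

In practice one would preprocess an image with `processor(images=..., return_tensors="pt").pixel_values`, run the attack, and then query the unseen target model with the perturbed image to measure transfer; averaging the loss over several surrogates is what encourages perturbations that generalize beyond any single model.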
Why it matters: This work highlights serious security risks for AI deployments in high-stakes environments, showing that sophisticated attacks can succeed even under severe knowledge constraints and underscoring the need for more robust defense mechanisms.
MF-CLIP: Leveraging CLIP as Surrogate Models for No-box Adversarial Attacks