
Protecting Medical AI from Theft
Novel attacks expose vulnerabilities in medical imaging models
This research demonstrates an adversarial domain alignment technique that allows attackers to steal medical multimodal language models, particularly those used in radiology.
- Attackers can replicate a target model's functionality by querying it with only public medical images (a minimal sketch of this extraction loop follows the list)
- The technique successfully transfers knowledge across different medical domains
- The proposed domain alignment method recovers up to 60% of the target model's performance
- Current watermarking defenses prove inadequate against these attacks
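The extraction step in this family of attacks is essentially black-box knowledge distillation: the attacker queries the victim model with images it does not own the training data for, then trains a surrogate to mimic the returned outputs. The sketch below is a minimal, hypothetical illustration in PyTorch, not the paper's implementation; the stand-in linear models, the 14-class output head, and the KL distillation loss are all assumptions, and the paper's adversarial domain alignment step is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in models: the paper targets multimodal radiology models, but the
# basic extraction loop is the same for any black-box predictor.
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 14))     # victim (query access only)
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 14))  # attacker's copy

optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-4)

# Public, non-proprietary images acting as the attacker's query set
# (random tensors here as placeholders).
public_images = torch.rand(256, 3, 224, 224)

for step in range(100):
    batch = public_images[torch.randint(0, 256, (16,))]
    with torch.no_grad():                       # black-box: only the victim's outputs are observed
        teacher_probs = F.softmax(target(batch), dim=-1)
    student_log_probs = F.log_softmax(surrogate(batch), dim=-1)
    # Train the surrogate to match the victim's output distribution.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the radiology setting the queried outputs would typically be generated reports rather than class probabilities, but the structure is the same: the attacker never needs the victim's weights or its proprietary training data.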
This research highlights critical security vulnerabilities in healthcare AI systems and underscores the urgent need for robust protection mechanisms for these valuable medical models.
Medical Multimodal Model Stealing Attacks via Adversarial Domain Alignment