Securing DNA Language Models Against Attacks

First Comprehensive Assessment of Adversarial Robustness in DNA Classification

This research evaluates the vulnerability of DNA language models to adversarial attacks on DNA classification tasks, with implications for biomedical security.
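
As a concrete illustration of such an evaluation, the sketch below computes the standard robustness metric: the attack success rate, i.e. the fraction of correctly classified sequences whose prediction flips under adversarial perturbation. The classify and attack callables are hypothetical placeholders, not APIs from this work.

    # Minimal Python sketch; classify(seq) -> label and attack(seq) -> seq
    # are hypothetical placeholders for model inference and a perturbation.
    def attack_success_rate(classify, attack, sequences, labels):
        """Fraction of correctly classified sequences whose predicted
        label flips after adversarial perturbation."""
        attacked, flipped = 0, 0
        for seq, label in zip(sequences, labels):
            if classify(seq) != label:
                continue  # only attack inputs the model already gets right
            attacked += 1
            if classify(attack(seq)) != label:
                flipped += 1  # the perturbation changed the prediction
        return flipped / attacked if attacked else 0.0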

Key Findings:

  • DNA language models (GROVER, DNABERT2, Nucleotide Transformer) show significant vulnerability to carefully crafted adversarial inputs
  • Multiple attack strategies (nucleotide substitutions, insertions/deletions, codon-level manipulations) were tested to assess model robustness; a minimal substitution sketch follows this list
  • The research identifies critical security gaps in current DNA sequence analysis platforms
  • Findings highlight the need for enhanced safeguards in biomedical AI systems
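
The sketch below illustrates the simplest of these strategies, a single-nucleotide substitution attack: mutate one position at a time and return the first edit that flips the model's prediction. The classify callable is a hypothetical placeholder for model inference; this is an illustrative sketch under those assumptions, not the paper's actual attack implementation.

    NUCLEOTIDES = "ACGT"

    def single_substitution_attack(classify, seq, label):
        """Exhaustively try every single-nucleotide substitution and
        return the first perturbed sequence that changes the predicted
        label, or None if no single edit succeeds."""
        chars = list(seq)
        for i, original in enumerate(chars):
            for nt in NUCLEOTIDES:
                if nt == original:
                    continue
                chars[i] = nt
                candidate = "".join(chars)
                if classify(candidate) != label:
                    return candidate  # adversarial example found
            chars[i] = original  # revert before trying the next position
        return None

Insertion/deletion and codon-level variants follow the same search pattern, with the edit operation swapped for an insert, a delete, or a synonymous-codon swap.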

This work matters for biology and healthcare as DNA language models increasingly support critical applications in genomics, disease diagnosis, and personalized medicine, where model reliability and security are essential.
