Hidden Threats in Code Comprehension

How imperceptible code manipulations can mislead LLMs while remaining invisible to human reviewers

This research reveals critical security vulnerabilities in how Large Language Models comprehend manipulated code that appears normal to humans.

  • Imperceptible attacks using hidden character manipulations can mislead LLMs while remaining undetectable to human reviewers (see the sketch after this list)
  • LLMs demonstrate significant vulnerability to adversarial code inputs despite their advanced capabilities
  • The findings highlight an urgent security gap between human and AI code perception
  • Researchers emphasize the need for robust defenses against these deceptive code inputs
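
The following is a minimal sketch of the kind of hidden-character manipulation described above, assuming a zero-width space (U+200B) injected into an identifier as one illustrative example; the snippet contents and the specific character choice are ours, not taken from the paper. Two pieces of code that look identical to a human reviewer are, at the character level, different inputs to an LLM.

    # Illustrative sketch (assumed example, not from the paper): an invisible
    # Unicode character makes two visually identical snippets differ, so a human
    # reviewer sees the same text while an LLM or tokenizer receives different input.

    ZERO_WIDTH_SPACE = "\u200b"  # renders as nothing in most editors and terminals

    clean = "def is_admin(user):\n    return user.role == 'admin'\n"

    # Inject the invisible character into the identifier "is_admin".
    poisoned = clean.replace("is_admin", "is" + ZERO_WIDTH_SPACE + "_admin")

    print(clean == poisoned)               # False: the underlying characters differ
    print(poisoned.splitlines()[0])        # looks identical to the clean version
    print(repr(poisoned.splitlines()[0]))  # repr() exposes the hidden \u200b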

For security professionals, this research underscores crucial considerations when deploying LLMs for code review, testing, or generation in production environments, as seemingly valid code may contain hidden manipulations that affect only AI systems.
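
As a defensive illustration (our assumption, not a mitigation evaluated in the paper), a simple pre-screening step can flag invisible or direction-altering Unicode characters before code reaches an LLM-based pipeline. The function name flag_hidden_characters and the character list below are illustrative choices.

    import unicodedata

    # Illustrative pre-filter (assumed defense, not from the paper): flag characters
    # that render invisibly or reorder text before code is passed to an LLM.
    SUSPICIOUS = {
        "\u200b", "\u200c", "\u200d", "\u2060", "\ufeff",   # zero-width characters
        "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",   # bidi embeddings/overrides
        "\u2066", "\u2067", "\u2068", "\u2069",             # bidi isolates
    }

    def flag_hidden_characters(source: str):
        """Return (line, column, codepoint, name) for each suspicious character."""
        findings = []
        for lineno, line in enumerate(source.splitlines(), start=1):
            for col, ch in enumerate(line, start=1):
                if ch in SUSPICIOUS:
                    findings.append(
                        (lineno, col, f"U+{ord(ch):04X}", unicodedata.name(ch, "UNKNOWN"))
                    )
        return findings

    # Example: the identifier below hides a zero-width space after "is".
    sample = "def is\u200b_admin(user):\n    return user.role == 'admin'\n"
    for finding in flag_hidden_characters(sample):
        print(finding)   # (1, 7, 'U+200B', 'ZERO WIDTH SPACE')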

What You See Is Not Always What You Get: An Empirical Study of Code Comprehension by Large Language Models