The Silent Censor

Uncovering how LLMs filter political information

This research reveals systematic patterns in how large language models selectively filter, or refuse outright to provide, information on political topics.

  • Identifies both hard censorship (outright refusals) and soft censorship (selective omission of information) in LLM responses
  • Documents significant variations in censorship practices across different LLM systems
  • Demonstrates that political topics receive inconsistent treatment compared with non-political queries

For security professionals, this work highlights critical transparency issues in AI information systems that may influence public discourse and decision-making without users' awareness.

What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices
