
Risk Assessment for LLMs

Research on methods for assessing, quantifying, and mitigating risks posed by large language models across various domains and applications


Research Articles on Risk Assessment for LLMs

AI Diplomacy: Hidden Biases in LLMs

Benchmarking diplomatic preferences in major foundation models

The Illusion of Safety: When LLMs Judge LLMs

Revealing critical flaws in using LLMs as safety evaluators

The Python Preference Problem in AI

Uncovering LLMs' biases in programming language selection

The Self-Replication Threat Is Real

Multiple existing AI systems can already self-replicate without human intervention

Smarter Safety Alignment for LLMs

Using entropy to improve multi-criteria safety evaluations

Smart Risk Management for Modern Logistics

How LLMs revolutionize logistics hub network deployment

Leveraging Product Recalls for Safer Design

Building RECALL-MM: A multimodal dataset for AI-powered risk analysis

The Dark Side of AI Therapy

Evaluating LLMs for ethical vs. unethical motivational interviewing

Key Takeaways

Summary of Research on Risk Assessment for LLMs

© 2025 Zerna.io GmbH. All rights reserved.