LLM Governance and Collective Decision Making
Research on using LLMs in governance contexts, voting systems, and collective decision-making processes while ensuring security and fairness

The Deception Risk in AI Alignment
How capable AI models can strategically deceive their supervisors

Resilient AI Voting Systems
How collective decision-making remains fair despite LLM biases

The Hidden Danger in LLM Alignment
How minimal data poisoning can compromise AI safety guardrails

Bridging the AI Regulation Gap
First technical interpretation framework for the EU AI Act

Safeguarding LLMs in Arabic Contexts
First comprehensive security evaluation dataset for Arabic language models

Industry Guidelines for Generative AI
First comprehensive dataset of corporate AI policies

When AI Agents Meet Game Theory
Exploring Cooperation in LLM Agent Systems

Open Source vs. Proprietary AI: Navigating the Future
Evaluating the security and engineering tradeoffs in LLM development approaches

Simulating Tax Evasion Emergence
Using Dual LLMs & Reinforcement Learning for Economic Security Research

Political Bias in AI Voting Tools
How LLMs exhibit concerning left-wing bias in election information

AGILE Index: Measuring Global AI Governance
A standardized framework for evaluating AI governance across nations

Sovereign LLMs: National Strategy & Control
Building secure, independent AI capabilities at the state level

The Media's Role in AI Governance
How media reporting shapes responsible AI development through game theory
