
Governance and Regulation
Managing Autonomous AI Risks and Opportunities
European Union – The EU AI Act
- Risk-based regulatory approach categorizing AI systems by potential harm
- Bans on "unacceptable risk" applications like social scoring and surveillance
- High-risk AI systems subject to strict controls and requirements
- Transparency obligations when AI interacts with humans
- Labeling requirements for AI-generated content (deepfakes)
- Requirements for general-purpose AI models (foundation models) starting August 2025
United States Approach
- Decentralized, sector-specific regulation rather than a single federal AI law
- Executive Order on Safe, Secure, and Trustworthy AI (2023)
- Developers of the most advanced models required to share safety test results with the federal government
- Federal agency guidelines for specific use cases like hiring and consumer protection
- Voluntary commitments from leading AI companies
- NIST AI Risk Management Framework providing voluntary guidance
Chinese Regulatory Framework
- Interim Measures for Generative AI Services (effective August 2023)
- Requirements to adhere to socialist core values and file algorithms with the Cyberspace Administration of China (CAC)
- Focus on content moderation and security
- Mandatory clear labeling of AI-generated content
- Balancing tight content control with support for rapid domestic AI development
- Encouraging indigenous innovation in AI algorithms and chips
International and Multilateral Efforts
- UN Secretary-General proposing an international AI governance body modeled on the IAEA
- UNESCO's Recommendation on the Ethics of Artificial Intelligence (2021) providing guiding principles
- G7 launching "Hiroshima AI Process" for governance coordination
- UK-hosted AI Safety Summit at Bletchley Park (2023) producing the 28-country Bletchley Declaration on managing frontier AI risks
- Development of technical standards and certifications through ISO/IEC (e.g., ISO/IEC 42001) and IEEE
- Increasing corporate focus on responsible AI development
"The period of 2025–2030 will determine how society harnesses AI agents: whether we do so responsibly and inclusively, or face backlash and mistrust."