WHO SHOULD CONTROL AI?

Governments?
Tech companies?
Or no one?

This week we look at how AI is actually governed today.

Not in theory, but in practice.

Three forces are shaping the rules right now:
- Governments trying to regulate AI
- Tech companies writing their own safety frameworks
- Incidents showing what goes wrong in practice

The uncomfortable truth:

AI governance is still being invented.

And nobody agrees on the rules.

SESSION OUTLINE

During this session we will:
1️⃣ Discuss real AI incidents
2️⃣ Examine frontier AI risks
3️⃣ Analyse current regulation (EU AI Act, SB-53)
4️⃣ Simulate an AI governance decision

The goal is simple:

Understand why governing AI is extremely difficult.

HOMEWORK

Please bring:

  • 1 AI incident

  • 1 plausible risk

  • 1 concern about AI governance

  • 1 opinion about current regulation

We will use these as the basis for discussion.

READING MATERIAL

📚 Pre-reading package — max 60-90 min total

1. Real AI Incidents (15 min)

OECD AI Incidents Monitor: watch + read 2-3 cases. Real incidents, no opinions.
Source: OECD AI Incidents Monitor (database)
Assignment 1: Choose one incident and answer:

  • What went wrong?

  • Was it misuse, accident, or misalignment?

2. Frontier risks & governance lens (15-20 min)

International AI Safety Report (Extended Summary for Policymakers). This is written specifically to brief policymakers; it is not academic material.
Source: International AI Safety Report (Extended Summary for Policymakers)
Assignment 2: Highlight 3 risks that you consider "already plausible."

3. Governance reality: company frameworks (20 min)

Scan one company safety framework, e.g. OpenAI's Preparedness Framework, Anthropic's Responsible Scaling Policy, or Google DeepMind's Frontier Safety Framework.

Assignment 3: What capability threshold should stop deployment?

4. Regulation in practice (20 min)

Skim a summary of either the EU AI Act or California's SB-53.
Assignment 4: Write:

  • 1 thing you like about current AI regulation

  • 1 concern about its effectiveness