IN THIS LESSON
Why are Global Powers Competing Over Advanced AI?
This week covers foundational concepts for understanding the global implications of advanced AI: its geopolitical stakes, its risks, and what drives AI progress.
-
The AI Triad and What It Means for National Security Strategy (CSET, 2020) (20 minutes)
Read the Executive Summary
Read Part 2
Introduces a framework for understanding the main inputs to machine learning, called the "AI Triad": algorithms, data, and compute. This framework will guide our tour of China's domestic AI development landscape.
Additionally, look over the following terms and definitions, as written by the AI education organization BlueDot Impact. You don't need to know these deeply, but they are common technical vocabulary in discussions of AI development, so some familiarity will help. (5 minutes)
Definitions:
FLOP: a single basic mathematical operation (like addition or multiplication) that a computer performs. The total number of FLOP is a measure of computational work done during AI training.
Compute efficiency is a measure of how “good” an AI model you get with a given financial investment.
E.g. if OpenAI spends $500M on compute to train their next AI model, how "good" will it be?
Hardware price-performance measures the amount of computational resources (FLOP) available for a given financial investment.
E.g. how many FLOP would OpenAI get with a $500M investment into NVIDIA H200 AI chips running for 1 month?
This is improving at ~1.4x/year (i.e. you get 1.4x more FLOP per $ each year) due to Moore's Law.
Algorithmic efficiency measures how “good” of an AI model you get with a given number of FLOP.
E.g. if OpenAI runs their NVIDIA H200 chips for 1 month training an AI model (~10^26 FLOP), how “good” will that AI model be?
This is improving at ~3x/year (i.e. you need 3x less FLOP to achieve the same capability each year).
Compute efficiency (Capability/$) = Hardware price-performance (FLOP/$) × Algorithmic efficiency (Capability/FLOP)
This is improving at ~4x/year (i.e. you need 4x less $ to achieve the same capability each year).
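The relationship above can be sketched numerically. A minimal Python illustration using the approximate growth rates quoted in the definitions (the variable names and the two-year projection are illustrative, not from the source):

```python
# Rough trend rates quoted in the definitions above (approximate, not exact).
HARDWARE_GROWTH = 1.4   # hardware price-performance: FLOP per $ gained per year
ALGO_GROWTH = 3.0       # algorithmic efficiency: capability per FLOP gained per year

# Compute efficiency (capability per $) compounds multiplicatively:
compute_efficiency_growth = HARDWARE_GROWTH * ALGO_GROWTH  # ~4.2x/year

def cost_reduction(years: float) -> float:
    """Factor by which the $ cost of a fixed capability falls after `years`."""
    return compute_efficiency_growth ** years

print(f"Compute efficiency growth: ~{compute_efficiency_growth:.1f}x/year")
print(f"Same capability after 2 years costs ~1/{cost_reduction(2):.0f} as much")
```

This is why the text rounds the combined rate to ~4x/year: 1.4 × 3 ≈ 4.2, and the two factors multiply because one improves FLOP per dollar while the other improves capability per FLOP.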
-
International AI Governance Context (Saad Siddiqui, 2025) (20 minutes)
Read Section A: AI risks and governance lenses
This document introduces some key concepts in the geopolitics of advanced AI. (It was originally created as introductory material for research fellows studying US-China international agreements on advanced AI systems.)
The Artificial General Intelligence Race and International Security (RAND, 2025) (15 minutes)
Read Chapter 1: “Introduction”
This publication examines the dynamics of the AGI race and its potential implications for international security and stability.
-
Strategic Visions in AI Governance: Mapping Pathways to Victory (IAPS, 2026) (30 minutes)
Read Executive Summary
Read A. Introduction
Read C. Policy Implications of Strategic Visions
What AI policy objectives should we work towards? This depends greatly on one’s strategic vision. Strategic visions are high-level views about how to successfully navigate the transition to a world with powerful AI systems. The goal of this report is to help policy entrepreneurs choose which strategic visions to embrace and identify policy objectives that are robust to uncertainty about which visions are best.
International AI projects and differential AI development (Forethought, 2026) (15 minutes)
Argues that international AI governance proposals should be more surgical, focusing on limiting the most dangerous capabilities while actively encouraging the most helpful ones.