IN THIS LESSON
-
What risks does AI pose? (BlueDot Impact, 2024) (15 minutes)
A list of some of the risks that AI could pose, prepared by BlueDot Impact, an AI Safety education organization.
(Optional) You may also want to review some readings from week 2, such as:
The Artificial General Intelligence Race and International Security (RAND, 2025)
AI Risks & Governance Lenses (Saad Siddiqui, 2025)
-
AI Safety in China: 2024 in Review (Concordia, 2025) (10 minutes)
“This review covers five areas corresponding to our previous comprehensive reports on the State of AI Safety in China in October 2023 and May 2024: domestic governance, international governance, technical research, expert views, and corporate governance.”
China's AI Safety Evaluations Ecosystem (Concordia AI, 2024) (20 minutes)
This report describes “requirements around AI safety evaluations in Chinese AI governance” and the several types of safety evaluations that Chinese labs and researchers conduct.
-
China’s Views on AI Safety Are Changing—Quickly (Carnegie Endowment for International Peace, 2024) (15 minutes)
This brief interview with Matt Sheehan, a Fellow at the Carnegie Endowment for International Peace specializing in China’s AI safety and governance, discusses China’s AI industry and the PRC’s current positions on AI safety and deepfakes.
From the Executive Summary:
“The emergence of the China AI Safety and Development Association (CnAISDA) is a pivotal moment for China’s frontier AI governance… Despite its potential importance, little has been publicly reported on CnAISDA. What is it? How did it come about? And what does it signal about the direction of Chinese AI policy more broadly? This paper provides the first comprehensive analysis of these questions.”
China's Military AI Roadblocks (CSET, 2024) (10 minutes)
Read the Executive Summary, the Introduction, and the section “Chinese Experts’ Views of AI-Enabled Warfare.”
This paper explores Chinese views of, and subsequent actions on, one specific cluster of risks. Chinese experts “express concerns about the risks of outbreaks or escalations of wars, civilian deaths, and friendly force targeting by AI-enabled military systems due to insufficiently trustworthy AI… Despite these concerns, they appear to favor developing next generation military capabilities, as their successful operationalization would provide the PLA with its best chance at triumphing over adversaries in future combat.”
-