Extreme AI Risk W05

In this week's newsletter, we explore the alignment of modern large models and examine criticisms of extreme AI risk arguments.

Join the Alignment Jam hackathon this weekend to get experience in doing ML safety research! https://ais.pub/scale
