About This Quiz

Most people have heard about AI, but few understand the safety concerns experts are raising, such as whether AI systems will stay under human control as they become more powerful. Without grasping these risks and what can be done about them, people can't make informed decisions about AI's role in society or consider how they might help make AI development safer.

This quiz helps you discover your perspective on AI safety issues and connects you to resources for learning more. It was created by Helen King as part of the AI Safety Collab - Summer 2025 Program, which helps people transition from learning about AI safety to taking meaningful action.

What's AI Safety? AI safety is the effort to ensure that advanced AI systems don't cause catastrophic harm, remain aligned with human values, and don't pose existential risks to humanity. Some people are so concerned about these risks that they have even gone on hunger strikes to protest AI development.

Want to Learn More?

Key reads:

  • If Anyone Builds It, Everyone Dies (2025) by Nate Soares and Eliezer Yudkowsky is the latest introduction to AI safety from the Machine Intelligence Research Institute (MIRI), a nonprofit focused on ensuring AI remains beneficial.

  • Superintelligence (2014) by Nick Bostrom was the first book to make a broad public case for AI safety concerns. Though written before today's AI systems, its arguments remain influential.

  • Uncontrollable (2023) by Darren McKee is an introduction to AI safety written for a general audience. The book covers how powerful AI systems might become, why superhuman AI might be dangerous, and what we can do to prevent AI from killing billions.

Back to Quiz