💻

AI risk

Priority ranking
Understanding
Related to Readings and resources database (Cause Area)
to investigate

Resources to check out

Positively shaping the development of artificial intelligence → 80,000 Hours Problem Profile ~ 30 min
Notes on AI in this curriculum → Short Note ~ 2 min
What Do We Know about AI Timelines? → Article ~ 15 min An Open Philanthropy article about the current thinking on AI timelines. The article was written to inform OpenPhil's cause prioritisation.
The Case for Taking AI Seriously → Article ~ 40 min A Vox Future Perfect article laying out the authors' case for taking AI seriously. It defines AI, outlines the risks (from a Yudkowsky-Bostrom perspective), and summarises some of the current thinking about the problem.
Potential Risks from Advanced AI → EAG Talk ~ 30 min "In this 2017 talk, Daniel Dewey presents the Open Philanthropy Project's work and thinking on advanced artificial intelligence. He also gives an overview of the field, distinguishing between strategic risks (related to how influential actors will react to the rise of advanced AI systems) and misalignment risks (related to whether AI systems will reliably do what we want them to do)."

Notes

Cause Prioritization considerations

  • What causes is this cause more important than?
  • What causes could be more important than this?
  • How does this interact with other causes?

What this cause area most needs

Other notes