Existential Risk from Artificial Intelligence

Questions to consider

  • Do you believe that advanced AI poses an existential risk?
    • What are the core pieces of evidence/information/intuition that inform your overall opinion?
    • These “core pieces” of information are also known as “cruxes”. A crux is any fact such that, if you believed differently about it, you would change your overall conclusion.
  • How soon do you think we will achieve transformative AI, if ever? Why – what pieces of evidence/information/intuition are you using?
    • “Transformative AI, which we have roughly and conceptually defined as AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.” – Holden Karnofsky (Open Philanthropy, 2016)
    • Perhaps consider forecasting work on AI timelines.
  • Concretely, what actions (if any) should humanity be taking with respect to potential risks from advanced AI? If you find this problem important, what actions would you take, and why?

Resources

If you’re interested in learning more about technical AI safety or AI governance, probably the single most useful resource is the AISF (AI Safety Fundamentals) Program. You can read through the (technical) AI alignment curriculum or the AI governance curriculum on your own, or ideally apply to the program to be in one of the reading group cohorts!

👉
Thus, the recommended reading for this week is to look through the AISF alignment curriculum or the AISF governance curriculum, see what’s covered each week, and pick 3+ readings that interest you.

Prioritize readings from the curricula that interest you. Here are some example readings (drawn from roughly Week 1 of the AISF curriculum), which may or may not appear in the current versions of the curricula:

If you already know a lot about potential risks from advanced AI, here are some next steps for getting involved: