This week presents the case for existential risk reduction as an important cause area and introduces participants to some of the key anthropogenic risks. While existential risk reduction is a key example when discussing longtermism, this week focuses on it as an independent cause area.
- The importance, tractability, and neglectedness (ITN) framework: The most important problems generally affect a lot of people, can be meaningfully improved with a small amount of work, and are relatively under-invested in.
- Thinking on the margin: If you're donating $1, you should give that extra $1 to the intervention that can most cost-effectively improve the world. Many great initiatives have a very high average impact per dollar but a low marginal impact, because they can't maintain the same efficiency at scale (they display "diminishing marginal returns").
- Crucial considerations: It can be extremely hard to figure out whether some action helps your goal or causes harm, particularly if you’re trying to influence complex social systems or the long term. This is part of why it can make sense to do a lot of analysis of the interventions you’re considering.
- Understand the definitions of existential catastrophe and existential risk, and what does and doesn’t count as each.
- Anthropogenic existential risk over the next two centuries may constitute a significant risk to humanity.
- The cause of reducing existential risks is very neglected and is undervalued by markets, nations, and single generations since existential security is an intergenerational global public good.
- Future pandemics, particularly those manufactured by humans, could be much worse than COVID-19, and there are things we can do to prepare.
Tips for this Session
Acknowledge at the beginning of the session that this topic can be upsetting and difficult to talk about, but that its importance is why we dedicate time to it.
We recommend that you raise considerations around information hazards in this section, as the topic is particularly salient for biosecurity conversations. Review this typology of info hazards before this week’s session. We also recommend that you invite participants to check out the supplementary readings for more on biosecurity, and especially the articles that discuss information hazards. Without mindfulness of info hazards, it is all too easy to shoot oneself in the foot, causing disproportionate harm even when acting with good intentions.
During the Session
Defining existential risk
- Can someone give a definition of existential catastrophe and of existential risk?
- Other than extinction, what other kinds of existential risk might there be?
Importance of reducing existential risk
- We’ve been talking about the idea that we and everyone we know could die in a catastrophe. That’s pretty intense. How do you feel about it? What are your emotional reactions?
- Can anyone explain Toby Ord’s reasoning about why an existential catastrophe is so much worse than other global catastrophes?
- What do you think about Toby Ord’s estimates of existential risk? What are the implications of existential risk being high or low?
- Do you agree with Toby Ord that existential risks are neglected by society?
- What are the best arguments that we shouldn’t be too worried about existential risk?
- Do you think applying the ITN framework to existential risk suggests that it is a cause area worth focusing on?
- From the standpoint of catastrophic risks, what factors might make biological weapons more concerning than nuclear and chemical weapons? What factors might make them less concerning?
- We are living through the worst pandemic in a century. Do you think we are more or less at risk of a future catastrophic pandemic now than we were in 2019? Why?
- What jumps out to you as the most intriguing or potentially promising interventions for reducing biological risk?
- What makes interventions in GCBRs (Global Catastrophic Biological Risks) neglected in comparison to other global health interventions? What implications might this have for the work we choose to prioritize?
- Do you think there are other existential risks that we haven’t discussed?
- Did anyone look into the history of close calls from nuclear weapons? How did you feel when reading it?
- Should we consider doing things that could do a huge amount of good, but don’t have lots of supporting evidence, or should we do things that we have strong evidence do a lesser amount of good? How should we decide?