Differences in Impact

This session continues in a similar vein to week 1, but with more focus on methods of measurement and expected value. We hope that, by the end of this week, participants will understand that it is possible in principle (but hard!) to quantify good and adjudicate between causes.

Key Principles

  • Differences in impact: It appears that some of our options to help do many times more good than others. People generally don’t appreciate this, and so miss out on significant opportunities to help.
  • Fermi estimates: When you’re trying to make a decision, it can be useful to make a rough calculation for which option is best. Even if there’s a lot of uncertainty, this can give you a rough answer and can tell you which things are most important to estimate next.
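As a sketch of what such a back-of-the-envelope calculation can look like, here is a minimal Fermi estimate in Python. All figures (salary, career length, bednet cost) are hypothetical placeholders chosen for illustration, not real data:

```python
# Fermi estimate: how many bednets could someone fund by donating
# 10% of lifetime income? All inputs are rough placeholder guesses.
annual_income = 50_000    # assumed average salary in dollars
working_years = 40        # assumed career length
donation_rate = 0.10      # donating 10% of income
cost_per_bednet = 5       # assumed cost of one insecticide-treated net

lifetime_donations = annual_income * working_years * donation_rate
bednets_funded = lifetime_donations / cost_per_bednet

print(f"Lifetime donations: ${lifetime_donations:,.0f}")
print(f"Bednets funded: {bednets_funded:,.0f}")
```

Even with inputs this crude, the estimate makes clear which inputs matter most (here, income and donation rate), which is exactly the point of a Fermi estimate.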

Key Points

  • We can use quantitative estimates to reduce uncertainty in order to make better decisions.
  • (Through discussion) Quantification is a complement to, not a replacement for, other decision-making heuristics.
  • (Through discussion) We should use quantification in cases where the scale of impact may vary a lot, but may not be tracked by our intuitive judgements.
  • (Through the exercise) Participants practise generating quantitative estimates and comparing outcomes.

Tips for this session

Note that this week will often involve discussion of income, which can be a sensitive topic, and should be handled carefully.

QALYs are a great example of a way that we can begin including quantification in our decisions about how best to help, but it’s important that moderators realise they are far from a perfect metric, for reasons including:

  • QALYs are a measure of health, and as such, can only be said to correlate with individual welfare. They will correlate less, or not at all, with other things we might care about intrinsically, like the environment or justice
  • Their weightings are also informed by people's expectations of how bad certain outcomes would be, rather than by sampling their experience in the moment
  • Beyond this, reducing a complex phenomenon to a single metric will necessarily lose information, so it’s important to acknowledge that QALYs are a useful tool that informs a decision rather than resolving it outright.
  • See here for more discussion, and here for a discussion of what different measures of near-term welfare might imply working on.
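To make the metric concrete for facilitators: a QALY combines life-years with a 0–1 quality weight (0 = death, 1 = full health). The sketch below uses entirely hypothetical interventions, quality weights, and costs for illustration only:

```python
# Sketch of a QALY comparison between two hypothetical interventions.
# Quality weights and costs are illustrative, not real data.

def qalys(years_gained, quality_weight):
    """QALYs = life-years gained x quality weight of those years."""
    return years_gained * quality_weight

# Hypothetical intervention A: 10 extra years at 0.6 quality -> 6.0 QALYs
# Hypothetical intervention B: 5 extra years at 0.9 quality -> 4.5 QALYs
qalys_a = qalys(10, 0.6)
qalys_b = qalys(5, 0.9)

# Dividing by (made-up) cost gives a cost-effectiveness comparison.
cost_a, cost_b = 3_000, 1_000
print(f"A: {qalys_a / cost_a:.4f} QALYs per dollar")
print(f"B: {qalys_b / cost_b:.4f} QALYs per dollar")
```

Note how B delivers fewer total QALYs but more QALYs per dollar, which is why cost-effectiveness, not raw impact, is the relevant comparison.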

During the Session

In-Session Exercise (10 mins)

Split into groups of 2-4 and go through the following thought experiment:

There’s a burning building with 500 people trapped inside. Suppose you only have enough resources to implement one of the following two options:

  1. Get half of the cohort to decide between:
    a. Save 400 lives, with certainty.
    b. Save all 500 lives with 90% probability; save no lives with 10% probability.
  2. Get the other half to decide between:
    a. 100 people die, with certainty.
    b. 90% chance no one dies; 10% chance all 500 people die.

Point out that people often pick 1a) and 2b), even though 1 and 2 describe the same gamble. Discuss.

Facilitator Notes

Questions 1 and 2 involve exactly the same numbers and scenario, just phrased differently.

When you give people phrasing 1, they tend to pick option a: the certainty of saving 400 lives is appealing, and risking everyone dying feels unacceptable.

When you give people phrasing 2, they tend to pick option b: having 100 people die with certainty feels unacceptable.

The fact that our intuitions can be fooled by the phrasing of a question is a good reason to do explicit expected value calculations rather than rely on gut-level risk aversion.
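The point can be made explicit with a quick expected-value calculation over the four options from the exercise (a sketch; the probabilities and counts come straight from the scenario above):

```python
# Expected number of lives saved under each option, showing that
# the two framings of the burning-building gamble are identical.
TOTAL = 500

ev_1a = 400                      # save 400 with certainty
ev_1b = 0.9 * TOTAL + 0.1 * 0    # 90% save all 500, 10% save none
ev_2a = TOTAL - 100              # 100 die with certainty -> 400 saved
ev_2b = 0.9 * TOTAL + 0.1 * 0    # 90% no one dies, 10% all 500 die

assert ev_1a == ev_2a and ev_1b == ev_2b  # same gamble, reframed
print(ev_1a, ev_1b)  # the risky option has the higher expected value
```

Facilitators can show this on a whiteboard: the certain options save 400 lives in expectation, the risky options 450, regardless of framing.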

Another interesting question to raise: which option would you prefer if you were one of the people in danger of dying? (Presumably option b, since it gives you the best chance of survival.)

Discussion questions

Discuss the exercise

  • How did everyone find the exercise?
  • If you feel comfortable sharing, what did you estimate your total future income to be? Did this seem surprisingly high or low?
  • What did you work out you could achieve with 10% of your future income? Did this seem like more or less than you would have expected?
    • You may want to compare this to the drowning child/burning building thought experiment.
  • Which charity did you pick to donate to? Why did the outcomes of donating to that charity seem more valuable than those of the other charities? Did you find it hard to choose between different outcomes?

Benefits of cost-effectiveness estimates

  • How can we go about comparing different interventions/cause areas? Can quantitative estimates of impact be useful even if they’re imprecise?
    • How useful are QALYs? What are the benefits? What are the potential drawbacks?
    • How useful is considering importance, tractability, and neglectedness?
    • How useful is expected value (EV)? What are the benefits? What are the potential drawbacks?

Limitations of cost-effectiveness estimates

  • What types of outcomes are particularly hard to measure (or even impossible)? How should we treat such outcomes?
  • What kinds of problems can we run into when we try to quantify cost-effectiveness? Are there important features of an intervention that such estimates fail to capture? Which?
    • Things to draw out in the discussion:
      • It is particularly important to use explicit quantification in cases where the scale of impact might vary a lot but may not be well tracked by our intuitive judgements
      • Explicit quantification is a complement to, not a replacement for, other decision-making heuristics

Application: Donation decisions

  • Discuss the approaches of GiveWell vs. Open Philanthropy:
    • Can anyone summarise the difference between the approaches of GiveWell and Open Philanthropy?
    • If an intervention is not backed by strong evidence, what are some reasons that it might still be worthwhile to pursue it?