This week focuses on transformative artificial intelligence and the risks it poses. If transformative AI is developed this century, it could come to make many significant decisions on our behalf, rapidly accelerate changes such as economic growth, and pose existential risks to the future of humanity.
This week you'll also practice the skill of calibration, aiming for the point where, when you say that something is 60% likely, it actually happens about 60% of the time. This is important for making good judgements under uncertainty.
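To make that concrete, here is a minimal Python sketch of how you could check your own calibration from a log of past predictions. The `calibration_report` function, the sample data, and the 10-percentage-point buckets are illustrative assumptions, not something from this week's readings.

```python
def calibration_report(predictions):
    """predictions: list of (stated probability, outcome) pairs,
    with probability in [0, 1] and outcome True/False."""
    buckets = {}  # bucket lower bound -> (hits, total)
    for prob, outcome in predictions:
        lower = min(int(prob * 10), 9) / 10  # e.g. 0.63 falls in the 0.6 bucket
        hits, total = buckets.get(lower, (0, 0))
        buckets[lower] = (hits + outcome, total + 1)
    for lower in sorted(buckets):
        hits, total = buckets[lower]
        print(f"Said {lower:.0%}-{lower + 0.1:.0%}: happened "
              f"{hits / total:.0%} of the time ({total} predictions)")

# A well-calibrated forecaster's "60%" claims come true about 60% of the time.
calibration_report([(0.6, True), (0.6, False), (0.6, True),
                    (0.9, True), (0.9, True), (0.3, False)])
```

Keeping a log like this for a few weeks is usually enough to see whether your "60%" really means 60%.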
Key Points
- Transformative artificial intelligence may well be developed this century.
- Transformative AI would be a massive deal and could pose significant existential risks.
- Calibration, the skill of having things you call 60% likely happen about 60% of the time, can be practiced, and it is important for making good judgements under uncertainty.
During the Session
Icebreaker (10 mins)
- In groups of 2 to 4, people share their rose, bud, and thorn for the past week
- Rose = A success, a small win, or something positive that happened.
- Bud = Something you are looking forward to experiencing or learning more about.
- Thorn = A challenge you experienced.
Discussion Questions
AI and transformative events
- Were there any readings, or arguments within the readings, that presented the case for taking AI safety seriously especially clearly? Could you summarize the argument(s)?
- What do you think are the biggest cruxes for deciding how to approach topics around AI safety?
- What was most surprising to you about the readings this week? What was the most banal?
- What did you find the most challenging or unconvincing about the arguments that AI safety is a crucial global problem worth working on?
Bayes' Rule
- How would applying Bayes' Rule to an analysis of the world's problems change where you think you could make the biggest impact, if at all?
- What's something that you feel very unsure about that is important for your life or your priorities? How would applying Bayes' Rule to this issue change your process for thinking it through? (A short refresher on the rule follows below.)
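For reference, Bayes' Rule tells you how to update a prior belief $P(H)$ in a hypothesis $H$ after seeing evidence $E$. The numbers in the worked step below are purely illustrative, not from the readings:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
% Illustrative numbers: P(H) = 0.2, P(E \mid H) = 0.9, P(E \mid \neg H) = 0.3
P(H \mid E) = \frac{0.9 \times 0.2}{0.9 \times 0.2 + 0.3 \times 0.8}
            = \frac{0.18}{0.42} \approx 0.43
```

In words: evidence that is three times likelier if the hypothesis is true than if it is false moves a 20% prior belief up to roughly 43%.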
Suffering Risks
- What's the difference between an x-risk and an s-risk?
- Do you think avoiding suffering is more or less important than protecting the potential for a great future?