Tips to Help Your Conversation Go Well
Lead by Example
Actions speak louder than words. This isn’t so much a conversational tip as a general tip for influencing people: the best way to influence those around you is to let them see that you are passionate about the way you donate your time or money, and that doing so is making the world a better place.
Define Effective Altruism
Have a simple, uncontroversial description of what effective altruism is (ideally one that the person is unlikely to disagree with) to start the conversation clearly and positively.
E.g. “Effective altruism is applying evidence and analysis to the goal of making the world a better place.” In a less formal context, such as a social gathering, something lighter can work, like “working out how to be better at being less awful human beings”.
Several other suggestions are listed here.
You may wish to explain the term “effective altruism”: the word “altruism” may need explaining, and the term can come across as insinuating that other ways of being altruistic are ineffective or useless. EA really is about seeking the most effective ways of doing good, so it may be worth clarifying this.
Avoid Jargon
When you're explaining new concepts, be mindful of the language you're using and how accessible it is to people unfamiliar with EA. Michael Aird includes some suggestions around jargon in this EA Forum post.
Use Personal Stories
If you can, tie your own personal story of exploring these ideas into the narrative. How did you come across EA ideas? What inspired you? What things do you do? What changed you? Talking about your experiences can also reduce the chance of coming across as judgemental towards the other person.
Use the Person’s Interests
Relate to the person’s interests, beliefs and goals, if you know enough to do so.
Most people agree on basic moral principles. For instance, most people agree that suffering is bad, so it is easier to start from this point.
Show how EA can help achieve their goals. E.g. EA can help provide a community, a sense of purpose, clarity on career paths, volunteering opportunities.
If you already know what issues the person cares about (or if they freely offer this information during the conversation), it might work well to discuss how EA principles could work within that cause area.
However, if you don’t know what the person cares about, and they simply asked you about EA, then it probably isn’t appropriate to ask them 20 questions to find out their position on morality and world issues before you start explaining EA! In that case you’d do better to simply answer the question, and wait until a bit later in the conversation to ask what they are interested in.
Highlight the Process of EA
It is easy for EAs to come across sounding like they think they know everything. By emphasising EA as a question or an ongoing process to work out the most impactful things to do to improve the world, you are less likely to sound arrogant.
Part of EA is about being curious about the world, and being able to change our beliefs based on evidence and arguments. This can be demonstrated in conversation by showing curiosity towards the other person’s thoughts and beliefs.
This is an aspirational goal, but if you have the knowledge to do this, you could try to use EA principles to offer “insightful, nuanced, productive takes on lots and lots of other things so that people … can see how effective altruists have tools for answering questions.” (Kelsey Piper, 80,000 Hours podcast)
Use Caution When Discussing Morality
Members of the EA community are split on whether they feel they have an obligation to do good, or whether it is just an opportunity (or neither). Since framing EA as an obligation can seem judgemental towards others, it may be more effective to frame effective altruism as an opportunity.
However, arguments based on obligations such as Peter Singer’s drowning child thought experiment can be extremely compelling to some people, so you shouldn’t shy away from using such arguments entirely, especially if you yourself find this framing compelling and have good reason to believe your audience will too.
It is sometimes useful to state your moral beliefs as assumptions before you outline any recommendations. E.g. You may say something like “I believe people are equally worthy of help, no matter where they live” before explaining why you donate to global poverty charities, or “I believe that the extinction of the human race would be the greatest tragedy” before talking about how you are interested in existential risks. This can pre-empt potential criticisms if the person you are talking to doesn’t share the same beliefs as you.
Remember that EAs and EA orgs use moral judgements to decide what causes/careers/charities are the best. For example, GiveWell uses their moral beliefs to inform their cost-effectiveness analyses. A person with different moral beliefs from GiveWell, using the same evidence, could reasonably disagree with GiveWell’s top charity list.
It may be best to avoid talking about utilitarianism (the moral philosophy that states that we should act to maximise wellbeing for the greatest number of people). Many people find utilitarianism off-putting, and it isn’t a necessary part of EA. Of course if you are talking to someone who is interested in having a deep conversation about moral philosophy, then go for it, but clarify that there are a variety of moral positions within the EA community so that the person you are talking to (or any listeners) realise your position isn’t necessarily what everyone in the community believes.
Consider That Your Beliefs Might Be Uncommon
This section mainly applies to people who spend a lot of their time thinking about EA. If your brain lives in EA land, it can be easy to forget how uncommon many EA ideas are. It is worth remembering which beliefs are unusual, so that you don’t assume the other person already agrees with you.
Avoid jargon and acronyms.
It is common for people to believe we have significantly more obligation to help those near to us than those far away.
It is also common for people to care more about justice, equality or fixing problems that humans have created, than maximising welfare. Note that EA can still have a lot to offer people with these beliefs, such as finding highly effective ways to prevent climate change or to empower women.
Some people, especially those concerned with environmental destruction, think humans are doing more harm than good to the world, and so don’t consider human extinction all that bad.
There is a huge range of beliefs about the efficacy of charity. Some people seem to think charity is always good, and others seem to think all charity is harmful.
Some people are very concerned with overpopulation, and so don’t believe that saving lives is an obvious good. However, there is a good chance global health and poverty charities reduce population in the longer term.
Many people think that it is very cheap to save someone’s life in a developing country, so may not find GiveWell’s cost-effectiveness numbers impressive. This is probably due to ads such as these, where charities claim to save lives for tiny sums. See Spencer Greenberg’s study on this topic.
There is a wide range in people’s estimates of the chances of human extinction. Greenberg’s study on this suggests that Mechanical Turk workers give a much smaller chance of human extinction in the next 50 years than EAs do (0.00001% vs 1%). So your idea of “a low chance” might be very different from what the person you are talking to thinks. Conversely, Students for High-Impact Charity presenters found that high school students often gave very high estimates (answers for the chance of extinction by 2100 ranged from 0% to 100%, and in most classes the median was over 20%).
Not everyone’s utopia looks the same, and scenarios of the future can often sound like science fiction. Holden Karnofsky discusses a study he did on this topic at the end of this 80,000 Hours podcast, and suggests focusing on the possibility of eliminating negative things e.g. a world with no disease, no hunger and where people have freedom, rather than specifics of your ideal vision of the future.
Most people (in the US at least) recognise that farmed animals are capable of suffering and that it’s bad to hurt them, but many are unsure or unaware of how much suffering is actually occurring on these farms. See Greenberg’s study on this. This evidence suggests it might be better to focus on the conditions in most farms rather than on the sentience of animals.
Explaining Why an Action Isn’t Prioritised By EA
Often in conversation, people bring up issues or actions that are important to them and ask what you think of them, or what the general EA take on the matter would be. If you don’t think an issue is worth prioritising, explaining why some cause areas or actions aren’t common EA causes can improve understanding and avoid the impression that EA is about doing just any altruistic act. However, this comes with the downside of potentially insinuating that your conversation partner is unintelligent or has wasted their time. This tension may be unresolvable, but you might be able to minimise it in the following ways:
Ask for more information about the cause or action.
Recognise the tragedy of the issue, or the value of their suggested action.
Highlight that there is so much tragedy in the world and so many different actions we can take, but that we regrettably can’t do everything.
Explain why you think other cause areas or actions might be more important.
Explain that deciding what to work on is ongoing, so your decision might change.
Point out that it’s great that they want to help and do good; don’t undervalue the fact that they are one of the people who care, when so many don’t.
Preventing “Bait and Switch”
Discussing lesser-known cause areas such as AI safety or wild animal suffering can elicit skepticism, and you may not have the time to thoroughly explain the issue, or for the other person to process the information. It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes might be misleading: a person could become interested through evidence-based global poverty interventions, and then feel misled at an EA event mostly discussing highly speculative research into a cause area they don’t understand or care about. This can feel like a “bait and switch”: they are baited with something they care about and then the conversation is switched to another area.
One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue. It probably isn’t useful to advocate for effective altruism community building though, as this makes it seem like EAs are self-recommending, unless the person you are talking to is already committed to EA.
Also note that global poverty was the most popular cause selected by EA survey respondents in 2020, so focusing on this cause area does not have to be misleading. However, many leaders of EA organisations are more focused on community building and the long-term future than on animal advocacy and global poverty.
Don’t Push It!
If someone seems uncomfortable, defensive, or just looks a bit bored, it might be time to move on to another topic entirely.
Remember that a lot of people haven’t given much thought to these topics, so try not to overload a person with too many new ideas at once. If you have the luxury of talking to people over several weeks or months, it might be good to start with one or two ideas, and bring up more at a later date if you think they’d be receptive.
There are a lot of cognitive and emotional challenges to accepting EA ideas, so even if we were able to present a watertight argument for EA, it might not be accepted. Geoffrey Miller describes ten barriers to accepting EA ideas in a slideshow here.