Written in a bit of a rush, but I think that captures how it felt to be me in the throes of epistemic upheaval.
Aaron Gertler
First story
I read the newspaper every morning from age ~11 to age ~18. Over time, I spent less time on local sports + comics and more time on national news + op-eds. Around age 15, I added a bunch of blogs and news sites to my daily routine.
My impression: Everything was on fire. Suffering was everywhere. Systems couldn’t be trusted. Lone heroes (or small groups, at best) would bring change to small pieces of the world, but the everything-is-on-fire-ness of the situation didn’t seem to change.
Meanwhile, every issue seemed to be roughly equally important. They all got stories of about the same length, headlines with the same big letters, and lots of people yelling on both sides. What was I supposed to work on?
I kept a journal for “problems I want to solve someday”, and it got longer and longer. I had realistic expectations — maybe I’d be able to make a small dent in one of the problems, and be one of the lone heroes in the news. But it all felt so minor. When I graduated from high school, I was a very cynical person.
Then, as a college freshman, I read Harry Potter and the Methods of Rationality, which opened my eyes to a new way of thinking about the world — as something that could actually be changed, drastically, for the better, by just a few people who focused on the right problems. I became less cynical. But I still read the same news sites as I had before, and got a similar perspective from my college coursework, and kept recording issues in my journal…
…until, as a sophomore, I read “Privileging the Question”, and it spun my head around.
I’d never really considered that the media’s portrayal of reality could be systematically different from the actual shape of reality. Sure, I knew that media could be biased — but I’d never considered that media from different “sides” could be uniformly pushing me to focus on a tiny set of contentious, controversial issues that would make me click, rather than the issues that actually affected the most people, or posed the greatest risk to our future.
This article drove me to start reading more LessWrong, in hopes of figuring out which questions were actually most important to think about. I soon found GiveWell, and effective altruism, and realized that my “problems I want to solve someday” journal wasn’t actually going to guide my future. I was very happy about this, because it meant that a huge number of smart people had spent years systematically thinking about something I’d only ever dabbled in.
LessWrong and GiveWell inspired me to think more clearly, and to prioritize, prioritize, prioritize.
(Sadly, I don’t remember other specific articles from that era that had really strong effects on me. But I do think that Rationality: From AI to Zombies is worth reading despite the length. It really held up when I revisited the material in 2017.)
Second story
Here’s another story, from a few years later. It involves another epistemic mistake, showing that my college epiphany hadn’t made me perfectly rational.
After I graduated from college, I took the highest-paying job I could find, at a software company in a very cheap city. I wanted to save money so I could be flexible later. So far, so good.
I started an EA group at the company, which kept me thinking about effective altruism on a regular basis even without my college group. It wasn’t nearly as fun to run as the college group — people who work full-time jobs are hard to convince to come to meetings, and my co-organizers kept getting other jobs and leaving. But I still felt like “part of EA”.
Eventually, I decided to move on from the company. So I applied to GiveWell, got to the very last step of the application process… and got rejected.
Well, I thought, I guess it makes sense that I’m not qualified for an EA job. My grades weren’t great, and I was never a big researcher in college. Time to figure out something else to do.
(Notice the mistake yet?)
I moved to San Diego, where my soon-to-be-wife was living. I spent the next 18 months as a freelance tutor and writer, feeling generally dissatisfied with my life. My city’s EA group tended to meet far away, I didn’t have a car, I was busy with family stuff, and I gradually became less and less engaged with EA.
Through an old connection, I was introduced to a couple who ran a private EA-aligned foundation and lived in my city. I ended up doing some part-time operations work for them. This involved a lot of conversations with different effective charities, including GiveDirectly and GiveWell. And I did a bit of research.
This boosted my confidence, though I kept running into limitations — GiveDirectly’s CEO wanted to hire a research assistant for his lab at UCSD, but I’d totally forgotten my old R classes and wasn’t a good candidate, despite having a great connection from my operations work. There goes maybe the best opportunity I’m ever going to get as a washed-up 24-year-old. Oh, well.
In early 2018, I got an email from someone at Open Philanthropy, inviting me to apply for a new research position. I was excited by the sudden opportunity and threw everything I had into the process. I made it to the last step… and got rejected. (Turns out they asked hundreds of people to apply, but I wasn’t aware of that yet.)
Well, I thought, I guess it makes sense that I’m still not qualified for an EA job. I’m not a kid with limitless potential anymore. I haven’t learned anything important since college. I guess it’s back to finding a coding bootcamp and trying to get a “real job”.
(How about now? Is the mistake beginning to stand out?)
I was barely engaged in EA at all at this point. But I did happen to see an 80,000 Hours page with a survey for “people interested in operations”. It only took a few minutes to fill out, so I went ahead with it — not expecting it to lead anywhere.
I got an email soon after from Open Philanthropy’s head of operations, inviting me to apply for an ops position. I did the work test, but they hired someone before I’d made it deep into the process. Still, they were happy enough that they referred me for something I hadn’t known existed — a Centre for Effective Altruism workshop for people seeking EA ops positions. It was literally weeks away when I found out, but I had no plans (hooray for freelancing?) and was able to drop everything for a multi-day event in the Bay Area.
The event changed everything. I met lots of people at EA orgs, had a coaching session with an 80K advisor, and was told about lots of job opportunities I hadn’t known existed. (I didn’t know about the 80,000 Hours job board at that point.)
Over the next six months, I got pulled in for contract work at multiple CFAR workshops and applied to seven or eight jobs. One of those applications led to my CEA position.
And here’s the punchline: It was all luck.
Not in the deep sense in which everything that ever happens to us is luck. But luck as in “sure, I’ll fill out this survey”. If I hadn’t filled out the survey, I might be working as an entry-level developer somewhere (or still tutoring!).
I had basically given up on finding a high-impact career in a field I loved because I’d been rejected from two (two!) jobs, at some of the most selective organizations in EA. I hadn’t been checking job sites. I had assumed I was meant to be earning-to-give because I didn’t have any useful research or coding experience, and because I’d been bumming around on Wyzant.com instead of getting a PhD or building career capital at McKinsey.
Meanwhile, here was how I could have thought of myself:
- Literally started two EA groups
- Wrote for more college publications than anyone else at my university. Was an editor for multiple magazines. Learned to write reasonable prose very quickly. After college, got paid to write things (sometimes) and founded a small business (tutoring) that I built up to “reasonable personal income” level in the first year.
But it didn’t even occur to me that EA had writing jobs, or that EA orgs wouldn’t exactly be in a position to hire professional reporters or novelists to copyedit their websites and would instead be forced to rely on the likes of me. I assumed that there was a limitless supply of graduates from good colleges with 3.9 GPAs and math degrees who would be infinitely more “effective” at EA roles than me (there are people like that, but their supply is limited!).
I had to be poked and prodded through every step of my career journey because I just wasn’t. Being. Agenty.
If I had a time machine, I think I could convince my late-2016 self to start applying for EA jobs in roughly five minutes, with the following questions:
- Are there any jobs at high-impact organizations that you might be qualified to do?
- Are you sure?
- Have you checked?
- Have you considered that the organization 80,000 Hours, which you have heard of, might maintain a collection of job opportunities? On their website which is all about getting a job?
- Even if you aren’t a good candidate, do you lose anything by applying? Does the org gain anything by your not applying? Is it worth spending two minutes of a hiring manager’s time on the off chance that you might be a candidate worth interviewing?
- What’s that? Why am I shaking you by the shoulders?
- Because if I slap you as hard as I want to on this timeline, some of my teeth will vanish.
I didn’t need to turn my life around or learn a bunch of new skills from scratch. I just needed someone to break me out of my funk with a dose of reality — someone who could tell me an optimistic story that would drive me to action.
What was my mistake?
Something like “too much humility”, or “over-updating on my value based on the result ‘no job’ instead of the result ‘final round interview’”, or “being kind of depressed and, as a result, never actually looking for the thing I kept saying I wanted”.
(I spent dozens of hours on generic job websites from 2016-18, but never even typed “effective altruism jobs” into Google.)
This probably sounds really dumb. That’s because it was really dumb. Also, a harder mistake to make now — we’re better about promoting EA’s resources to anyone who enters the community.
But I still see people in EA who seem weirdly underconfident, or hampered by impostor syndrome, and I recognize myself in that. It’s easy for us to tell ourselves a story in which we’re not the best, ergo not “effective”, ergo not capable of doing something truly impactful.
Remember that epistemics also applies to you — and humility is another form of bias. Is there anything you’re better at than most people? Have you ever had a job you were really good at? What’s the optimistic story you could tell about yourself?
Also, the world is very big, EA is sort of big, and possibilities are legion:
- Have you read every job on 80K’s job board?
- How long would that take — an hour, maybe, if you filter out whatever categories you don’t qualify for?
- For that matter, have you actually read 80K’s new career guide in full? Done the exercises and everything?
- What are some open questions in EA that you could try to answer, even amateurishly, thus developing research experience (and a sense of whether you even like EA-style research)?
- What’s a project that might benefit from another set of hands, and help you build your network (and a sense of whether you even like operations)?
One thing that came out of thinking about this experience later: When I hear about an opportunity or need within EA, I try to default to “maybe I could do that, are there any reasons I couldn’t?” rather than “could I do that?” My job keeps me busy, but this mindset has helped me collect a bunch of project ideas that might be useful in the future.
Final conclusion-y bit
Some things aren’t in your control at this point (e.g. the college you attend). But many things are. This isn’t an original insight, but I still really needed it back in 2016 and it didn’t click until I’d wasted a lot of time — so I’m sharing it with you now.
(Also: if you need someone to ask you some really obvious questions, like the ones I highlighted above, I’m around. Schedule a call or send me an email.)
****
One more thing: Even during my “successful” application period, I got rejected from quite a few jobs without making it anywhere near the final round. I spent three years in college working 20 hours/week on journalism, then spent seven hours on Vox’s initial writing exercise for the Future Perfect job that Kelsey Piper got, and never heard back at all.
This story isn’t meant to convey that you have to be in the top X% of GiveWell applicants or whatever to have a lot of good opportunities — if you’re reading this, there’s probably a very good role for you somewhere, whether it’s full-time or ETG or volunteering or just learning professional skills somewhere else and applying them to an EA project later.
What’s a concrete way you’ve improved your epistemics? What was the benefit of that? How did you do it?
A few examples:
- I’m in the habit of fact-checking news I read. Is this statistic accurate? Does this excerpted quote reflect the tone of the original source? Does the journalist leave the story feeling exactly the same way they did before, without updating a single view after months of research? I think this has helped me find my way to more accurate news sources, or at least sources that confess their uncertainty.
- When I hear about something that is meant to be “the next big thing” (e.g. an up-and-coming startup), and I feel a twinge of skepticism, I find some way of checking back later. This might be a reminder in my calendar, or a formal prediction that I’ll be prompted to check on. This helps me stay attuned to hype and optimism bias, and has ingrained in me that most ambitious projects either underperform the founders’ expectations or fail completely.
- I also use PredictionBook to track lots of other predictions — about the news, about people I know, and about myself. It’s helped me find some areas where I tended to be too optimistic (hospital software deployment timelines) or pessimistic (getting to the next stage of job applications).
- Looking back, it probably could have helped me more, had I used it better — I spent ~100 hours earning a pseudo-scammy “online MBA” that sounded exclusive. I gave myself 90% odds of getting rejected, so I was thrilled to be accepted. But I could have said to myself: “This is a surprise. Did I miss something about this program?”
- And I did miss something: It wasn’t very exclusive, wasn’t rigorous, and was largely focused on adding graduates from fancy schools (like me) for marketing purposes.
- In appropriate social settings — with close friends, or people from the EA/rationalist communities — I often make small bets. Some examples:
- Betting against someone who thought EA Giving Tuesday would take in less money in 2019 than 2018. This prompted me to reflect on whether I just liked the EA Giving Tuesday team because they were nice people, or because I really thought they’d improved the quality of their work. I decided on “quality”, and won the bet. (This was a pretty minor occurrence, but the general lesson might be something like “make sure your belief in someone is actually well-calibrated, given our tendency to be biased by how we feel about someone, good or bad.”)
- When a friend was scared of her apartment being robbed, I bet her something like $10 against $5,000 that she wouldn’t be robbed while she lived there. She didn’t take the bet, but she knew me well enough that my willingness to make the bet was evidence that I really, really thought she was safe and I wasn’t just giving her empty reassurance.
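For concreteness, here’s the arithmetic behind a bet like that, as a small Python sketch. The dollar amounts come from the story; the break-even formula and function name are mine, not anything the post spells out:

```python
def break_even_probability(amount_won: float, amount_risked: float) -> float:
    """Probability of the bad event at which the bettor breaks even.

    Risking `amount_risked` to win `amount_won`, the bettor is indifferent when
    p * amount_risked == (1 - p) * amount_won, which solves to
    p = amount_won / (amount_won + amount_risked).
    """
    return amount_won / (amount_won + amount_risked)

# Winning $10 if she isn't robbed vs. paying $5,000 if she is:
# the bet only makes sense if I think the robbery odds are below ~0.2%.
p = break_even_probability(10, 5000)
print(f"{p:.4%}")  # roughly 0.2%
```

Offering such lopsided odds is what made the reassurance credible: the bet is only positive-expected-value for the bettor if they genuinely believe the risk is tiny.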
How have you benefited from noticing and fixing epistemics mistakes in your life (both big and small)?
- Making predictions about my own productivity has helped me improve my sense of how much I should actually expect to complete in one day.
- It took months of tracking weekly “sprints” at CEA, then doing less than I’d hoped to, before I finally felt confident enough in my self-perception to ask my manager to pick an “anti-charity” that I’d have to give $100 to if I failed to complete everything. (I did finish all the sprints that week.)
- I work for a small private foundation. When I started there in 2016, the person in charge would have lots of requests after every meeting, and I answered the only way I knew how: “Okay, sure, I think I can get to those.” I’d always struggle to get through everything he asked for, and he’d often forget half of his questions and pivot to something else. After a while, I realized that I wasn’t actually trying to predict whether he’d get use out of what he asked for — I was just assuming he had perfect self-knowledge. Because no one has perfect self-knowledge, I began to push back against requests that I thought weren’t relevant to our core goals. He was happy for the help (turns out he had enough self-knowledge to realize he was too ambitious), and we both became more productive by concentrating on “core” work.
- Before I joined CEA, I spent a lot of time being fairly pessimistic about my own utility — I’d worked a series of jobs where I was either in a zero-sum position (helping some people at the expense of others, e.g. SAT tutoring) or in a role without clear counterfactual impact (was the foundation I helped really making better donation choices because of me?).
- I realized that I had a tendency to overlook what other people thought and focus on my own beliefs, including about myself. So I made a concerted effort to collect information about my own utility from other people.
- This didn’t mean surveying my friends. But I did scan through lots of old conversations on a bunch of platforms to see compliments people had given me — friends, coworkers, clients, etc. Individually, I could brush off each one as politeness, but collectively, I began to see patterns that actually seemed to reflect consistently good personal traits, and ways in which I was genuinely helpful.
- I then collected all these examples into a Google Doc, which I’d look at when I wanted to feel a bit better. (Now that I’m at CEA, I feel more secure in my impact, so I rarely look at the doc now, but it helped me a lot in 2017/2018.)