Responses and Testimonies on EA Growth

Follow-up to Monday’s post Why Hasn’t Effective Altruism Grown Since 2015? See discussions on r/scc, LessWrong and EA Forum.

I’m honored to have received responses from Scott Alexander, Katja Grace (AI Impacts), Peter Hurford (Rethink Priorities), and Rob Bensinger (MIRI), among many other insightful replies.

This is a long post, so I’m going to put the summary first:

  • EA has shifted from “earning to give” to “x-risk”, leading to less of a mass movement and more focused attempts to cultivate and develop talent
  • By some metrics, EA is continuing to grow. Most notably, non-Open Phil donations to GiveWell causes are way up over the last few years.
  • Good Ventures is intentionally holding off on increased giving while GiveWell and Open Philanthropy build capacity.

In short: EA is stagnating in some ways, and not in others. Overall, this seems to be according to plan, and is not a failure of growth funding to achieve its stated goals.

EA has shifted from “money-focus” to “talent-focus”, and is bottlenecked by research debt

Scott Alexander provides a useful history: Around 2014, Good Ventures stepped in and started providing a huge amount of money. For comparison, Giving What We Can has recorded $222 million in donations ever; Good Ventures gives around that much every year. This made “earning to give” much less compelling. [1]

Second, as EA has grown, it’s become harder and harder to rise quickly and get a “good” job at a top institution:

there’s a general sense that most things have been explored, there are rules and institutions, and it’s more of a problem of learning an existing field and breaking into an existing social network rather than being part of a project of building something new.

These are both compelling to me, but we shouldn’t be fatalistic about the latter and just accept that intellectual movements have limited capacity. Effective altruism is aware of the problem, or at least of something similar. From MIRI:

Imagine that you have been tasked with moving a cube of solid iron that is one meter on a side. Given that such a cube weighs ~16000 pounds, and that an average human can lift ~100 pounds, a naïve estimation tells you that you can solve this problem with ~150 willing friends.

But of course, a meter cube can fit at most something like 10 people around it. It doesn’t matter if you have the theoretical power to move the cube if you can’t bring that power to bear in an effective manner. The problem is constrained by its surface area.
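As a quick sanity check on the Fermi arithmetic in that quote, here’s a sketch using a standard reference density for iron (the quote’s figures are rounded, so the numbers come out slightly higher):

```python
# Back-of-envelope check of the iron-cube estimate from the MIRI quote.
# The density figure is a standard reference value for solid iron.
IRON_DENSITY_KG_PER_M3 = 7_870   # ~7,870 kg per cubic meter
KG_TO_LB = 2.20462

# A one-meter cube: ~17,350 lb, i.e. the quote's "~16000 pounds"
cube_weight_lb = 1 * IRON_DENSITY_KG_PER_M3 * KG_TO_LB
lift_per_person_lb = 100

# Naive estimate: divide total weight by individual strength
naive_helpers = cube_weight_lb / lift_per_person_lb  # ~170, the quote's "~150 friends"

# Surface-area constraint: only ~10 people fit around the cube,
# so the force you can actually apply falls an order of magnitude short.
people_who_fit = 10
applied_force_lb = people_who_fit * lift_per_person_lb  # 1,000 lb vs ~17,000 lb needed

print(round(cube_weight_lb), round(naive_helpers), applied_force_lb)
```

The gap between the naive headcount and the applied force is the whole point: raw capacity is useless if the problem’s surface area won’t let you bring it to bear.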

What’s the solution? From Distill’s opening essay on Research Debt:

Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next.

…The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It’s entirely possible to build paths and staircases into these mountains. The climb isn’t something to be proud of.

The climb isn’t progress: the climb is a mountain of debt.

So if it’s gotten harder to build something new, we shouldn’t take it as the natural result of building a mountain of knowledge. We should see it as a failure of the community to distill that knowledge, perform the interpretive labor, and build a path for future scholars. Research distillation is one possibility. Another is tackling the problem from new disciplines.

Consider Leopold Aschenbrenner’s Global Priorities Institute paper on Existential risk and growth. He wrote this as an undergraduate, and it’s now cited on the 80,000 Hours problem profile for Economic Growth. Why was this possible? I would argue it’s because academic macroeconomics has been relatively neglected as a tool for EA-relevant problems. There are plenty of RCTs and development economists, but few people seem to be taking this perspective.

That’s not a critique of EA’s diversity. If you take a quick look at GiveWell’s team page, you’ll see a pretty wide variety of academic backgrounds. It’s not like it’s all Oxford philosophers.

But I think the problem persists because of how we conceive of EA-relevant work, and the relative lack of mature institutions. The other shortcut to the top of the mountain is research advising. But generally this requires being part of an existing EA institution, which requires prior research experience, and as Scott noted, it’s just become very difficult to break in.

And in EA’s defense, it’s not as if academia has this solved either. Major universities are basically not growing, and it is becoming harder to get to the top of many fields. See also Benjamin Jones on The Burden of Knowledge.

EA hasn’t stopped growing

Peter Hurford shared a much more comprehensive list of EA growth metrics. Some of them are going up, some are stagnating or going down. Here’s a non-comprehensive list of evidence in favor of growth from 2015-2018:

  • 80k pageviews 
  • EA subreddit
  • 80k newsletter signups
  • Donations recorded in EA survey

And in favor of stagnation:

  • Google search interest
  • Pageviews of Wikipedia page for Effective Altruism
  • The Life You Can Save web traffic
  • EA survey respondents who identify as EA
  • Non-Open Phil GiveWell donations (though these are up recently)
  • GiveWell unique visitors

Robert Wiblin notes that “It’s worth keeping in mind that some of these rows are 10 or 100 times more important than others”. If you’re really curious, the whole post is great.

Still, it’s complicated. Some of these are measures of active interest. So you might argue that if 70k people read the Wikipedia page for EA every year, that’s a huge win. People who are already part of the community aren’t going to be referencing the page every year, so this implies some kind of growth.

In other cases, I’m less convinced. 80k newsletter signups are increasing but EA survey respondents are stagnant, which I interpret to mean the newsletter is just growing within the existing EA community, rather than reaching new people.

u/xarkn also provides a short analysis showing that posts on EA forum are accelerating.

Rob Bensinger says that growth has been averted because there are downsides to being a larger movement. Peter’s post confirms this:

the result of the intentional effort across several groups and individuals in EA over the past few years to focus on high-fidelity messaging and growing the impact of pre-existing EAs and deliberate decisions to stop mass marketing, Facebook advertising, etc. The hope is that while this may bring in fewer total people, the people it does bring in will be much higher quality on average.

On the same theme of “stagnation as a choice”, mingyuan points out that Good Ventures is intentionally holding off on increased giving while GiveWell and Open Phil build capacity. She links to these helpful posts (1, 2). Rob adds: “They also think they can find more valuable things to spend the money on than bednets and GiveDirectly.”

Katja Grace points out that I’m essentially double-counting Open Phil stagnation, and should really be focusing on GiveWell’s non-Open Phil numbers, which you’ll note are way up.

Katja also links to this post from late 2020 where Open Phil says “We have allocated an additional $100 million for GiveWell top charities, GiveWell standout charities, and GiveWell Incubation Grants in the year-end period (beyond what we’ve already granted to GiveWell-recommended charities earlier this year).” GiveWell hasn’t released 2020 data so this hasn’t shown up yet, but it will presumably look like a large jump over 2019.

She also debates my interpretation of GWWC growth, and argues that cumulative growth is a win for EA even if the rate is decelerating.

I pretty much agree with all of her points, except the conclusion:

I’m inclined to interpret this evidence as mildly supporting ‘EA has grown since 2015’, but it doesn’t seem like much evidence either way. I think we should at least hold off on taking for granted that EA hasn’t grown since 2015 and trying to explain why.

Even if we’re not sure about the rate of growth, there’s no need to “hold off” trying to explain it. Perhaps I should have titled my post something like:

  • Why isn’t EA growing even faster?
  • If EA is growing, why doesn’t it show up in some statistics?
  • Assuming EA hasn’t grown, why not?

These framings all survive Katja’s criticism, and the bulk of my post makes sense under any of them. If we’re only about 50% sure EA is stagnating, it’s still worth trying to understand why.

If I had to rewrite it today, it would be with the framing “Given that we’re pouring a substantial amount of money into EA community growth, why doesn’t it show up in some of these metrics?”

The other big question, whether having an EA mindset is innate, remains relevant whether we are attempting to grow or merely grow faster.

Other explanations

u/skybrian2 writes that the global health stuff is pretty much solved: we know where donations will have the most impact. At the same time, US politics has gotten a lot worse, and there are now more compelling non-EA causes.

I find this pretty compelling. In 2015 non-EA work didn’t feel quite as critical. In 2017, it felt like fixing US politics was a prerequisite to making progress on many other problems.

u/fubo argues that social graphs are saturated, and that there has been burnout and a demographic shift.

I wrote earlier that the median SSC survey respondent was 30 in 2019. So it’s reasonable to think that over the last few years people started settling down, wanting to own homes, start families and so on. That all assumes it’s a single cohort, but this seems reasonable.

Edit: An earlier version of this post missed some of the comments from the cross-post on EA forum. Here are some highlights:
Brian Tan mentions that the number of local groups is growing quickly. Though again I would note that the rate of change peaked in 2015.

David Moss shares this chart saying “I fear that most of these metrics aren’t measures of EA growth, so much as of reaping the rewards of earlier years’ growth…  looking at years in EA and self-reported level of engagement, we can see that it appears to take some years for people to become highly engaged”.

I have a different interpretation, which is that less engaged people are much more likely to churn out of the movement entirely and won’t show up in this data.


There were lots of great responses, some quoted in full, others excerpted below. This is not an exhaustive list, and obviously you shouldn’t assume that it’s a random sample. I won’t provide too much commentary, except to say what surprised me:

  • Lots of people come across Yudkowsky/LW as teenagers, which aligns with my earlier hypothesis about the SSC survey data.
  • A few people mentioned being Christians in EA, despite the “godlessness” of the rationalist movement
  • There seems to be a lot of recent growth among elite university chapters, confirming the notion that EA has pivoted from attempting to be a mass movement to trying to recruit talent
  • A lot of people confirm that they already felt a strong predisposition towards rationalist/EA ideas and that they have been largely “unable” to convince people. On the other hand, Peter Hurford writes that several people in the EA survey say they were “convinced”. I’m not sure this is actually incompatible, it depends on which people you’re talking about.

Without further ado, here are some of the notes I received.

Sabrina Chwalek

I think google search results and dollars donated aren’t capturing a huge section of the movement for a simple reason: we’re still students. If you look at CEA’s strategy, they’re focusing “recruitment efforts on students and young professionals” and prioritizing the long-term growth of the movement. Most of the community building grants are going to university groups who are successfully recruiting undergrads, but those undergrads aren’t donating yet. And since most university groups introduce people to EA through their internal fellowships and programming, there’s less of a need to go google what EA is. (For example, Brown EA only started two years ago and >100 people have gone through our fellowship programs.) Plus, CEA’s 2020 report says they doubled the number of attendees at their events last year.

I can’t speak on behalf of CEA and other EA orgs, but it seems plausible that the stagnation in movement growth would coincide with the decision to focus on student groups. I believe CEA is also trying to prioritize attracting highly engaged EAs, rather than just semi-involved community members, which means it’s less important to have a larger number of less-engaged people.

Based on my experience facilitating fellowship groups, people seem to fall into the following buckets for how innate EA is. First, EA all makes sense. These are the rare people who’ve already heard about EA or Less Wrong and are EA-aligned. Second, they’re passionate about one cause area in the movement (global health and development, animal welfare, etc.) and end up being exposed to other cause areas and ideas through our fellowship programs. (The second group of people is significantly more common than the first.) Third, they’re initially put off by EA/don’t understand the movement, but for whatever reason things fall into place during the fellowship. Then the remaining group of people don’t engage for various reasons.

In response to your final paragraph, my own experience joining EA was very natural. I came across 80,000 Hours in high school and definitely felt like “wow, there’s an entire movement of people who see the world like I do and want to do the most good they can.” However, I don’t think EA has to be intuitive for people who can become engaged members. In the beginning, I mostly cared a lot about global health and development and was consequently really hooked by the idea of effective giving and expanding our moral circles. It took me a couple years before I fully came around to longtermism and the significance of x-risks, and I’m still grappling with how longtermism fits into my tentative career options. Spending more time in the movement opened my eyes to a range of other ideas and cause areas.


My origin story is: I read Yudkowsky’s old essay on the FAQ to the meaning of life, and I was instantly converted. Since then I followed LessWrong, and later joined Effective Altruism when that popped up, as well as starting to read SSC.

What is it about it? I think it’s just someone smart reasoning seriously on the most important topics and trying to get it right. To this day I have probably found most of the new interesting ideas I have ever encountered from here.

Marshall Quander Polaris

The changes to my philosophical outlook due to the rationalist community and other subsequent education have just been cleaning up around the edges, i.e. figuring out what exactly I think is a better state of the world, what I think a moral person is, etc.


I first encountered the rationalsphere through the original lesswrong sequences back when I was a child. The feeling that I got when reading them was definitely that they “fit into place”. I don’t recall much in the way of “revolutionary new concepts overturning my conception of the world”, but rather mostly a combination of “that’s roughly what I thought, explained eloquently and in more detail” and “I haven’t really thought seriously about that specific topic before, but yeah, that seems right”.

…I had a similar reaction to EA upon encountering it, thinking something along the line of “yeah, that’s pretty obviously the right thing to do.”

…Think along the lines of someone making a remark about you being “willing to bite that bullet” on some issue, but where you just feel like “bite what bullet?”


Perhaps the big growth in effective altruism is getting to the point where we are spreading the beliefs and spirit of reform without necessarily needing others to join the community. I have no evidence for this, not even anecdotal, but if we can change the charitable sector’s mindset to pay more attention to outcomes assessment and cost effectiveness, that’s a victory even if no one considers themselves an EA.


My experience of reading Yudkowsky’s sequences back in 2012 was revelatory. My experience of reading Peter Singer and existential risk stuff matches yours, but that all happened after the sequences.


I was very resistant to EA for a long time. EA acquaintances tried to convert me and failed; the whole view of the world they were promulgating seemed very flat. Maybe a year later, I came across a blog post from a Christian Effective Altruist. In his telling, it was obvious that as a Christian, it is right to give money to the poor — and, furthermore, God doesn’t show favoritism, and neither should you. This blog post basically converted me. It opened up a way to being something-like-an-effective-altruist without having to be a utilitarian. It turned EA from being about math to being about being-turned-outward, receptive to the humanity of everyone else, not just the people closest to you.  (This scope is expanding for me also to e.g. animals.)

…All of this is to say, I was probably innately open to being turned toward a pretty radical moral system — the 10% was not the sticking point for me. I am also not innately an Effective Altruist — I am super turned off by utilitarianism and am indeed probably not an Effective Altruist at all. But I did have some sort of conversion experience that resulted in me taking the GWWC pledge — it made caring about e.g. global health over other causes ‘click’ for me in a way that utilitarian arguments failed to.

Trenton Bricken

Personal anecdote: I was raising money for charities, got fed up as they seemed silly, and googled “what is the most effective charity”, leading to GiveWell. Separately I came across Nick Bostrom and read Superintelligence and it all just made sense. This was all before I knew what EA was or that it was an umbrella for all of this.

Mathematics Grad Student

My experience is that these things were not inside me all along. I never thought “wow, everyone else is an idiot” or “wow, these people get it.” I just thought “oh, cool, that makes sense.”

Before encountering SSC and LessWrong about five years ago, I had opinions like “death is the natural order of things” (I am now anti-nonconsensual-death) and “polyamory is bad” (I have since discovered that I somewhat prefer polyamory to monogamy). These are somewhat extreme examples; on most topics rationalist writing didn’t change my mind so much as just introduce me to ideas I had never thought about before, but which I promptly recognized as valuable/useful. A lot of it feels obvious-in-retrospect but not something I would have thought of on my own. EA (with the exception of AI stuff, which took me a while to come around on) is the thing which felt most immediately deeply obvious as soon as I encountered the idea, but even then it is not something I would have generated independently.

I will add a couple of caveats. One is that although rationalist and EA ideas were novel to me when I encountered them, it is plausible that some people are more “innately receptive” to these ideas than others, and I am toward the more receptive end of the distribution. Another is that I am not as serious about rationalism or EA as some. I don’t regularly read LessWrong and have read only some random subset of the Sequences (though I do regularly or semiregularly read other rationalist blogs, e.g. SSC/ACX and a few of the things on its blogroll). Likewise, I donate 10% but I have not switched careers for EA reasons. (Yet, growth mindset…?)

Long Cao

Yes, it was certainly an eye-opening/enlightenment event. Helped me notice bias in daily reasoning, improved (to some minor extent) my decision making which resulted in positive gain.

I was from a 3rd-world country where education is more like ornament, and people always do differently from what they say.

Floris Wolswijk

Somewhere in 2013 I spent some time in the summer learning about ethics, and Peter Singer gave one lecture during that course. This led me to think something along the lines of “I’m a student living on say 1000 euro per month; if I start work and earn 2000+, I should be able to donate 10% of that”. Reading one or two EA-related books later, I was (and still am) organizing a local group and donating 10% of my income.

All that to say that I think I was very open to the ideas and was/am still using a similar style of reasoning that appealed to me. It does give me some warm fuzzies to donate, but mostly I think it’s the right thing to do and not that difficult to do.

At the same time, with our local group I think that we’ve spoken to 800+ people in person over the years and that has resulted in only marginal changes. Hopefully some will donate more effectively (e.g. family and friends come to me to ask about a recommendation related to climate change), and at least one person has since taken the GWWC Pledge.


Firstly, I should say that most of my exposure to EA has been through Christians in EA, a rare convergence of beliefs that definitely makes me an outlier in this generally godless movement. However, I think I’m pretty engaged in the broader EA movement as both a consumer (blogs, books, podcasts) and as a participant (I’ve been to a student conference, and am part of an EA fellowship right now, and I’m planning to use the PhD I’m currently working on to either donate a lot of money or to directly work on global health or biosecurity).

I saw your reddit post and I agree that EA seems more to express something people already agree with than to change their mind. While some people feel like EA is too demanding, my Christian beliefs and my own optimising mindset had already converged on Singer-style utilitarianism even if I’d never read Singer, and finding EA was actually a relief - firstly, that I’m not alone in this, and secondly, I can stop feeling guilty about everything and instead focus my guilt on a small number of important things.

There are a finite number of people with this kind of mindset and I expect we’ve reached most of them, at least with the current EA pitch. For all that we focus on the rational appeal, the emotional appeal of the ideas is probably more important - you have to care about being effective in addition to caring about helping others, and those don’t seem particularly well correlated.

Most of the potential for future growth is probably going to be in different cultural contexts, but that’s inherently harder and slower since we basically need “translators” to stumble on EA and then get involved. We may be at the point of diminishing returns for more recruitment, but on the other hand maintaining the movement at its current size will require new recruitment. I personally think there’s potential to get more Christians involved by emphasising how well EA complements Christian doctrine, I imagine there are ways to do this in other cultural contexts as well. I actually know a Muslim EA that I’m planning to discuss this with on Sunday in the context of movement building, I think your post will give me lots to think about so I might link to that.

However, I’m not sure if telling people about EA achieves nothing if they don’t join us, as you say people mostly agree with our ideas but just don’t identify with the movement. That suggests to me that we have plenty to work with. This is going to be harder to quantify, but optimistically I’d hope we could make AI research think more about safety, make politics (voters and politicians) more concerned about the long term future, speed up the development of vegan substitutes for animal products, and make the average person think more critically about charity and where the money actually goes. These are ambitious goals, but I don’t think they’re beyond the capabilities of a small but well resourced and committed group. Some progress towards these goals seems likely anyway but hopefully it’s not too arrogant to think we can have an amplifying role.

80,000 Hours probably has a lot of impact; even if people just go through an “EA phase” then lose interest, they’ll hopefully end up on a more EA trajectory and then stick with it as the path of least resistance. Hopefully. I guess it could instead be world ending if we tell people how powerful AI and molecular biology are, and we convince them to change careers but also talk so much about ethics that it bores them.

James Brooks

I strongly identify with your quote from John Nerst’s Origin story. Since I was a child I have wanted to deeply understand everything, then fix it. I went on a round-the-world trip, and while on it wrote about 70k words on what I then called practical philosophy. I got home, searched to see if anything similar had been written, and found the Sequences, which contained a superset of what I had already written. I started attending, then quickly organising, the LessWrong meetups, went on a CFAR course, and while staying in the Bay Area after that ended up going to an EA meetup because I heard there was good conversation and free pizza (nothing to do with the EA part). My transition from rationality to EA was very slow. I don’t even know why it was slow. I thought many of the ideas were true; maybe it was all too much to take in. I still feel overwhelmed by it all ~8 years later.

I have made my two closest friends EA ‘adjacent’, one literally stopped me mid conversation to set up a monthly donation to GiveWell the first time I mentioned it. They read Slate Star and Zvi … but would not attend a meetup or call themselves EA or anything like that.

I just had a chat with a student today who got super into the idea of 80k within seconds of me starting to explain it. (my explanation was something like “they are a charity that gives career advice to people who want to make a positive difference in the world, they literally have a page of the most important areas to improve the world and advice on finding one that should work well for you” that’s all I needed to say)

Most conversations about EA with people who have not heard of it are a debate about some particular concern or a general “that seems like a good idea”, then the conversation moves on, never to return to it.

[1] An earlier draft of this post mischaracterized Scott Alexander’s views. See this comment for details, or read the original in full.

On Radical Reforms, Technocracy and Seeing Like a State

In his latest post, Scott writes on the success of radical top-down reforms, contrary to his previous writing on the dangers of radical top-down reforms.

This leads to confusion when he approaches the Acemoglu et al. paper The Consequences of Radical Reform:

I think my real concern here is that someone might use this paper to support some sort of far-left reform, saying “come on, this shows that reforms work better than leaving institutions in place”, when an alternate lesson is “capitalism works better than not-capitalism”.

But how do we know which lesson is appropriate here? Scott concludes:

maybe the moral of the story is something like - replacing stagnation and entrenched interests with good reform is good, and with bad reform is bad. Which sounds obvious, but I do think that considerations of “is this potentially challenging a carefully evolved system of traditions?” is less important than I originally believed.

This is a deeply unsatisfying conclusion to an otherwise excellent series of posts. As a low-hanging counterexample: the Vietnam War was at least ostensibly about fighting off the expansion of communism. That sounds like a good reform, but it went horribly wrong.

Is the lesson that good reform is good, but war is so bad that it’s altogether a net negative? But the original paper is about countries subjected to Napoleon’s conquest, so this same lesson ought to hold. Maybe war just looks better in retrospect.

The more fundamental confusion here is about trying to draw a clear distinction between top-down and bottom-up systems, when in fact the two work together and no clear line can be drawn.

Consider the example Scott gives where France invades top-down and establishes bottom-up free markets in a country that goes on to experience outsized economic growth. Who exactly is this a win for? There are endless examples with arbitrarily convoluted dependencies:

  • A small group of Founding Fathers top-down determine the governance system for the American colonies, and they choose democracy, the most bottom-up system to date.
  • A committee top-down designs the LSAT as an entrance exam, creating a bottom-up process for any applicant who wants to be judged fairly. Those students go on to become technocratic lawyers who pass top-down judgements.
  • Bottom-up competition allows one CEO to emerge as the dominant titan of industry, she then uses her power to top-down determine future product lines

This last example is a riff on Coase’s classic The Nature of the Firm, which asks why all of our free-market companies are run in a totally centralized fashion by an executive leader or small governance board. At the other extreme, one might ask why today’s communist youth movements seem to favor decentralized governance. There ends up being a good answer in both cases, but the question remains: is this a win for top-down or bottom-up systems?

Or if these abstract cases are boring, consider some more realistic examples closer to home.

Who’s to praise (or blame) here? Which system deserves the credit? Is the whole debate pointless?

Tyler Cowen would say no, we just have to get more specific: “Earlier in history, a strong state was necessary to back the formation of capitalism and also to protect individual rights… Strong states remain necessary to maintain and extend capitalism and markets.”

In this view, it’s not about reforms vs tradition or mechanism vs judgement or anything else. It’s just that very specifically capitalism, markets and individual rights are good, and states are justified in using just enough coercion to ensure those systems remain healthy. Cowen’s own thoughts are a bit broader, suggesting that state capacity is also critical for “infrastructure, science subsidies, nuclear power… and space programs”.

Does this mean that top-down rule is always justified in the service of future bottom-up freedoms? Perhaps, but I would be shocked to find anyone still willing to justify the US invasion of Iraq, although it was ostensibly in the name of promoting democracy. So perhaps it really just is that war is very bad, so much so that it offsets potential gains from imposing improved institutions.

Still, I don’t think any of this can be taken as a totalizing framework.

Consider the entire field of mechanism design, of which Weyl is a prominent member. While “free markets” and “democracy” might just feel obvious, the specific mechanisms involved are subject to judgement. There are many ways to aggregate popular preferences into a collective decision, and voting theory remains an active area of research. If Weyl designs a voting mechanism, is that judgement? Is it mechanism? Bottom-up or top-down? The whole dichotomy falls apart.

Also consider the not-so-distant future where many mechanisms for bottom-up aggregation do not even have the flavor of democracy. Google Search can be thought of as an aggregation mechanism. It takes user-generated data, and synthesizes it into a centralized model which makes decisions. How is that different than voting? Is it less democratic? Less populist?

To sum up my views:

  • Democracy and free markets are generally worth promoting.
  • Some costs, such as the horrors of war and other violations of individual liberty, may be too high a price to pay.
  • Even if the ostensible aims [1] are good, top-down enforcement is simply ineffective in some scenarios (Vietnam War, Invasion of Iraq).
  • In other cases, the distinction between “mechanism” and “judgement” is simply unclear, and there is no guarantee that all forms of bottom-up evolution are as effective as democracy + free markets.

At this point, you might ask: why even make sweeping statements? Why not just get specific and leave it at that?

In the rationalist tradition, Bayes’ rule dictates the consideration of an outside-view. The point is not to say “X is good because of reason Y”, but to say “Things in the class of X have historically gone well, and this is our prior. The specific reason Y counts as evidence, and can be used to update that prior”. This sometimes gets you into trouble when it comes to establishing an appropriate reference class, but it’s still a useful technique.
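The prior-plus-update framing can be made concrete with a toy calculation. This is a hedged sketch with invented numbers, not a claim about any real base rate:

```python
# Toy outside-view update (all numbers invented for illustration).
# Prior: historical base rate of schemes in class X going well.
prior = 0.2

# Evidence Y: suppose the specific reason is 3x as likely to be observed
# when the scheme works as when it fails (a likelihood ratio of 3).
likelihood_ratio = 3.0

# Bayes' rule in odds form: posterior odds = likelihood ratio * prior odds.
prior_odds = prior / (1 - prior)
posterior_odds = likelihood_ratio * prior_odds
posterior = posterior_odds / (1 + posterior_odds)

print(round(posterior, 3))  # 0.429
```

Even a threefold update only moves a 20% prior to about 43%, which is why the choice of reference class (the prior) matters so much.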

So if this whole debate sounds silly, that’s fine so long as you don’t take it as a serious and literal attempt to figure out if one style of governance is always right. James C. Scott’s book had the wonderful and appropriate subtitle “How Certain Schemes to Improve the Human Condition Have Failed”. The point isn’t to hit you over the head with one case after another of high modernist failure, it’s to understand the failure modes, understand the prior, and try to do less poorly in the future.

See Also
The Scholar’s Stage – Tradition is Smarter Than You Are
Acemoglu et al. – The Consequences of Radical Reform
Scott Alexander – Book Review: The Secret Of Our Success
Scott Alexander – Book Review: Seeing Like A State
Scott Alexander – The Consequences of Radical Reform
Scott Alexander – Contra Weyl on Technocracy
Glen Weyl – Reply to Scott Alexander
Devin Kalish – Weyl Versus the Rationalists

The Scholar’s Stage post is particularly underappreciated. His argument is basically: in the past, tradition was better than technocracy because we had time for slow cultural evolution. Post-industrial revolution, the world is moving too quickly for useful traditions to establish themselves: “The traditions are gone; custom is dying. In the search for happiness, rationalism is the only tool we have left.”

[1] As I understand it, the Invasion of Iraq also had several not-so-good actual aims.

Why Hasn't Effective Altruism Grown Since 2015?

Follow up post here. See discussions on r/scc, LessWrong and EA Forum.

Here’s a chart of GiveWell’s annual money moved. It rose dramatically from 2014 to 2015, then more or less plateaued:

Open Philanthropy doesn’t provide an equivalent chart, but they do have a grants database, so I was able to compile the data myself. It peaks in 2017, then falls a bit and plateaus:
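The aggregation itself is simple. Here is a minimal sketch of how grants can be summed by award year; the rows are invented examples, not real entries from the database:

```python
from collections import defaultdict

# Hypothetical (grant, amount, award date) rows standing in for a
# grants-database export.
grants = [
    ("Grant A", 1_000_000, "2016-03-01"),
    ("Grant B", 2_500_000, "2017-06-15"),
    ("Grant C",   500_000, "2017-11-30"),
]

totals = defaultdict(int)
for _name, amount, date in grants:
    year = date[:4]  # group by award year
    totals[year] += amount

print(dict(totals))  # {'2016': 1000000, '2017': 3000000}
```

Note that grouping by award date is exactly what introduces the lumpiness Open Philanthropy describes: a December vs. January payment lands in different years.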

(Note that GiveWell and Open Philanthropy didn’t formally split until 2017. GiveWell records $70.4m from Open Philanthropy in 2015, which isn’t included in Open Philanthropy’s own records. I’ve emailed them for clarification, but in the meantime, the overall story is the same: a rapid rise followed by several years of stagnation. **Edit: I got a reply from OpenPhil. Basically they say grants are sometimes a year off, so what GiveWell says is 2015 may be listed as 2016 in OpenPhil’s database. See [0] for their full reply.)

Finally, here’s the Google Trends result for “Effective Altruism”. It grows quickly starting in 2013, peaks in 2017, then falls back down to around 2015 levels. Broadly speaking, interest has been about flat since 2015.

If this data isn’t surprising to you, it should be.

Several EA organizations actively work on growing the community, have funded community growth for years, and treat it as an explicit priority:

  • 80,000 Hours: The Problem Profiles page lists “Building effective altruism” as a “highest-priority area”, right up there with AI and existential risk.
  • Open Philanthropy: Effective Altruism is one of their Focus Areas. They write “We’re interested in supporting organizations that seek to introduce people to the idea of doing as much good as possible, provide them with guidance in doing so, connect them with each other, and generally grow and empower the effective altruism community.”
  • EA Funds: One of the four funds is dedicated to Effective Altruism Infrastructure. Part of its mission reads: “Directly increase the number of people who are exposed to principles of effective altruism, or develop, refine or present such principles”

So if EA community growth is stagnating despite these efforts, it should strike you as very odd, or even somewhat troubling. Open Philanthropy decided to start funding EA community growth in 2015/2016 [1]. It’s not as if this is only a very recent effort.

As long as money continues to pour into the space, we ought to understand precisely why growth has stalled. The question is threefold:

  • Why was growth initially strong?
  • Why did it stagnate around 2015-2017?
  • Why has the money spent on growth since then failed to make a difference?

Here are some possible explanations.

1. Alienation

Effective Altruism makes large moral demands, and frames things in a detached quantitative manner. Utilitarianism is already alienating, and EA is only more so.

This is an okay explanation, but it doesn’t explain why growth initially started strong, and then tapered off.

2. Decline is the Baseline

Perhaps EA would have otherwise declined, and it is only thanks to the funding that it has even succeeded in remaining flat.

I’m not sure how to disambiguate between these cases, but it might be worth spending more time on. If the goal is merely community maintenance, different projects may be appropriate.

3. The Fall of LessWrong and Rise of SlateStarCodex

Several folk sources indicate that LessWrong went through a decline in 2015. A brief history of LessWrong says “In 2015-2016 the site underwent a steady decline of activity leading some to declare the site dead.” The History of Less Wrong writes:

Around 2013, many core members of the community stopped posting on Less Wrong, because of both increased growth of the Bay Area physical community and increased demands and opportunities from other projects. MIRI’s support base grew to the point where Eliezer could focus on AI research instead of community-building, Center for Applied Rationality worked on development of new rationality techniques and rationality education mostly offline, and prominent writers left to their own blogs where they could develop their own voice without asking if it was within the bounds of Less Wrong.

Specifically, some blame the decline on SlateStarCodex:

With the rise of Slate Star Codex, the incentive for new users to post content on Lesswrong went down. Posting at Slate Star Codex is not open, so potentially great bloggers are not incentivized to come up with their ideas, but only to comment on the ones there.

In other words, SlateStarCodex and LessWrong catered to similar audiences, and SlateStarCodex won out. [2]

This view is somewhat supported by Google Trends, which shows a subtle decline in mentions of “Less Wrong” after 2015, until a possible rebirth in 2020.

Except SlateStarCodex also hasn’t been growing since 2015:

The recent data is distorted by the NYT incident, but basically the story is the same. Rapid rise to prominence in 2015, followed by a long plateau. So maybe some users left for Slate Star Codex in 2015, but that doesn’t explain why neither community saw much growth from 2015 - 2020.

And here’s the same chart, omitting the last 12 months of NYT-induced frenzy:

4. Community Stagnation was Caused by Funding Stagnation

One possibility is that there was no strange hidden cause behind the widespread stagnation: funding slowed down, and everything else slowed down with it. I’m not sure what the precise mechanism is, but this seems plausible.

Of course, now the question becomes: why did Open Philanthropy giving slow? This isn’t as mysterious since it’s not an organic process: almost all the money comes from Good Ventures, which is the vehicle for Dustin Moskovitz’s giving.

Did Dustin find another pet cause to pursue instead? It seems unlikely. In 2019, they provided $274 million total, nearly all of which ($245 million) went to Open Philanthropy recommendations.

Let’s go a level deeper and take a look at the Good Ventures grant database aggregated by year:

It looks a lot like the Open Philanthropy chart! They also peaked in 2017, and have been in decline ever since.

So this theory boils down to:

  • The EA community stopped growing because EA finances stopped growing
  • EA finances stopped growing because Good Ventures stopped growing
  • Good Ventures stopped growing because the wills and whims of billionaires are inscrutable?

To be clear, the causal mechanism and direction for the first piece of this argument remains speculative. It could also be:

  • The EA community stopped growing
  • Therefore, there was limited growth in high impact causes
  • Therefore, there was no point in pumping more money into the space

This is plausible, but seems unlikely. Even if you can’t give money to AI Safety, you can always give more money to bed nets.

5. EA Didn’t Stop Growing, Google Trends is Wrong

Google Trends is an okay proxy for actual interest, but it’s not perfect. Basically, it measures the popularity of search queries, but not the popularity of the websites themselves. So maybe instead of searching “effective altruism”, people just went directly to the sites, and Google never logged a query.

Are there other datasets we can look at?

Giving What We Can doesn’t release historical results, but I was able to recover their past numbers, and compiled this dataset of money pledged [3] and member count:

So is the entire stagnation hypothesis disproved? I don’t think so. Google Trends tracks active interest, whereas Giving What We Can tracks cumulative interest. So a stagnant rate of active interest is compatible with increasing cumulative totals. Computing the annual growth rate for Giving What We Can, we see that it also peaks in 2015, and has been in decline ever since:
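The distinction between cumulative totals and growth rate is easy to see with a quick calculation. The membership figures below are made up for illustration, not the real Giving What We Can numbers:

```python
# Illustrative cumulative member counts by year (invented figures).
cumulative_members = {2014: 700, 2015: 1300, 2016: 1900, 2017: 2400}

# Year-over-year growth rate: new members divided by the prior total.
years = sorted(cumulative_members)
for prev, cur in zip(years, years[1:]):
    new = cumulative_members[cur] - cumulative_members[prev]
    rate = new / cumulative_members[prev]
    print(cur, f"{rate:.0%}")
```

With these made-up numbers the cumulative count rises every year, yet the growth rate falls from roughly 86% to 26%: rising totals and declining active interest are perfectly compatible.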

To sum up:

  • Alienation is not a good explanation, this has always been a factor
  • EA may have declined more if not for the funding
  • SlateStarCodex may have taken some attention, but it also hasn’t grown much since 2015
  • Funding stagnation may cause community stagnation; the causal mechanism is unclear
  • Giving What We Can membership has grown, but it measures cumulative rather than active interest. Their rate of growth has declined since 2015.

A Speculative Alternative: Effective Altruism is Innate

You occasionally hear stories about people discovering LessWrong or “converting” to Effective Altruism, so it’s natural to think that with more investment we could grow faster. But maybe that’s all wrong.

Thing of Things once wrote:

I think a formative moment for any rationalist– our “Uncle Ben shot by the mugger” moment, if you will– is the moment you go “holy shit, everyone in the world is fucking insane.” [4]

That’s not exactly scalable. There will be no Open Philanthropy grant for providing experiences of epistemic horror to would-be effective altruists.

Similarly, from John Nerst’s Origin Story:

My favored means of procrastination has often been lurking on discussion forums. I can’t get enough of that stuff …Reading forums gradually became a kind of disaster tourism for me. The same stories played out again and again, arguers butting heads with only a vague idea about what the other was saying but tragically unable to understand this.

….While surfing Reddit, minding my own business, I came upon a link to Slate Star Codex. Before long, this led me to LessWrong. It turned out I was far from alone in wanting to understand everything in the world, form a coherent philosophy that successfully integrates results from the sciences, arts and humanities, and understand the psychological mechanisms that underlie the way we think, argue and disagree.

It’s not that John discovered LessWrong and “became” a rationalist. It’s more like he always had this underlying compulsion, and then eventually found a community where it could be shared and used productively.

In this model, Effective Altruism initially grows quickly as proto-EAs discover the community, then hits a wall as it saturates the relevant population. By 2015, everyone who might be interested in Effective Altruism has already heard about it, and there’s not much more room for growth no matter how hard you push.

One last piece of anecdotal evidence: Despite repeated attempts, I have never been able to “convert” anyone to effective altruism. Not even close. I’ve gotten friends to agree with me on every subpoint, but still fail to sell them on the concept as a whole. These are precisely the kinds of nerdy and compassionate people you might expect to be interested, but they just aren’t. [5]

In comparison, I remember my own experience taking to effective altruism the way a fish takes to water. When I first read Peter Singer, I thought “yes, obviously we should save the drowning child.” When I heard about existential risk, I thought “yes, obviously we should be concerned about the far future”. This didn’t take slogging through hours of blog posts or books, it just made sense. [6]

Some people don’t seem to have that reaction at all, and I don’t think it’s a failure of empathy or cognitive ability. Somehow it just doesn’t take.

While there does seem to be something missing, I can’t express what it is. When I say “innate”, I don’t mean it’s true from birth. It could be the result of a specific formative moment, or an eclectic series of life experiences. Or some combination of all of the above.

Fortunately, we can at least start to figure this out through recollection and introspection. If you consider yourself an effective altruist, a rationalist or anything adjacent, please email me about your own experience. Did Yudkowsky convert you? Was reading LessWrong a grand revelation? Was the real rationalism deep inside of you all along? I want to know.

I’m at, or if you read the newsletter, you can reply to the email directly. I might quote some of these publicly, but am happy to omit yours or share it anonymously if you ask.

Data for Open Philanthropy and Good Ventures is available here. Data for Giving What We Can is here. If you know how Open Philanthropy’s grant database accounts for funding before it formally split off from GiveWell in 2017, please let me know.

Disclosure: I applied for funding from the EA Infrastructure Fund last week for an unrelated project.

[0] From Open Philanthropy over email:

Hi, thanks for reaching out.

Our database’s date field denotes a given grant’s “award date,” which we define as the date when payment was distributed (or, in the case of grants paid out over multiple years, when the first payment was distributed). Particularly in the case of grants to organizations based overseas, there can be a short delay between when a grant is recommended/approved and when it is paid/awarded. (For more detail on this process, including average payment timelines, see our Grantmaking Stages page.) In 2015/2016, these payment delays resulted in top charity grants to AMF, DtWI, SCI, and GiveDirectly totaling ~$44M being paid in January 2016 and falling under 2016 in your analysis even as GiveWell presumably counted those grants in its 2015 “money moved” analysis.

Payment delays and “award date” effects also cause some artificial lumpiness in other years. For example, some of the largest top charity grants from the 2016 giving season were paid in January 2017 (SCI, AMF, DtWI) but many of the largest 2017 giving season grants were paid in December 2017 (Malaria Consortium, No Lean Season, DtWI). This has the effect of artificially inflating apparent 2017 giving relative to 2018. Other multi-year grants are counted as awarded entirely in the month/year the first payment was made – for example, our CSET grant covering 2019-2023 first paid in January 2019. So I wouldn’t read too much into individual year-to-year variation without more investigation.

Hope this helps.

[1] For more on OpenPhil’s stance on EA growth, see this note from their 2015 progress report:

Effective altruism. There is a strong possibility that we will make grants aimed at helping grow the effective altruist community in 2016. Nick Beckstead, who has strong connections and context in this community, would lead this work. This would be a change from our previous position on effective altruism funding, and a future post will lay out what has changed. [emphasis mine]

[2] For what it’s worth, the vast majority of SlateStarCodex readers don’t actually identify as rationalist or effective altruists.

[3] My Giving What We Can dataset also has a column for money actually donated, though the data only goes back to 2015.

[4] I’m conflating effective altruism with rationalism in this section, but I don’t think it matters for the sake of this argument.

[5] For what it’s worth, I’m typically pretty good at convincing people to do things outside of effective altruism. In every other domain of life, I’ve been fairly successful at getting friends to join clubs, attend events, and so on, even when it’s not something they were initially interested in. I’m not claiming to be exceptionally good, but I’m definitely not exceptionally bad.

But maybe this shouldn’t be too surprising. Effective Altruism makes a much larger demand than pretty much every other cause. Spending an afternoon at a protest is very different from giving 10% of your income.

Analogously, I know a lot of people who intellectually agree with veganism, but won’t actually do it. And even that is (arguably) easier than what effective altruism demands.

[6] In one of my first posts, I wrote:

Before reading A Human’s Guide to Words and The Categories Were Made For Man, I went around thinking “oh god, no one is using language coherently, and I seem to be the only one seeing it, but I cannot even express my horror in a comprehensible way.” This felt like a hellish combination of being trapped in an illusion, questioning my own sanity, and simultaneously being unable to scream. For years, I wondered if I was just uniquely broken, and living in a reality that no one else seemed to see or understand.

It’s not like I was radicalized or converted. When I started reading Lesswrong, I didn’t feel like I was learning anything new or changing my mind about anything really fundamental. It was more like “thank god someone else gets it.”

When did I start thinking this way? I honestly have no idea. There were some formative moments, but as far back as I can remember, there was at least some sense that either I was crazy, or everyone else was.