Become a Billionaire Part II: You're Not Even Trying

Follow-up to Life Advice: Become a Billionaire.

On Reddit, the comments are skeptical. Respondents suggest that perhaps there are more important things in life than money, and that even if you do start a company, selling it for $10 million is better than risking it all for a chance at a billion-dollar exit.

Which is funny, because those are precisely my reasons for optimism.

When you hear that the success rate for startups is low, or that very few founders succeed in hitting billion-dollar valuations, remember that you’re including the entire population of people who, it turns out, actively don’t want to become billionaires in the first place.

What you should be asking is, what are the odds of becoming a billionaire, conditional on actually wanting to? Conditional on even trying? Conditional on not machine gunning yourself in the foot?

As it turns out, the mere willingness to not sell matters a lot. Here’s Peter Thiel:

The most important moment, in my mind, in the history of Facebook, occurred in July of 2006. The company had been around for 2 years, it was still just a college site. Maybe 8 or 9 million people on the site. The revenues were tracking to about 30 million, no profits. And we received an acquisition offer from Yahoo for a billion dollars.

…full disclosure, I think that both Breyer and myself thought that on balance we should take the money and run. But Zuckerberg started the meeting, and the first thing he said was “it’s kind of a formality, we have to have a quick board meeting, shouldn’t take more than 10 minutes. We’re obviously not going to sell”.

So sure, we can do the math, write out some probability distributions, factor in conditional risk. But here’s the bottom line: The more unreasonable it is to become a billionaire, the less competition there is, and the easier you should expect it to be.

This is perverse logic, and you could argue that it proves too much and could justify any bad decision. But it applies here anyway. It’s not that becoming a billionaire is actually irrational. It’s that people look at the failure rates, infer that it’s hard, that the rewards aren’t worth it, do some moral or hedonic calculus, and give up prematurely.

You could object that Reddit commenters are not the same population as YC founders, and this is all a tremendously unfair comparison. That’s possible. But I wouldn’t be too surprised if it turned out most people are just looking for an easy exit. After all, that’s the reasonable thing to do, isn’t it?

So stop complaining about the “risk of failure” when you’re not even trying to succeed.


This set of posts is not my most rigorous, but let’s run some numbers anyway.

Y Combinator reports a $400+ billion valuation for its top 100 companies, out of around 2000 it has ever funded. So to a first approximation, that’s $200 million in market cap per startup, or around $100 million per founder. But founders don’t retain all equity. What’s worse, dilution occurs as a function of rounds raised, so the bigger the pie, the less likely you are to own a large share of it. In practice, it’s not that bad. Stripe’s founders reportedly own around 23% of the company, and Airbnb’s founders collectively own somewhere around 40% of theirs. That works out to 11.5% for each of Stripe’s two founders and roughly 13% for each of Airbnb’s three.

Still pretty good!
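Putting the rough figures above into one toy calculation (the ~12% per-founder stake is my assumption, splitting the difference between Stripe’s 11.5% and Airbnb’s ~13%):

```python
# Toy expected-value calculation using only the rough figures above.
top_100_value = 400e9     # combined valuation of YC's top 100 companies
companies_funded = 2_000  # approximate total YC has ever funded

value_per_company = top_100_value / companies_funded  # ≈ $200M average

per_founder_stake = 0.12  # assumed, between Stripe's 11.5% and Airbnb's ~13%
expected_per_founder = value_per_company * per_founder_stake

print(f"≈ ${expected_per_founder / 1e6:.0f}M expected paper value per founder")  # ≈ $24M
```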

But remember again that we’re talking about a snapshot at a moment in time. Many of these companies are still growing rapidly. Using archive.org, I was able to go back and compile some data:

I also compiled data on the cumulative number of startups funded. Taking a simple average gets us:

Again, it’s a power law, so even if the average is really high, the median outcome is probably $0. But if you’re a risk-neutral, hits-based, utilitarian-minded person, that doesn’t matter. It’s pure expected value.
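To make the median-versus-mean point concrete, here’s a toy simulation with entirely made-up parameters (the 90% total-loss rate and the Pareto tail are assumptions, not YC data):

```python
import random

random.seed(0)

def startup_outcome() -> float:
    """Toy model: most startups are worth nothing; survivors are fat-tailed."""
    if random.random() < 0.9:               # assumed 90% total-loss rate
        return 0.0
    return random.paretovariate(1.1) * 1e6  # assumed Pareto tail for survivors

draws = sorted(startup_outcome() for _ in range(100_000))
median = draws[len(draws) // 2]
mean = sum(draws) / len(draws)
print(f"median ≈ ${median:,.0f}, mean ≈ ${mean:,.0f}")  # median $0, mean in the millions
```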

So stop complaining that it’s a poor bet. You have no idea how good it is, and it’s getting better every year.


Finally, let’s take a harder look at the happiness data.

I originally shared this chart from Matthew Killingsworth (2021):

On EA Forum, Julian Hazell and Michael Plant plot the data without the log scale and z-scores, and get a much more pessimistic interpretation:

With a linear scale, it’s easier to see how sharply returns drop off.

…but wait a minute, they don’t just plateau harder, they actually dip down! Compare again to the original chart. This isn’t just a matter of axis choice, it’s a bizarre discrepancy.

Rohin Shah asked the same question, prompting this response from Michael:

there was a discrepancy between the data provided for the paper and the graph in the paper itself. The graph plotted above used the data provided. I’m not sure what else to say without contacting the journal itself.

Kieran Healy noticed the same problem and produced a similar plot:

As the original paper explains:

Mean levels of experienced well-being (real-time feeling reports on a good–bad continuum) and evaluative well-being (overall life satisfaction) for each income band. Income axis is log transformed. Figure includes only data from people who completed both measures.

Healy is unsure, but offers the following explanation:

The z-score means in the replication package are, presumably, calculated from all the observations for each measure. But if the figure is showing a subset of the two (i.e. only observations from people who answered both questions) then the z-score means across income levels will be slightly different, depending on who is excluded… That might well be just measurement error, given the vagaries of income reporting and small-n noisiness at very high incomes, but it would directly cut against the main claim of the paper.
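To see the mechanism Healy describes, here’s a toy simulation on entirely synthetic data: if the subset answering both measures is non-random, z-scores normalized against the full sample and z-scores normalized within the subset give different means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic well-being scores for 10,000 respondents.
scores = rng.normal(5.0, 2.0, size=10_000)

# Suppose happier people are slightly more likely to answer both measures
# (a made-up selection mechanism, just to illustrate the point).
p_answer_both = 1 / (1 + np.exp(-(scores - 5.0)))
answered_both = rng.random(10_000) < p_answer_both

# z-scores computed against the FULL sample, then averaged over the subset...
z_full = (scores - scores.mean()) / scores.std()
print("subset mean of full-sample z:", z_full[answered_both].mean())  # > 0

# ...versus z-scores computed within the subset itself.
subset = scores[answered_both]
z_subset = (subset - subset.mean()) / subset.std()
print("subset mean of within-subset z:", z_subset.mean())             # exactly 0
```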

To summarize:

  • It’s unclear what’s actually happening in the original paper
  • It’s possible z-scores are being calculated at each income band amongst different subpopulations
  • This creates two separate interpretations (“wellbeing continues to increase, just more slowly”, “wellbeing caps out at $400k”) depending on which methodology you choose
  • In either case, Life Satisfaction continues to increase with income

This mirrors the Kahneman and Deaton (2010) finding that measures of wellbeing plateau, but measures of life satisfaction do not.

The supplement to Killingsworth (2021) provides some additional useful context, including an interesting section titled “why the current results might differ from past results showing a plateau in experienced well-being”:

Examining Figure 1 in the 2010 paper finding a plateau (3) shows that positive feelings (“positive affect”) appeared to have been at the response ceiling in slightly more than 70% of responses at the lowest income level, and around 87-88% of responses at upper income levels. Accordingly, the vast majority of participants in that study were indicating the highest possible level of positive feelings the scale allowed at incomes of $75,000, limiting the ability to detect further improvements in people with incomes above $75,000

You could counter-argue that this is not a statistical ceiling effect. People just genuinely have the best lives possible. I firmly disagree. As noted earlier:

The Cantril Ladder depends upon the capacity to imagine a better possible life. At the moment, it is difficult to conceive of a world in which diseases are eradicated, although such a world would make us much happier. Conversely, we can imagine someone from the distant past reporting “I’ve lost two children to disease, lost my wife to childbirth, lost half my friends to war, but the harvest is good and we have a good chance of surviving the winter months, so maybe a 7/10”. We should not take this as strong evidence that their life is nearly as good as possible.

We still have disease, we’re still superstitious and ignorant, still caught up in tribal violence.

So you don’t want to be a billionaire. Fine. But just stop fetishizing poverty. Stop acting like you can shield yourself from the moral corruption of the market so long as you achieve the right work-life balance. If you’re going to pretend to be anti-capitalist, at least quit your day job and do something with your life.

Life Advice: Become a Billionaire

In a certain view, billionaires are not merely wealthy, they are nearly god-like in their influence. As the New York Times op-ed “Abolish Billionaires” reads:

Billionaires should not exist — at least not in their present numbers, with their current globe-swallowing power.

One practical upshot of this view is that we ought to increase the marginal tax rate, break up tech monopolies, sharpen the pitchforks and so forth.

Yet an equally valid interpretation is this: if you truly see billionaires as all-powerful oligarchs who exert enormous control over world affairs, you should try very hard to become one of them.

How should you go about it? Conveniently, the NYT provides helpful, if Straussian, advice:

A few superstar corporations, many in tech, account for the bulk of American corporate profits… Artificial intelligence is creating prosperous new industries that don’t employ very many workers; left unchecked, technology is creating a world where a few billionaires control an unprecedented share of global wealth.

So there you have it. Work in tech, preferably artificial intelligence, and you’re well on your way to controlling an “unprecedented share of global wealth”. From there, the world is your sandbox.


At this point, a host of objections spring forth. I’ll eagerly greet them head-on.

Is becoming a billionaire even worth it? Doesn’t wealth stop contributing to happiness after a fairly low threshold?

In a variety of landmark empirical results, happiness is shown to correlate with log(income). So the scaling is poor, but that’s very different from plateauing completely.

Kahneman and Deaton (2010) find that some measures of wellbeing plateau entirely, but Cantril’s Ladder, a measure of life satisfaction, does not.

Even more optimistically, a recent study by Killingsworth finds that both satisfaction and well-being continue to improve well past $75,000. [1]

This seems to bear out across countries as well. Per Stevenson and Wolfers (2008) via Dan Luu:

Note that all of these charts use log scales for income, so there are diminishing returns.

Okay, but even if it doesn’t plateau entirely, log scaling is really poor. What’s the point?

Log scaling is really poor, but as the NYT reminds us, we’re talking about really extreme levels of wealth here. Sure, you only gain a few more points of happiness between incomes of $75,000 and $160,000, but Jeff Bezos is sitting comfortably at a net worth of $211,000,000,000. Our intuitions just don’t apply very well here.

It’s hard to know how happiness scales at this extraordinary level of wealth, but we can at least make a rough estimate. The Kahneman/Deaton chart shows happiness increasing around 0.45 points (on a 10 point scale) each time income doubles. Naively extrapolating, a $1,000,000,000 income is about 13 more doublings past $75,000, putting you at around 13.35. Again, it’s a 10 point scale, so that’s incoherent, but the point is you still stand to gain a substantial amount of wellbeing.
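Here’s that extrapolation as a minimal sketch, assuming a baseline ladder score of about 7.5/10 at $75,000 (my reading of the chart, not a published figure):

```python
import math

BASELINE_INCOME = 75_000    # reference point from the Kahneman/Deaton chart
BASELINE_SCORE = 7.5        # assumed ladder score at that income (approximate)
POINTS_PER_DOUBLING = 0.45  # rough slope read off the chart

def extrapolated_score(income: float) -> float:
    """Naive log-linear extrapolation of life satisfaction."""
    doublings = math.log2(income / BASELINE_INCOME)
    return BASELINE_SCORE + POINTS_PER_DOUBLING * doublings

print(extrapolated_score(1_000_000_000))  # ≈ 13.7 (≈ 13.35 if you round down to 13 doublings)
```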

Fine, maybe there are real benefits, but what about the cost?

If you’re living below the poverty line, there’s plenty of low hanging fruit you can pick to increase your happiness. Find stable shelter, consume enough calories, avoid illness, etc.

But what if you’re already an upper-middle class yuppie? Say your income is already at $160,000. How much are you sacrificing by striving for billionaire status?

Again, we’ll focus on the NYT’s suggestion that the financially ambitious pursue AI-driven tech companies. In that case, you might get paid below market for a few years while your startup gets off the ground, but the degree of financial risk is really not that great. Let’s say your income drops down to $80,000. That’s a halving, which loses you 0.45 points, but only temporarily.

Though the data only measures income, we can expect a substantial contribution from stored wealth as well. So if you’ve been making $160,000 for a few years and have some savings, going a year without income while you pitch VCs doesn’t actually drop your quality of life by that much. The golden handcuffs were inside of you the whole time.

Even if the cost is much lower than the potential benefit, the odds are really bad. Isn’t it exceptionally difficult to become a billionaire? Commensurate, or even over-commensurate with the rewards? Given that capitalism functions as a finely tuned engine precisely to push people into the creation of market value, is getting another ideological shove ever needed or justifiable?

Again, I think this gets the Marxist critique precisely wrong. It’s not that capitalism—taken broadly as a set of socioeconomic and political devices—pushes people to be as wealthy as possible; it’s that it pushes people to become laborers renting their time to generate profits for others.

Or to continue abusing the Marxist jargon: We live in an unprecedented time where more people than ever have access to the means of production. What does it really take to start an AI startup? A laptop, free wifi, access to some open courseware and some AWS credits? That’s still some barrier to entry, but it’s easier than owning a factory or being lucky enough to inherit generational wealth.

In practice, we can easily generate a reasonable lower bound for the probability. Just take the number of billionaires and divide by the total human population. You end up with a small number, but one considerably larger than 0.

But that’s unreasonably pessimistic. We can do much better. Surveying the top Y Combinator companies, I find that roughly the top 50 are valued at over $1,000,000,000. They won’t all exit successfully, and the founders won’t all own enough equity to emerge with tres commas to their net worth, but this already gets us to a much more practical and optimistic heuristic for life:

  1. Try very hard to get into YC
  2. Conditional on acceptance, try very hard to become a billionaire

Y Combinator has funded around 2000 companies ever, so as a rough estimate, your odds are 1 in 40. Still low, but not unreasonably so.

But still, we can do better. Remember that many of the Y Combinator companies were very recently funded, and since batch size has increased over time, the total distribution skews young. Instead, let’s look only at companies funded before 2017. Of the top 50 companies, only 4 were founded after 2017. According to the YC Database, there were around 1400 companies founded before 2017. That’s 46 unicorns out of roughly 1400 companies, which gets our odds up to about 1 in 30.
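Collecting the three estimates in one place (the billionaire count is an outside figure I’m adding, roughly Forbes’ 2021 tally; the YC numbers are the rough counts above):

```python
world_population = 8e9
billionaires = 2_700  # roughly Forbes' 2021 count (an outside figure, not from YC data)
print(f"naive lower bound: 1 in {world_population / billionaires:,.0f}")  # ~1 in 3,000,000

yc_companies = 2_000
yc_unicorns = 50      # top ~50 YC companies valued over $1B
print(f"conditional on YC: 1 in {yc_companies / yc_unicorns:.0f}")        # 1 in 40

pre_2017_companies = 1_400
pre_2017_unicorns = 50 - 4  # 4 of the top 50 were founded after 2017
print(f"pre-2017 cohorts: 1 in {pre_2017_companies / pre_2017_unicorns:.1f}")  # ~1 in 30
```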

You could object that there were other startup accelerators, and we wouldn’t have known at the time that YC was the right one to join. Or more broadly, that there were other viable career paths to becoming a billionaire, and that startup founding was not obviously among the surest paths to extreme wealth.

That’s all fair, and you should accordingly adjust the odds downwards, but even diluted by a factor of 10, the expected value looks pretty good.

What expected value? Is it even that good to become a billionaire? Maybe you get a few more points of “life satisfaction” or whatever, but it’s still a steep cliff.

Again, it’s true that wealth generates diminishing returns to happiness, but that’s not the whole story. Notably, wealth generates exponential returns to itself! As the NYT helpfully explains, wealth “serves primarily to perpetuate ever-greater wealth”. The upshot is, despite what you’ve seen for a single moment in time, it’s not at all clear what wealth-happiness scaling looks like in the long run.

Per the NYT as well, consider that “tech instills a winner-take-all dynamic across much of the economy”. That means power law returns, which imply exponential returns from startup rank to wealth:

So the relevant function is not really log(wealth). It’s more like log(wealth(wealth(startup rank))), or log(e^x^y). Does that end up being log scale? Linear? Exponential? Without knowing the specific parameters, it’s impossible to tell. Worst case we’re back to the Kahneman/Deaton case of diminishing returns, but it’s entirely possible trying harder actually generates exponential returns to happiness.
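To see why the answer hinges on how the exponentials compose, here’s a purely formal illustration (with x standing in for startup rank and y for a compounding parameter):

$$\log\left(e^{(e^x)}\right) = e^x \qquad \text{vs.} \qquad \log\left((e^x)^y\right) = xy$$

If the compounding sits inside the exponent, the log strips only one layer and you’re left with exponential returns; if it merely multiplies the exponent, everything collapses back to linear.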

Fine fine fine. I’m sold that in some abstract theoretical sense the math works out. But you’re distracting from the much more salient reality that this is all horrible. People shouldn’t be selfishly exploiting economic inequality for personal benefit, they should be working to end it! If utility really is log(wealth), we can massively increase aggregate utility simply through distribution.

Actually, if you care about helping others, the case for becoming a billionaire is dramatically stronger. An egoist is stuck with shitty log returns, but a truly empathetic person can always give to the poor and thus models (aggregate) utility as a linear function of wealth. Barring really extreme cases, there are no diminishing returns.
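A minimal derivation of that linearity, assuming n recipients who each start at income w0 with log utility, and a donation W that is small relative to n·w0:

$$\sum_{i=1}^{n}\left[\log\left(w_0 + \frac{W}{n}\right) - \log w_0\right] = n\log\left(1 + \frac{W}{n w_0}\right) \approx \frac{W}{w_0}$$

The aggregate gain is linear in W, with slope 1/w0, so the poorer the recipients, the steeper the returns.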

Or taking the GiveWell analysis literally [2], a billion dollars could save 200,000 lives! It’s very hard to argue that your time could be better spent on any other cause.
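The arithmetic behind that number is just GiveWell’s oft-cited rough figure of about $5,000 per life saved (an approximation they themselves hedge; see footnote [2]) run through one division:

$$\$1{,}000{,}000{,}000 \div \$5{,}000 \text{ per life} \approx 200{,}000 \text{ lives}$$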

Even that is somewhat unimaginative. You could instead try to start a Charter City that pulls a billion people out of poverty, or fund an anti-aging revolution, or give all your money to New Science and fundamentally change the way science is conducted. Whatever cliches you’ve heard about how “power corrupts”, there’s literally no actual reason you cannot do these things.

That’s fine in theory, but in practice you can only become a billionaire by exploiting others. As Anand Giridharadas put it, “the winners of our age must be challenged to do more good. But never, ever tell them to do less harm… The Aspen Consensus holds that capitalism’s rough edges must be sanded and its surplus fruit shared, but the underlying system must never be questioned.”

In a Marxist sense, this is literally and inescapably true. Profits are the result of exploitation, pure and simple.

Still, it’s worth asking who gets exploited. For example, it would obviously not be ethical to run a company based on slave labor, even if you end up donating the proceeds back to the slaves. In the most generous interpretation, this is at best morally neutral.

But what if you’re running a company that “exploits” wealthy tech workers, and donates the proceeds to stop human trafficking? In some sense you are still guilty of capitalist exploitation, but it’s not a sense that matters.

Practically speaking, there are plenty of tech companies that de facto, albeit through several layers of abstraction, do exploit very poor people. They might be behind an API, but the poor are still exploited for mining your rare earth minerals, categorizing gore for your content filter, and getting disproportionately exposed to the impacts of climate change from your energy consumption.

My view is simply that slavery, including severely underpaid work and various forms of indentured servitude, is categorically wrong and you should not build a business that relies on it.

But even accepting this imperative, and accepting the Marxist ideological framework, it is not true that becoming a billionaire necessitates generating more suffering than you have the capacity to eradicate.

Hold on, that’s all just a shitty excuse. I can already picture the tech entrepreneur who claims they’re only becoming rich to give to the poor, but ends up donating their wealth at a meager trickle.

Look, that’s fine, but you’re not talking about life advice anymore, you’re just debating status.

I don’t care if we worship Elon Musk or build statues to him on Mars. I’m making a specific claim about what you ought to do with your life.

If you’re more worried about optics and guilt by association than you are with reasoning about and acting on what’s good, you’re completely missing the point.

Okay, but it’s still not a coincidence that so many billionaires are shitty people right? Either you have to be shitty to get there, or getting there makes you shitty. Either way it’s a bad outcome and claiming higher motives is dishonest.

My view is that most people, in general, are pretty shitty, so it’s not really surprising that this holds true in populations that haven’t specifically been selected for empathy and altruism. That’s why I started with and devoted the bulk of this post to the self-interested case. This whole secondary discussion is predicated precisely on the condition that you want to know how to help other people.

Unless you seriously think being a decent person is fundamentally incompatible with becoming a billionaire (remembering that the future will be different from the past), this is all just a really stupid version of Newcomb’s paradox. The argument boils down to the thought experiment:

  • Box A is clear, and always contains $100,000
  • Box B is opaque. You’ve been told it contains $1,000,000,000.

Your choice is between taking Box A once a year, or spending, I don’t know, six months working on a startup, applying to Y Combinator, and getting a shot at Box B.

And your counterargument, with $1,000,000,000 sitting right in front of you, is that you have a vague sense based on anecdotes and selection bias that maybe taking the money is bad.

Come on. Obviously you should take the box. [3]


Thus far, I’ve mostly considered historical factors, assuming the future looks like the present. But the future could be very different! How does this strategy perform in different scenarios? I’d like to suggest it’s at least a reasonable hedge:

  • If income inequality continues to increase, it’s even more important to make sure you’re part of the oligarchical class
  • On the other hand, if inequality decreases and we move towards a socialist or UBI world, the opportunity cost goes down as the social safety net improves

That’s just the domestic version. A similar argument applies internationally as well:

  • If some countries remain very poor, you’ll always be able to retire with relatively modest savings and still have a very high quality of life
  • If all currently poor countries become rich, such an enormous amount of global wealth will be generated that it’s an even better bet to become an oligarch

I’m sort of kidding. It’s not that you literally have to go out and exert political influence. It’s just that in the latter scenario, there are such good returns on capital and so much wealth to go around that you ought to make sure you’re taking advantage of it.


This is a contentious topic prone to misinterpretation and heightened passions. So let me be clear:

  • The recommendation to become a billionaire is both serious and literal.

  • I understand that being a founder carries a high risk of failure, but it carries a low risk of actually ruining your life and personal finances

  • I understand that the odds of becoming a billionaire are low, but it doesn’t matter if you only consider the conditional probabilities. What’s the cost/benefit of taking 6 months off your day job to work on a startup? Conditional on being successful there, what’s the cost/benefit of trying very hard to seek out venture capital? Given that you’ve raised money, what’s the cost/benefit of trying to make a billion dollars? The bet sounds insane to begin with, but at each step you’re taking on a very reasonable level of risk (see the sketch after this list).

  • I don’t endorse all facets of the current economic system, but unless you are actually involved in starting a socialist revolution, I recommend trying to do your best with the situation we have.

  • I’m not making any claim about the correct status of existing billionaires. But the closer your view is to “billionaires are god-like oligarchs who control everything”, the more seriously you should consider joining their class.
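Here’s the sketch promised above: a chain of individually reasonable conditional bets composing into a tiny unconditional probability. Every number is hypothetical, chosen only to illustrate the structure of the argument.

```python
# All probabilities are hypothetical, for illustration only.
steps = [
    ("spend 6 months building something", 1.00),  # entirely in your control
    ("get into YC / raise a seed round",  0.05),  # assumed
    ("reach a $1B+ valuation",            0.025), # ~top 50 of 2000, per the post
    ("retain enough equity for $1B net",  0.30),  # assumed
]

p = 1.0
for step, conditional_p in steps:
    p *= conditional_p
    print(f"P(success through '{step}') = {p:.5f}")
# The unconditional odds come out tiny, but no single step is an unreasonable risk.
```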

All things considered, the burden of proof is on you. And I’d like to suggest that it will take a lot to make any other career choice even moderately competitive.


Appendix A: Arrogant Base Rates

Alexey Guzey will yell at me if I don’t acknowledge that people who actually succeed in becoming billionaires probably do not think in terms of base rates and conditional goods.

As a post-truth Aristotle might have said: “it is the mark of an educated mind to be able to entertain a thought without letting it become an infohazard.”

Alternatively, the argument laid out here demonstrates precisely that there’s nothing inherently conservative or modest about reasoning from base rates, and so the accusation falls flat on its face.

As for the conditional goods, I’ll add: no one should start a company just because they want to get rich, but it doesn’t hurt to be well positioned in the first place.

Appendix B: Aptitude and Asymmetric Uncertainty

In So Good They Can’t Ignore You, Cal Newport describes Steve Jobs’s humble beginnings:

At one point, he left his job at Atari for several months to make a mendicant’s spiritual journey through India, and on returning home he began to train seriously at the nearby Los Altos Zen Center.

…these are hardly the actions of someone passionate about technology and entrepreneurship, yet this was less than a year before Jobs started Apple Computer. In other words, in the months leading up to the start of his visionary company, Steve Jobs was something of a conflicted young man, seeking spiritual enlightenment and dabbling in electronics only when it promised to earn him quick cash.

Or recall from my review of The Making of Prince of Persia:

In the course of making Prince of Persia, Mechner:

  • Takes 6 months off to write a screenplay.
  • Drives out to Skywalker Ranch, meets George Lucas, fails to get his script acquired.
  • Applies to NYU film school, gets rejected.

In another series of anecdotes, Alexey Guzey illustrates that visionaries are not natural-born leaders:

Musk realized that he could have handled some of the situations with employees better. “I had never really run a team of any sort before,” Musk said. “I’d never been a sports captain or a captain of anything or managed a single person.”

That’s all to say: if your objection to all of the above is that you are not predisposed to becoming a visionary billionaire tech CEO, it doesn’t matter. Again, the billion dollars is sitting on the table and you’re choosing not to take it on the basis of a vague intuition.

Consider as well that the future is likely to undergo continual shifts in what constitutes the “right aptitude”. In Zero to One, Peter Thiel describes the PayPal Mafia’s shared childhood interest in building bombs. That probably would not have made for good founder material 20 or 50 years earlier, but in the late 90s it was highly profitable to be precocious, uninterested in personal safety, and willing to disregard legal concerns.

Yesterday’s leaders were charismatic strongmen who could dominate a room. Tomorrow’s leaders may be introverts better at writing than they are at speaking. Yesterday’s crypto billionaires were libertarians who flouted fiat and hated the Fed. Tomorrow’s may be whoever has the closest ties to the SEC, or whoever best understands the shifting regulatory environment.

I’m not saying any of this is true, just that it’s possible.

You might object that uncertainty is deleterious to a bet. To a risk-averse actor, higher variance is equivalent to lower expected value. But when the base rate of success is low to begin with, uncertainty provides an asymmetric advantage. Say your odds of becoming a billionaire are 1% with a standard deviation of 0.5%. As variance increases, the odds can’t go below 0, so the bet actually improves.
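One way to formalize the asymmetry is a Monte Carlo sketch: treat your 1% estimate as a normal distribution clipped at zero, and watch the mean rise with the spread (the clipping model is my assumption, not anything rigorous):

```python
import random

random.seed(0)

def mean_clipped_odds(sd: float, base: float = 0.01, n: int = 200_000) -> float:
    """Mean success probability when the estimate is Normal(base, sd) clipped at 0."""
    total = sum(max(0.0, random.gauss(base, sd)) for _ in range(n))
    return total / n

for sd in (0.005, 0.01, 0.02, 0.05):
    print(f"sd = {sd:.3f} -> mean odds ≈ {mean_clipped_odds(sd):.4f}")
```

The mean climbs from roughly 1% toward 2.5% as the spread widens, purely because the downside is floored at zero while the upside is not.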

For these sorts of low-probability, high-return scenarios, the question isn’t “is it likely?”, but rather “is it at all possible?” The kind of uncertainty I’ve laid out here makes it less reasonable to rule out the possibility that you could become a billionaire.

Stop making excuses and take the box already.


Footnotes

[1] Michael Plant and Julian Hazell cast some doubt on the Killingsworth result, but the cause of the data discrepancy is unclear.

[2] GiveWell cautions against taking their estimates too literally as absolute measures, but it’s still an okay estimate.

[3] For what it’s worth, some version of this logic applies to the original problem as well.

Coda
https://www.youtube.com/watch?v=lI5w2QwdYik

Does Moral Philosophy Drive Moral Progress?

In the conclusion to Moral Uncertainty, Krister Bykvist, Toby Ord, and William MacAskill write:

Every generation in the past has committed tremendous moral wrongs on the basis of false moral views. Moral atrocities such as slavery, the subjection of women, the persecution of non-heterosexuals, and the Holocaust were, of course, driven in part by the self-interest of those who were in power. But they were also enabled and strengthened by the common-sense moral views of society at the time about what groups were worthy of moral concern.

Given the importance of figuring out what morality requires of us, the amount of investment by society into this question is astonishingly small. The world currently has an annual purchasing-power-adjusted gross product of about $127 trillion. Of that amount, a vanishingly small fraction—probably less than 0.05%—goes to directly addressing the question: What ought we to do?

They continue:

Even just over the last few hundred years, Locke influenced the American Revolution and constitution, Mill influenced the [women’s] suffrage movement, Marx helped birth socialism and communism, and Singer helped spark the animal rights movement.

This is a tempting view, but I don’t think it captures the actual causes of moral progress. After all, there were many advocates for animal well-being thousands of years ago, and yet factory farming persists to this day. At the very least, it doesn’t seem that discovering the correct moral view is sufficient for achieving moral progress in actuality.

As quoted in the Angulimālīya Sūtra, the Buddha is recorded saying:

There are no beings who have not been one’s mother, who have not been one’s sister through generations of wandering in beginningless and endless saṃsāra. Even one who is a dog has been one’s father, for the world of living beings is like a dancer. Therefore, one’s own flesh and the flesh of another are a single flesh, so Buddhas do not eat meat.

A few hundred years later, in the 3rd century BCE, The Edict of Emperor Ashoka reads:

Here (in my domain) no living beings are to be slaughtered or offered in sacrifice. Nor should festivals be held, for Beloved-of-the-Gods, King Piyadasi, sees much to object to in such festivals

Of course, the slaughter of living beings and consumption of meat would continue for thousands of years. In fact, as I argued previously, the treatment of animals has likely declined since Ashoka’s time, and we now undertake factory farming of unprecedented scale and brutality.

I don’t believe that this is due to our “false moral views”. Unfortunately, we seem unlikely to give up factory farming until we develop cost-competitive lab grown meat, or flavor-competitive plant-based alternatives.

Similarly, the abolition of slavery was plausibly more economically than morally motivated. Per Wikipedia:

…the moral concerns of the abolitionists were not necessarily the dominant sentiments in the North. Many Northerners (including Lincoln) opposed slavery also because they feared that rich slave owners would buy up the best lands and block opportunity for free white farmers using family and hired labor. Free Soilers joined the Republican party in 1854, with their appeal to powerful demands in the North through a broader commitment to “free labor” principles. Fear of the “Slave Power” had a far greater appeal to Northern self-interest than did abolitionist arguments based on the plight of black slaves in the South.

Switching gears, here’s a much more explicit case of moral philosophy failing to enable social change. From Peter Singer:

Jeremy Bentham, before the 1832 Reform Act was passed in Britain, argued for extending the vote to all men. And he wrote to his colleagues that he would have included women in that statement, except that it would be ridiculed, and, therefore, he would lose the chance of getting universal male suffrage. So he was aware of exactly this kind of argument. Bentham also wrote several essays arguing against the criminalization of sodomy, but he never published them in his lifetime, for the same reason.

Here we have a case where a moral philosopher explicitly acknowledges that he has discovered a more progressive moral view, but declines to even publish it. So the fact that Bentham made progress in moral philosophy did not allow him to make any actual moral progress. The two are totally decoupled.

What about gay rights? In the Bykvist et al. narrative, some moral philosopher comes around, determines that homosexuality is okay, and everyone celebrates. But what we’ve seen in the last few decades was not a slow dwindling of homophobia, but a massive resurgence of previously vanquished attitudes. The moral arc has not been monotonic.

So what really did happen? Here’s one alternative narrative:

So rather than being driven by moral philosophy, what we have instead is a societal shift driven by a scientific advance, which subsequently allowed rapid liberalization.

Can we investigate the claim on a more macro scale? There are some charts from Our World in Data comparing human rights to GDP per capita:

You could argue that this is really just tracking some underlying variable like “industrialization” or “western culture”. But here are some more breakdowns by continent:

Admittedly, even assuming there is a causal relationship, I don’t know which way it goes! There are numerous papers demonstrating the link between democracy and economic growth, so there is at least some reason to believe that economic progress is not primary.


Overall, I would guess that “progress” occurs as a confluence of various domains. Perhaps without a social need, or without a moral demand, oral contraceptives would never have been invented in the first place. But I remain skeptical that investing directly in moral philosophy will accelerate humanity’s march out of moral atrocities.

There are, after all, currently concentration camps in China, a famine in Yemen, and a genocide in Myanmar. The bottleneck does not seem to be our “false moral views”.

Finally, I’d like to posit that the perspective set forth by Bykvist et al. is all too compelling. Consider their statement again: “Every generation in the past has committed tremendous moral wrongs on the basis of false moral views.”

It’s a comfortable view, and one that allows us to put a kind of moral distance between ourselves and the horrors of the past. We get to say “yikes, that was bad, but luckily we’ve learned better now, and won’t repeat those mistakes”. This view allows us, in short, to tell ourselves that we are more civilized than our monstrous ancestors. That their mistakes do not reflect badly on us.

As tempting as it is to wash away our past atrocities under the guise of ignorance, I’m worried humanity just regularly and knowingly does the wrong thing.