Every Grant is also a Bounty

The people you see have a reputation. Some of them are high-status, some are low-status. Some have their reputation conferred upon them by a higher power, some have cultivated it for themselves.

What you might not realize is that reputation can be taken. Their reputation could one day be yours. When I look out over the internet, I don’t see faces. Just a turbulent ocean of bounties waiting to be harvested.

This is the art of the takedown piece. It can be something highly targeted like Guzey’s masterpiece “‘Why We Sleep’ Is Riddled with Scientific and Factual Errors”, or my own “Austen Allred is Consistently Deceptive”. It can be about misrepresenting research (see Stuart Ritchie on Johann Hari), or just a series of theoretical arguments. By criticizing someone more prominent than yourself, you sap their life force, attempt to take their reputation for your own, and challenge their high standing.

For the modern public intellectual, shelf life is nasty, brutish and short.

This comes to a head at the moment of fundraising. Someone previously unknown can overnight become a proxy for a powerful and respected figure. If you take money from Open Philanthropy, you are now fair game for anyone with a bone to pick with EA.

This ranges from the ridiculous (if you ever appear within 3 degrees of Peter Thiel, you will one day appear in a tortured op-ed sentence trying to link you to Donald Trump [1]) to the perfectly legitimate. Punching down is wrong, but punching up is what democracy is made of. Once you accept money, especially if it’s from a prominent donor, there is now a target on your back.

Which, in most cases, is good actually! That’s the market for ideas at work. So long as the critiques are intellectually honest, adversarial truth-finding is the best strategy we have for figuring out what’s right. As Agnes Callard once put it:

Socrates came up with a method for doing that. His method was — I call it the adversarial division of epistemic labor. If you and I both want to find the truth, we can actually divide up the process of finding the truth or acquiring knowledge into two subordinate jobs. You do one and I do one. If we each do our job, together we can inquire. That’s what I take Socratic philosophy to be doing, and the dialogues present that to us…

The reason why we have adversarial systems for pursuing certain goals is that there’s actually a tension inside the goal itself. The goal threatens to pull itself apart. In the case of justice, we have the goal that we want to convict the guilty, and we want to acquit the innocent. And those are not the same goal.

They pull apart a little bit because if you’re really, really, really committed to acquitting the innocent, you’ll be like, “Look, if there’s any doubt, if there’s any possible doubt of any kind, we should acquit.” Then you’re not going to get to the other goal. It’s that tension inside of the goal itself of justice that’s generating need for the adversarial system.


What I don’t entirely like is that, to date, these bounties have been largely reputational. It’s fine to have some status on the line, but for someone in a grant-making position, the bounty should be financial.

To take a concrete example, say Open Philanthropy gives a researcher $6M. Presumably, they’ve already done a good amount of due diligence, and they believe the research is very likely legitimate. But in theory, it might be wrong, and if so we should ask: what would be the value of discovering that error?

If you figured it out ahead of time: at least the $6M you could save OP. Even if you figured it out after the fact, it would be worth a lot to know that we shouldn’t pump more money into this line of research. Plus, in both cases, there’s the intrinsic value of figuring out that a particular theory is wrong.
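To put rough numbers on it, here’s a minimal back-of-the-envelope sketch. The grant size comes from the example above; the 5% chance of a fatal, catchable error is an assumption I’m making up purely for illustration:

```python
# Sketch: rough expected value of an error bounty. The grant size comes
# from the example above; the 5% chance of a fatal, catchable error is
# a made-up illustrative figure.

grant_size = 6_000_000   # dollars at stake in the grant
p_fatal_error = 0.05     # grantmaker's credence that the work is wrong

# Expected savings from catching the error before the money goes out.
expected_savings = p_fatal_error * grant_size
print(f"Expected savings: ${expected_savings:,.0f}")  # -> $300,000

# Any bounty priced below this is positive expected value for the
# grantmaker, before even counting the intrinsic value of the
# knowledge a good critique produces.
```

Even at modest error probabilities, the rational bounty is orders of magnitude larger than the blogosphere norm of a few hundred dollars.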

In a really ideal world, you might not just want this to be Open Philanthropy’s money. You might want the researcher themselves to say “It would be very valuable to me to know that I’ve made an error. It would both improve the quality of my work, and potentially save me a lot of time if you can show that a research direction is wrong. So please look at my work for errors, and if you find any, I’ll pay you money.”

That sounds absurdly earnest, right? It could never happen in academia. But in the blogosphere, it’s not entirely unusual. For years, every Nintil article has opened with the line “Is this article wrong?”, linking to a page where you can find bounties of up to $200 for correcting errors in his writing (he doubled the amount in 2019), and a Mistakes page where he keeps track of payouts. Gavin Leech has a similar page offering $1 to $50 for reporting errors.

Although $200 is laudable, it’s still a fairly small amount of money compared to grant sizes. Since we can’t really expect researchers to put up their own capital, this role should fall to the grant makers themselves.

For example, I recently pledged to give a bunch of money to Slime Mold Time Mold. Accordingly, I will also place a bounty on their work, with more details in an upcoming post. If you prove that their work is fraudulent, poor science or otherwise wrong, I should pay you. Both because you’re saving me from making a bad grant, and because of the intrinsic value of any knowledge you produce in the course of writing a critique.

Another version of this is subsidized bets. A researcher makes a claim, admits they’re not entirely sure it’s true, but is willing to assign a concrete credence to it. They then find a partner on the other side and commit publicly to pay out if they’re wrong. Unsurprisingly, the authors above have pages detailing their betting history, as do others like Stephen Malina and Bryan Caplan.

But as with bounties, I don’t think we can expect researchers to bet as much as would be optimal. Grant makers who fund those researchers, and even more importantly, grant makers who rely on their research, should fund bets. For example, GiveWell relies on a lot of findings from development economics. They should ask the authors of those studies to bet on the probability that their findings will replicate, and fund those bets. (E.g. a study shows that deworming increases income by X%; GiveWell publicly offers to bet anyone $100,000 that deworming does actually increase income by at least X%.)
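As a rough sketch of how the stakes on such a bet might be sized (the 80% credence and the $100,000 stake here are purely illustrative, not GiveWell’s actual numbers):

```python
# Sketch: fair-odds arithmetic for a subsidized replication bet.
# Assumes the researcher assigns 80% credence to their finding
# replicating, and the grant maker stakes $100,000 on that side.
# Both numbers are made up for illustration.

credence = 0.80   # researcher's stated probability the finding replicates
stake = 100_000   # dollars the grant maker puts up on the "replicates" side

# At odds matching the stated credence, the two stakes satisfy
#   credence * challenger_stake == (1 - credence) * stake
challenger_stake = stake * (1 - credence) / credence

print(f"Challenger risks ${challenger_stake:,.0f} to win ${stake:,.0f}")
# -> Challenger risks $25,000 to win $100,000

# Anyone who believes the replication probability is below 80% finds
# this bet positive expected value, so a standing offer that nobody
# takes is itself weak evidence in the finding's favor.
```

The nice property is that the offer is informative even when no one accepts it: an unclaimed bet at stated odds signals that critics couldn’t find a profitable objection.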

Nick Whitaker writes about “Sane Charity”, highlighting the issue that, in general, nonprofits are not really accountable to anything except their own donors. There are no market forces, you can’t short a nonprofit, and so on. I think having donors, nonprofits and researchers place bounties, bets and open prediction markets on their beliefs would be a good start.

–––

Footnotes
[1] For a particularly egregious example, see Torres’s:

the billionaire libertarian and Donald Trump supporter Peter Thiel, who once gave the keynote address at an EA conference, has donated large sums of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply intertwined with longtermist values.