Every Grant is also a Bounty

The people you see have a reputation. Some of them are high-status, some are low-status. Some have their reputation conferred upon them by a higher power, some have cultivated it for themselves.

What you might not realize is that reputation can be taken. Their reputation could one day be yours. When I look out over the internet, I don’t see faces. Just a turbulent ocean of bounties waiting to be harvested.

This is the art of the takedown piece. It can be something highly targeted like Guzey’s masterpiece “Why We Sleep” Is Riddled with Scientific and Factual Errors, or my own Austen Allred is Consistently Deceptive. It can be about misrepresenting research (see Stuart Ritchie on Johann Hari), or just a series of theoretical arguments. By criticizing someone more prominent than yourself, you sap their life force, attempt to take their reputation for your own, and challenge their high standing.

For the modern public intellectual, shelf life is nasty, brutish and short.

This comes to a particularly critical point at the moment of fundraising. Someone previously unknown can overnight become a proxy for a powerful and respected figure. If you take money from Open Philanthropy, you are now fair game for anyone with a bone to pick with EA.

This ranges from the ridiculous (if you ever appear within 3 degrees of Peter Thiel, you will one day appear in a tortured op-ed sentence trying to link you to Donald Trump [1]) to the perfectly legitimate. Punching down is wrong, but punching up is what democracy is made of. Once you accept money, especially if it’s from a prominent donor, there is now a target on your back.

Which, in most cases, is good actually! That’s the market for ideas at work. So long as the critiques are intellectually honest, adversarial truth-finding is the best strategy we have for figuring out what’s right. As Agnes Callard once put it:

Socrates came up with a method for doing that. His method was — I call it the adversarial division of epistemic labor. If you and I both want to find the truth, we can actually divide up the process of finding the truth or acquiring knowledge into two subordinate jobs. You do one and I do one. If we each do our job, together we can inquire. That’s what I take Socratic philosophy to be doing, and the dialogues present that to us…

The reason why we have adversarial systems for pursuing certain goals is that there’s actually a tension inside the goal itself. The goal threatens to pull itself apart. In the case of justice, we have the goal that we want to convict the guilty, and we want to acquit the innocent. And those are not the same goal.

They pull apart a little bit because if you’re really, really, really committed to acquitting the innocent, you’ll be like, “Look, if there’s any doubt, if there’s any possible doubt of any kind, we should acquit.” Then you’re not going to get to the other goal. It’s that tension inside of the goal itself of justice that’s generating need for the adversarial system.


What I don’t entirely like is that to date, these bounties have been largely reputational. It’s fine to have some status on the line, but for someone in a grant making position, the bounty should be financial.

To take a concrete example, say Open Philanthropy gives a researcher $6M. Presumably, they’ve already done a good amount of due diligence, and they believe the research is very likely legitimate. But in theory, it might be wrong, and if so we should ask: what would be the value of discovering that error?

If you figured it out ahead of time: at least the $6M that you could save OP. Even if you figured it out after the fact, it would be worth a lot to know that we shouldn’t pump more money into this line of research. Plus, in both cases, there’s the intrinsic value of learning that a particular theory is wrong.
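To put rough numbers on this, here’s a minimal back-of-the-envelope sketch in Python. Only the $6M grant size comes from the example above; the error probability and follow-on funding are numbers I’m making up purely for illustration.

```python
# Back-of-the-envelope value of an error-finding bounty.
# Only the grant size comes from the example; the other inputs are hypothetical.

grant_size = 6_000_000         # the $6M grant from the example above
p_error = 0.05                 # assumed probability the research is wrong
followup_funding = 10_000_000  # assumed future funding a caught error would redirect

# Caught before the grant is made: you save the grant itself, plus whatever
# follow-up money would have chased the bad result.
value_before = p_error * (grant_size + followup_funding)

# Caught after: the grant is already spent, but the follow-up money is saved.
value_after = p_error * followup_funding

print(f"Expected value of review, pre-grant:  ${value_before:,.0f}")   # $800,000
print(f"Expected value of review, post-grant: ${value_after:,.0f}")    # $500,000
```

Even under these made-up numbers, a five-figure bounty looks cheap relative to the expected value of the review it buys.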

In a really ideal world, you might not just want this to be Open Philanthropy’s money. You might want the researcher themselves to say “It would be very valuable to me to know that I’ve made an error. It would both improve the quality of my work, and potentially save me a lot of time if you can show that a research direction is wrong. So please look at my work for errors, and if you find any, I’ll pay you money.”

That sounds absurdly earnest right? It could never happen in academia. But in the blogosphere, it’s not entirely unusual. For years, every Nintil article has opened with the line “Is this article wrong?” linking to a page where you could find bounties going up to $200 for correcting errors in his writing (he doubled it in 2019), and a Mistakes page where he keeps track of payouts. Gavin Leech has a similar page offering $1 to $50 for reporting errors.

Although $200 is laudable, it’s still a fairly small amount of money compared to grant sizes. Since we can’t really expect researchers to put up their own capital, this role should fall to the grant makers themselves.

For example, I recently pledged to give a bunch of money to Slime Mold Time Mold. Accordingly, I will also place a bounty on their work, with more details in an upcoming post. If you prove that their work is fraudulent, poor science, or otherwise wrong, I should pay you. Both because you’re saving me from making a bad grant, and because of the intrinsic value of any knowledge you produce in the course of writing a critique.

Another version of this is subsidized bets. A researcher makes a claim; they admit that they’re not entirely sure it’s true, but they’re willing to assign a concrete credence to it. So you find a partner on the other side, and commit publicly to pay out if you’re wrong. Unsurprisingly, the authors above have pages detailing their betting history, as do others like Stephen Malina and Bryan Caplan.

But as with bounties, I don’t think we can expect researchers to bet as much as would be optimal. Grant makers who fund those researchers, and even more importantly, grant makers who rely on their research, should fund bets. For example, GiveWell relies on a lot of findings from development economics. They should ask the authors of those studies to make bets on the probability that their findings will replicate in the future, and fund those bets. (E.g., a study shows that deworming increases income by X%, and GiveWell publicly offers to bet anyone $100,000 that deworming does actually increase income by at least X%.)
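As a sketch of why this is affordable: if the grant maker’s credence in the finding is high, the expected cost of even a large public bet is small. Only the $100,000 stake below comes from the example; the 90% credence is an assumption for illustration.

```python
# Expected cost to a grant maker of subsidizing a public replication bet.

stake = 100_000      # the $100k bet from the deworming example above
p_replicates = 0.9   # assumed: grant maker's credence that the finding replicates

# At even odds, the grant maker pays out only if the finding fails to replicate.
expected_cost = (1 - p_replicates) * stake

print(f"Expected cost of offering the bet: ${expected_cost:,.0f}")  # $10,000
```

In expectation, the $100k offer costs about $10k, while handing skeptics a six-figure incentive to scrutinize the underlying study.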

Nick Whitaker writes about Sane Charity, highlighting the issue that in general, nonprofits are not really accountable to anything except their own donors. There are no market forces, you can’t short a nonprofit, etc. I think having donors, nonprofits and researchers place bounties, bets and open prediction markets on their beliefs would be a good start.

–––

Footnotes
[1] For a particularly egregious example, see Torres’:

the billionaire libertarian and Donald Trump supporter Peter Thiel, who once gave the keynote address at an EA conference, has donated large sums of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply intertwined with longtermist values.

You Can Get Fluvoxamine

[TLDR: I paid $95 for a 10-minute video consultation with a doctor, told them I was depressed and wanted fluvoxamine, and got my prescription immediately.]

I’m not a doctor, and this isn’t medical advice. If you want information on the status of fluvoxamine as a Covid treatment, you can see the evidence base in the appendix, but interpreting those results isn’t my business.

I’m just here to tell you that if you want fluvoxamine, you can get it.

Years ago, some of my friends were into downloading apps that would get you a 10-minute consultation with a doctor in order to quickly acquire a prescription for medical marijuana. Today, similar apps exist for a wide range of medications, and with a bit of Googling, you can find one that will prescribe you fluvoxamine.

What’s required on your end? In my case, $95, 10 minutes of my time, and some white lies about my mental health. Fluvoxamine is only prescribed right now for depression and anxiety, so if you want it, my advice is to say that:

  • You have an ongoing history of moderate depression and anxiety
  • You have taken fluvoxamine in the past, and it’s helped

And that’s basically it. Because there are many other treatments for depression, you do specifically have to ask for fluvoxamine by name. If they try to give you something else, say that you’ve tried it before and didn’t like the side effects (weight gain, insomnia, headaches, whatever).

One more note, and this is critical: unless you are actually suicidal, do not tell your doctor that you have plans to commit suicide, to hurt yourself or others, or do anything that sounds like an immediate threat. This puts you at risk of being put involuntarily in an inpatient program, and you don’t want that.

Finally, you might ask: isn’t this super unethical? Aren’t you not supposed to lie to doctors to get drugs? Maybe, I don’t know; this isn’t medical advice, and it’s not really ethical advice either. I think the only real potential harms here are that we consume so much fluvoxamine that there isn’t enough for depressed people, or that doctors start taking actual depressed patients who want fluvoxamine less seriously. As far as I can tell, there isn’t currently a shortage; as to the latter concern, I couldn’t really say.

Appendix

Again, this isn’t medical advice. You shouldn’t take any of these results or pieces of news coverage as evidence that fluvoxamine works and that the benefits outweigh the costs. I’m literally only adding this to cover my own ass and make the point that fluvoxamine is a normal mainstream thing and not some weird conspiracy drug.

Here’s the Lancet article, and the JAMA article.

Here’s Kelsey Piper at Vox:

One medication the TOGETHER trial found strong results for, fluvoxamine, is generally used as an antidepressant and to treat obsessive-compulsive disorder. But it appears to reduce the risk of needing hospitalization or medical observation for Covid-19 by about 30 percent, and by considerably more among those patients who stick with the 10-day course of medication. Unlike monoclonal antibodies, fluvoxamine can be taken as a pill at home — which has been an important priority for scientists researching treatments, because it means that patients can take their medication without needing to leave the home and without straining a hospital system that is expected to be overwhelmed.

“We would not expect it to be affected by which variants” a person is sick with, Angela Reiersen, a psychiatrist at Washington University in St. Louis whose research turned up fluvoxamine as a promising anti-Covid candidate, told me.

And here’s a Wall Street Journal article headlined “Is Fluvoxamine the Covid Drug We’ve Been Waiting For?” with subheading “A 10-day treatment costs only $4 and appears to greatly reduce symptoms, hospitalization and death.”:

A small randomized control trial last year by psychiatrists at the Washington University School of Medicine in St. Louis was a spectacular success: None of the 80 participants who started fluvoxamine within seven days of developing symptoms deteriorated. In the placebo group, six of the 72 patients got worse, and four were hospitalized. The results were published in November 2020 in the Journal of the American Medical Association and inspired a real-world experiment.

…The three fluvoxamine trials were conducted while different variants were circulating, so there’s no reason to think the drug wouldn’t work as well against Omicron

Here’s Scott Alexander:

It decreased COVID hospitalizations by about 30%… I and many others take Luvox pretty seriously. At this point I’d give it 60-40 it works.

Here’s Derek Lowe.

Here’s the Johns Hopkins guidelines, which recommend fluvoxamine for “Ambulatory Patients Early in Disease at Risk of Developing Severe COVID-19”. They also note that this might be a bad idea if you’re pregnant.

And that’s it. Again, not medical advice.

I'm Donating 90% of my Recent Income to Slime Mold Time Mold

Since quitting my job to blog full time, I hadn’t made much money until recently. As of last month, some readers are paying me to advise/edit their new blogs, and the Center for Effective Altruism is paying me to write the EA Newsletter.

So far, I’ve made $1890, with another $1200 due soon. Of the $3090 total, I’m donating $2800 to Slime Mold Time Mold to support their research agenda on the environmental contaminant theory of obesity.

I plan to continue donating 90% of my income to this research until one of the following:
A) The authors are funded by a billionaire patron
B) The authors are funded by a grant endowed by a billionaire patron
C) The authors tell me they don’t need any more money

If you would like to join me, you can get in contact with me at applieddivinitystudies@gmail.com, with the authors directly at slimemoldtimemold@gmail.com, make a recurring donation on Patreon, or make a one time donation through Paypal.

Some of you might ask “why are you giving money to obesity research? Shouldn’t you be spending it on bednets or AI safety, or at least on GiveDirectly?”

First of all, I’m not giving my money to obesity research in the abstract, which I agree is quite low impact. I’m giving it to specific people (Slime Mold Time Mold) to pursue a specific research agenda (the contamination theory of obesity).

Second, though I’ve previously expressed some doubts about the impact of donations to EA causes already funded by large foundations, I agree that GiveDirectly is a pretty safe bet to achieve fairly massive impact at a very low cost. There are many very poor people, and transferring wealth to them (either through health interventions or direct giving) is still incredibly low hanging ethical fruit.

Having said all that, at this particular margin, I genuinely think that donating to Slime Mold Time Mold is more important. It’s not just the obesity research, it’s the opportunity to see a genuine scientific revolution play out in real time, and I would pay just about anything to get a front row seat.

[EDIT 01/18/2022] In an earlier version, I included my estimates of how likely the SMTM theory is to be correct. I omitted them for brevity, but it has been recommended that I include them here. So okay:

In 2021, the NIH spent around $1.2 billion on obesity research, with another $280 million on childhood obesity research. By one estimate, the US spends $190 billion on obesity-related health care expenses. And those are just direct financial costs! That doesn’t even count indirect costs from lost life, lost happiness, or anything else.

So, extremely back-of-the-napkin, the SMTM research is probably worth funding, compared to an NIH baseline, if it has something like a 0.1% chance of resulting in a breakthrough at a cost of under $10 million. Where “breakthrough” basically means that it becomes the dominant paradigm and enables future NIH spending to be much more effective.

Having said that, we should be cautious of pascalian arguments, and I would not fund SMTM if I thought they only had a 0.1% chance of success. My actual view is that the odds of their theory (or something close to it) being correct are around 10%. That’s not exceptionally high, but it’s 100x higher than it needs to be.
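Spelling that arithmetic out (the value of a breakthrough isn’t stated above, so I’m backing it out from the 0.1% and $10 million figures; it comes to roughly $10 billion, a small fraction of a single year’s direct costs):

```python
# The back-of-the-napkin funding math from above.

cost = 10_000_000          # assumed cost of funding the SMTM agenda
breakthrough_value = 10e9  # implied value of a breakthrough (~5% of one year's
                           # $190B in direct obesity-related health costs)

break_even_p = cost / breakthrough_value
print(f"Break-even probability: {break_even_p:.2%}")  # 0.10%

my_credence = 0.10         # my estimate that the theory is roughly correct
print(f"Margin over break-even: {my_credence / break_even_p:.0f}x")  # 100x
```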

What about using GiveDirectly as a baseline and looking at QALYs instead of financial costs? This depends a lot on how much you think your donation counterfactually enables more research from SMTM. But okay, say 3 million people “die from obesity” each year, such that ending the obesity epidemic a year earlier is worth 3 million QALYs. It’s probably more, because you don’t just buy an extra year, but let’s be conservative.

Off the top of my head, the minimal cost per life saved is ~$5000, which is supposed to be equivalent to 35 QALYs, so that’s $143/QALY. But I have some concerns about crowding out large donors, so say I donate to GiveDirectly instead, which is more like $1000/QALY.

So the opportunity cost of $100k is 100 QALYs. Which means to justify a donation of that size to SMTM, we would have to believe that they’ll accelerate the end of the obesity epidemic by… about 18 minutes.

Seriously, that’s how the math works out. It’s 3M QALYs per year, which is about 8,200 QALYs per day, or 340 per hour, so 100 QALYs is about 18 minutes. Alternatively, you have to believe that there’s a 0.1% chance that the donation will accelerate the end by about two weeks.
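Here’s the whole QALY chain in one place, as a sketch using only the numbers above:

```python
# The QALY back-of-the-envelope from above, step by step.

qalys_per_year = 3_000_000            # assumed annual QALY burden of obesity
qalys_per_day = qalys_per_year / 365  # ~8,200
qalys_per_hour = qalys_per_day / 24   # ~340

cost_per_life = 5_000                 # ~minimal cost per life saved
qalys_per_life = 35
print(f"Best case: ${cost_per_life / qalys_per_life:,.0f}/QALY")  # ~$143/QALY

cost_per_qaly = 1_000                 # rougher GiveDirectly baseline
donation = 100_000
opportunity_cost = donation / cost_per_qaly  # 100 QALYs

minutes_needed = opportunity_cost / qalys_per_hour * 60
print(f"Acceleration needed: {minutes_needed:.0f} minutes")  # ~18 minutes

days_at_low_odds = opportunity_cost / 0.001 / qalys_per_day
print(f"At 0.1% odds: {days_at_low_odds:.0f} days")  # ~12 days, about two weeks
```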

Obviously this is all absurdly abstracted, and you shouldn’t take QALY estimates too literally, and so on. I’m just saying: we should at least try to make up numbers, and when we do, we get that my decision is not obviously wrong.