Don't Read the News

Following my recent criticism of a Stat article, you may be wondering, who can we trust?

I have written several harsh criticisms in the past, railing against Substack and Lambda School. Let me be clear: in none of these cases do I mean to imply that I prefer the alternative. I am merely attempting to correct simple factual errors and reduce the status of what I perceive to be over-hyped institutions in my particular corner of the internet.

So sure, Substack has its problems, but I am not telling you to run off and use Wordpress! [1] Lambda School’s CEO has lied, but that does not mean you should attend a competing bootcamp, or get a 4-year CS degree. [2]

Analogously, there was a bad Stat article, but I am certainly not recommending that you go off and read CNN or Huffpost or whatever. The only reason I don’t critique those other sources is because I already know they’re unreliable, and I assume you do as well. [3]

And yet, presumably, you would like to “stay informed”. So what’s the solution?

One option is to rigorously fact check everything you read, but that’s cumbersome and still bottoms out somewhere. I found errors in the Stat article, but then took reports from the CDC at face value. More importantly, you just don’t have the time.

Instead, I propose a much simpler solution: don’t read the news.

Could it be that simple? Surely there are serious repercussions for being so dangerously and completely uninformed?

Here are some of the headlines on the front page of the New York Times:

  • See the complete list of insults President Trump posted on Twitter from 2015 to 2021.
  • Bryan Cranston tells Kara Swisher why he won’t play Donald Trump.
  • Biden’s Stimulus Plan Will Bring Relief, but There’s One Flaw
  • Joe Did It. But How?
  • Democrats Are About to Control Congress. What Will They Do?
  • Man Lived Undetected at O’Hare Airport for 3 Months, Officials Say
  • Improve Your Life With These Tiny Chores

I compiled those on January 18th when I wrote a first draft of this post. On the 25th as I prepare to publish, it’s not much better:

  • Are We Ready for a Monday Without Trump?
  • I Want to Call the Capitol Rioters ‘Terrorists.’ Here’s Why We Shouldn’t.
  • Something Special Just Happened in Russia
  • Ninja, a Gaming Superstar, Has a Message for Parents
  • Rupert Murdoch, Accepting Award, Condemns ‘Awful Woke Orthodoxy’

Wow! How can you not click those? How did Joe do it? What’s the one flaw of his stimulus plan? What are these tiny chores I can use to improve my life? Why won’t Bryan Cranston play Trump?

This is not news. It’s clickbait, and it’s bad for you. I don’t mean to pick on the NYT. It’s among the best of the popular outlets, and it is still horrible.

Here’s Aaron Swartz writing in 2006:

None of these stories have relevance to my life. Reading them may be enjoyable, but it’s an enjoyable waste of time. They will have no impact on my actions one way or another.

….With the time people waste reading a newspaper every day, they could have read an entire book about most subjects covered and thereby learned about it with far more detail and far more impact than the daily doses they get dribbled out by the paper. But people, of course, wouldn’t read a book about most subjects covered in the paper, because most of them are simply irrelevant.

….I have not followed the news at least since I was 13 (with occasional lapses on particular topics). My life does not seem to be impoverished for it; indeed, I think it has been greatly enhanced.

You might think such a person would be civically disengaged to a slovenly degree, but that couldn’t be further from the truth! In his brief life, Aaron led a successful campaign against SOPA, helped create Creative Commons and attempted to create a proto-Sci-Hub. On a less political note, Aaron is credited with the co-creation of Reddit, RSS and Markdown.

It was not despite, but thanks to his news-aversion that Aaron was able to build projects with continued relevance a decade later. Rather than being caught up in the news of the day, he worked on things that actually matter in the long run.

And so convinced by his arguments and inspired by his life, I also don’t read the news. I stopped in 2013 when I first came across his writing, and have never looked back. Like Aaron, I find this has substantially improved my quality of life.

Frequently Asked Questions

I’m still not convinced; news has a lot of merit, and you haven’t come close to a full refutation of its supposed benefits.
For a longer treatment, see Rolf Dobelli’s Avoid News, an excellent and persuasive perspective. There’s more on the harm of news in Aaron’s full piece, as well as his earlier All News is Bad News.

How do you know anything about what’s going on?
I do read, just not the news. I have a long research agenda, and read according to the work I want to publish in the next few months. I do subscribe to a couple regular sources, but only other blogs that publish infrequently. I also skim Marginal Revolution, which takes all of 3 minutes.

But mostly, my friends and family tell me about the news, because they are all reading it. If something truly important happens, I am fairly confident that I will find out.

Isn’t that unfair? Aren’t you just shifting the burden of labor onto your friends, and benefiting from their curation?
Yes, it is unfair. That’s why I have attempted to propose a better scheme: N friends will take turns reading the news and update the others if anything important happens, while each expending just 1/Nth of their current effort. To date, no one has accepted this proposal, or even considered it seriously.

But for the most part, the news simply isn’t important. The 2020 presidential election had no immediate impact on me, nor did the recent inauguration. Rather than anxiously waiting for live updates, I would rather see well-reasoned retrospectives days or even months after the event. I avoided all political news after the 2016 presidential election, but then read Edward Luce’s The Retreat of Western Liberalism. Similarly, I avoided nearly all Covid news once I had already committed to a fairly strict lockdown, then read Apollo’s Arrow.

You don’t know what you’re missing!
I do occasionally sample the news for this exact reason, and regularly find that I am missing approximately nothing of consequence.

What if I have to make a decision informed by current events?
Occasionally, a genuinely important event will surface.

Say you may need to make a decision about whether to flee the country to avoid Covid. In that case, reading the news is still not important. You should identify the matter at hand, consider it carefully, and then make a decision. At this point, you may wish to consult news articles, but that is very different from reading them regularly or following a specific outlet. You are deciding what to view and have a specific purpose in mind, rather than being passively fed content that simply makes you miserable and anxious.

What about your civic duty? It’s important to be an informed voter.
Although it’s a short article, writing The Epistemic Pain of Prop 22 took a week of full-time background research. I am fairly confident that I spent more time thinking seriously about my ballot than 99% of voters.

Again, this has nothing to do with reading the news. When an election comes around, I encourage you to become informed and make the best decisions you can! That may involve reading voter guides, thinking deeply about your values, and yes, maybe even consulting the news. But even here you are free to remain ignorant on every other day.

Note that even this level of engagement is only acceptable if you are a genuinely conflicted voter! If you were pretty sure every day of the last 4 years that you were going to vote against Trump, you have no excuse for trying to “stay informed”, as your decision had already been made.

I read the Swartz/Dobelli articles, and I now think even books and blogs cause the same harms as the actual news.
That’s fair. I’ll admit to sometimes being sidetracked by Marginal Revolution, and can relate to this quote from the Swartz piece:

Edward Tufte notes that when he used to read the New York Times in the morning, it scrambled his brain with so many different topics that he couldn’t get any real intellectual work done the rest of the day.

In the past, I have had to cut down my media diet further to avoid distractions. This choice was easy to execute because I do not receive automatic newsletters, so reading those outlets is an intentional choice every time.

I’ve taken the further action of blocking some sources on my main browser, such that I’m forced to open a different application, wait a few seconds, and then navigate to the site. This is a minor burden, but it’s enough to prevent me from getting locked into compulsive habits.

In considering your media diet, think not only about what value it brings you, but about the potential harm. I can skim today’s posts on MR in just a couple minutes and see if anything catches my interest. There is rarely anything aggravating that will ruin my mood or “scramble my brain”.

What about listening to the news or watching it on TV?
Even worse. It is too easy to be stimulated by things that don’t matter, and too hard to skim or skip ahead.

Reading the news is enjoyable.
It might be stimulating in the moment, but that’s not the point. The point is that it’s detrimental to your overall quality of life, and the tradeoff isn’t worth it.

What about particular news stories with breaking updates?
I’ll admit to neurotically refreshing the NYT map every 30 seconds on election night just like the rest of you. Though I look back on this as a tremendous waste of mental energy, it really was fun to participate in the collective orgy of anxiety and madness.

But think of this as an occasional vice, the way you think about gambling or drinking. It is a fun thing to indulge in on occasion, but it is not a good way to live your life.

But seriously, what do you read?
I read Marginal Revolution, Alexey Guzey’s Twitter, Byrne Hobart’s Medium, Gwern’s newsletter, and a few blogs. I occasionally read Hacker News.

Occasionally, upon finding a great new source, I will binge read the best pieces. When I first found out about Everything Studies, I felt nearly enlightened. But after reading his archives, I feel that I’ve properly internalized the blog’s worldview. I still check it occasionally, but the marginal impact of each new post on my thinking is fairly low.

Sometimes blogs have blogrolls that list other blogs the author likes. These are also great sources of new writing that don’t require you to actually read the news.

For what it’s worth, I’ve enjoyed the blogs from Nintil, Andy Matuschak, Dormin, Devon Zuegel, Mark Lutter, Vitalik, Dan Luu, Ben Kuhn, Zvi, sam[ ]zdat, Sarah Constantin, The Scholar’s Stage, Aaron Swartz and Scott Alexander.

I would love to see a Best Of compilation for Matt Levine or Andrew Gelman, please let me know if these exist. Both seem like good sources, but the backlogs are simply too big.

What do you read in the morning? How do you start your day?
Because I’m unemployed, I wake up without an alarm and don’t consume caffeine. That means by the time I’m out of bed, I’m ready to do whatever I’ve planned for the day, and do not need to spend the first hour of my morning “waking up” or shaking off grogginess.

What about “dead time”? What do you do while you’re commuting or waiting for water to boil?
Since I’m unemployed and under fairly strict lockdown, I have very little dead time. When I do have dead time, I think and let my mind wander.

Why do so many people report having their best thoughts in the shower? Probably because it’s the only time we have without artificial stimulation. If you listen to podcasts in the shower, you’re cheating yourself. There’s nothing magic about water; every other piece of “dead time” could be equally valuable if we weren’t so intent on cramming it full of useless trivia.

This isn’t a question, but I’m still not totally convinced.
Seriously, go read the earlier articles:

Aaron Swartz: I Hate the News

Rolf Dobelli: Avoid News

Then go read Andy Matuschak’s Why Books Don’t Work and consider how many of his arguments apply even more strongly to the news.

Should I unsubscribe from Applied Divinity Studies?
I don’t send out emails for all my posts, only the ones I really take pride in. That ends up being about twice a week. If you feel that it’s a serious distraction, you should filter these emails, and read them only when you have time.

I’ve also made an effort to write on things that have lasting importance. Even when I address a recent event, as in Was Vaccine Production Actually Delayed?, it is intended not as an object-level claim, but as a meta-level warning against getting caught up in a broader trend without careful thought.

Having said that, I wouldn’t subscribe to my own blog, nor do I subscribe to many of the blogs I like. I read the backlogs, manually check the domain when it comes to mind, and read new posts when it’s convenient for me, without the stress of watching newsletters pile up in my inbox.

That might sound wasteful, but it’s far less wasteful than the alternative.

[1] Having said that, Ghost really does seem good if you want a paid newsletter with flat fees, your own domain, and customization beyond a theme color.

[2] I am also not telling you not to do those things.

[3] I have occasionally cited a mainstream news source at face value. In these cases I am careful to only use it for illustrative purposes such that the quality of the piece as a whole does not hinge on the reliability of a single source.


For all my complaints about Substack, I was overjoyed to see Scott’s new post today.

Among many other things, he writes:

As I was trying to figure out how this was going to work financially, Substack convinced me that I could make decent money here.

I don’t know exactly what Scott’s calculus was, but it sounds like Substack’s monetization was part of it. If so, we owe them a huge debt of gratitude.

Having said that, it sucks that Substack enforces stylistic homogeneity. It sucks to see Slate Star Codex get sucked into the uniform aesthetic blob.

While the crypto people get to work on true decentralization, end-users already have tremendous control over at least one aspect of their online experience: CSS.

So I installed a Chrome Extension that makes editing CSS easy, copied over some styles from an old page, and tada:

You don’t need to know anything about CSS to use these styles. Just follow a few steps:

article div p {
  color: #333;
  font: 12px/20px Verdana, sans-serif;
}

/* missing selector */ {
  font-size: 16px;
  line-height: 1.3em;
  margin-bottom: 10px;
  text-transform: uppercase;
  letter-spacing: 1px;
  font-family: Georgia, "Bitstream Charter", serif;
}

/* missing selector */ {
  color: #888;
  font-size: 10px;
  font-family: Verdana, sans-serif;
  letter-spacing: 1px;
  background: #f9f9f9;
  border: 1px solid #eee;
  padding: 5px 7px;
  display: inline;
  text-transform: uppercase;
  text-shadow: 1px 1px 1px #fff;
}

/* missing selector */ {
  content: "Posted ";
}

/* missing selector */ {
  content: " by Scott Alexander";
}

.single-post {
  border: 1px solid #D5D5D5;
  border-radius: 10px;
  background: #fff;
  padding: 20px 28px;
  margin-bottom: 10px;
}

.single-post-container {
  background: rgb(240, 240, 240);
  padding: 10px 0px;
}

.single-post a {
  color: #0066cc;
  text-decoration: underline;
}

.post {
  padding: 0;
}

.subtitle {
  font-size: 12px;
  padding-bottom: 8px;
}

.main-menu .topbar .container .headline {
  text-decoration: none;
}

.main-menu .topbar .container .headline .name {
  font-size: 43px;
  max-height: 100px;
  color: white;
  font-family: 'Raleway', Open Sans, Arial, sans-serif;
  text-align: center;
  letter-spacing: 2px;
  text-decoration: none;
}

.topbar {
  background: linear-gradient(to bottom, rgba(139,171,232,1) 0%, rgba(79,115,193,1) 100%);
  text-decoration: none;
}

button.button.primary.subscribe-cta.subscribe-btn {
  display: none;
}

.container {
  justify-content: center;
}

div.buttons.notification-container {
  filter: brightness(3);
  transform: scale(.7);
}

img.logo {
  margin-right: 30px;
}

button.comments-page-sort-menu-button {
  background: transparent;
}

table.comment-content tr td {
  border: 1px solid #ddd;
  padding: 10px;
  border-radius: 10px;
  flex-grow: 1;
  background: #fafafa;
}

table.comment-content tr td.comment-head {
  border: none;
  flex-grow: 0;
  background: white;
}

table tr {
  display: flex;
}

/* missing selector */ {
  margin-left: 10px;
}

.comments-page {
  background: white;
  padding-top: 10px;
}

.comment-meta span:first-child a {
  font-weight: bold;
  color: black;
  text-decoration: none;
}

.comment-meta span:first-child a:after {
  content: " says";
}

.comment-meta span:nth-child(2) a {
  color: #888;
  text-decoration: none;
  display: block;
  padding-top: 8px;
  padding-bottom: 6px;
}

.comment-actions span a {
  color: #888;
}

.profile-img-wrap img {
  border-radius: 0px;
  height: 40px;
  width: 40px;
  position: relative;
  right: 8px;
}
They also work on the main page:

As well as the comments section:

Of course, these won’t work in your email client, you have to actually be on the domain. And if Scott moves over to a custom domain, you’ll have to follow the steps again there.

Thanks to this comment on Hacker News for inspiring the idea.


Edit 01/22: An earlier version of this post recommended more Chrome extensions I enjoy. I’ve since been told that one of them recently became malware. Sorry about that.

Contra StatNews: How Long to Herd Immunity?

Warning: Speculative armchair epidemiology. All emphasis mine.
See also Youyang Gu’s projection.

Summary: In an article for Stat, Dr. Zach Nayer misrepresents research, makes indefensibly flawed assumptions, and fumbles basic arithmetic. Per the CDC, actual US Covid cases are 4.6x higher than reported overall, and around 2.4x higher since September. Using improved parameters, our toy model finds that herd immunity may occur in less than 4 months, although neither estimate should be taken too seriously. It all depends on the transmissibility of the new strain, as well as our ability to ramp up vaccine production, distribution and acceptance.

1) Dr. Nayer Misrepresents the Evidence on Monthly Infection Rates

Last month, Dr. Zach Nayer [1] at Stat published an estimate of time to herd immunity, suggesting that without vaccines it may take as long as 55 months.

The model itself is straightforward. Assume we need to hit 75% immunity, then figure out when we’ll get there based on existing prevalence and monthly infection rate:
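In code, the whole model is one line of arithmetic. Plugging in Nayer’s own published parameters (75% target, 9.3% prevalence, 1.2% of the population infected per month) recovers his headline figure:

```python
# Nayer's model: months until cumulative immunity reaches the target,
# given a fixed monthly infection rate and no vaccines.
def months_to_herd_immunity(target, prevalence, monthly_rate):
    return (target - prevalence) / monthly_rate

months = months_to_herd_immunity(0.75, 0.093, 0.012)  # ≈ 54.75, i.e. "55 months"
```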

Unfortunately, Nayer’s parameters are totally off. Citing a study which found antibody prevalence of 9.3%, Nayer writes:

In late September, a Stanford study estimated that 9.3% of Americans have antibodies against SARS-CoV-2…. If the base prevalence at the end of September — eight months from the onset of the epidemic in the United States on January 21, 2020 — was 9.3%, the coronavirus has an infection rate of approximately 1.2% of the population per month.

But take a closer look. Although the study was published in September, it was based on data collected in July. As the authors make explicit:

Our goal was to provide a nationwide estimate of exposure to SARS-CoV-2 during the first wave of COVID-19 in the USA, up to July, 2020

Instead of dividing 9.3% by an eight month range, Nayer should have used the 6 months from January through July. This yields an estimated monthly infection rate of 1.6% rather than 1.2%.

To his credit, Nayer attempts to confirm this result against another source of data, but fumbles the arithmetic. He writes:

one study [estimates] 52.9 million infections in the U.S. from February 27 to September 30, or an infection rate of 1.3% per month.

52.9 million infections is 16% of the US population. Over a 7 month time period, that’s a monthly infection rate of 2.3% per month, nearly double Nayer’s result.
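Checking that arithmetic (the US population of roughly 330 million is my assumption; the article doesn’t state the figure Nayer used):

```python
US_POP = 330e6               # assumed US population

infections = 52.9e6          # Reese et al., Feb 27 - Sep 30
share = infections / US_POP  # ≈ 0.16, i.e. 16% of the population
monthly_rate = share / 7     # seven months, so ≈ 2.3% per month
```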

Of course, the biggest problem with Nayer’s parameters is not even that he’s misinterpreted historical studies, it’s that he naively projects them into the future.

Nayer’s prediction isn’t based on linear growth or exponential growth, it’s based on 0 growth. He assumes that historical cases will be a good proxy for future cases, including the February base rate of 17 total confirmed monthly cases, and then uncritically takes this base rate as a future projection.

2) What is the Actual Monthly Infection Rate?

Rather than start in January, we can consider the monthly infection rate for December, the month Nayer’s article was published. That month, cumulative confirmed cases rose from 13.8 million, up to 20 million, for 6.2 million new cases, or a monthly infection rate of 1.9%.

But remember, confirmed cases are not a good proxy for actual infections. Nayer’s cited research reported 9.3% antibody prevalence in July, equivalent to 31 million total cases. Meanwhile, only 4.56 million cases had actually been confirmed by July 31st, suggesting a confirmed-to-actual multiple of 6.8x. Using this multiple, December’s 6.2 million confirmed cases represent 42.16 million actual cases, for a 12.8% monthly infection rate.
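The same calculation, spelled out (again assuming a 330 million US population):

```python
US_POP = 330e6                      # assumed US population

# Anand et al.: 9.3% antibody prevalence in July ≈ 31M actual infections,
# against only 4.56M confirmed cases by July 31st.
multiple = 31e6 / 4.56e6            # ≈ 6.8x confirmed-to-actual

# December: 6.2M new confirmed cases, scaled up by that multiple.
actual_dec = 6.2e6 * 6.8            # ≈ 42.16M actual infections
monthly_rate = actual_dec / US_POP  # ≈ 12.8% of the population
```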

But again, that data is from July, and testing may have improved since such that a greater number of actual cases are correctly reported.

In late November, CDC researchers set out to estimate cumulative incidence by correcting for undercounting. They report 52.9 million total infections through the end of September, even though only “6.9 million laboratory-confirmed cases of domestically acquired infections were detected and reported”. That implies a multiple of 7.67x, or as the authors write:

This indicates that 1 in 7.7, or 13% of total infections were identified and reported…. Our preliminary estimates indicate approximately 1 in 8, or 13%, of total SARS-CoV-2 infections were recognized and reported through the end of September

If this multiple held true in December, it would imply 47.7 million new infections, or 14.5% of the population.

Most recently, the CDC reports 83.1 million total infections through December. Since there were 20 million confirmed cases, that’s a multiple of 4.2x, and an actual monthly infection rate for December of 7.8%. [2] They also report an overall 4.6x multiple between total and reported COVID-19 infections.

Having said that, if we were undercounting by 7.7x through September, and by 4.2x overall, that implies we were undercounting by less than 4.2x after September. With 52.9 million actual cumulative cases as of 9/30 and 83.1 as of 12/31, we can infer 30.2 million actual new cases in between. By comparison, confirmed cumulative cases rose from 7.27 million to 20.03 million in the same period, for 12.76 million confirmed new cases. Using this estimate, the confirmed-to-actual multiple since September is 2.4x.
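That inference is just a difference of differences:

```python
# Cumulative actual infections: 52.9M (Reese et al., through 9/30)
# vs 83.1M (CDC, through 12/31).
actual_new = 83.1e6 - 52.9e6           # 30.2M actual new cases, Oct-Dec

# Cumulative confirmed cases over the same window (Our World in Data).
confirmed_new = 20.03e6 - 7.27e6       # 12.76M confirmed new cases

multiple = actual_new / confirmed_new  # ≈ 2.4x since September
```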

Here’s a table of monthly infection rates, depending on how you measure it:

Estimate                           Monthly Infection Rate (% of US Population)   Source
Dr. Nayer’s Stat Article           1.3%
Anand et al., January–July         1.6%
Reese et al., February–September   2.3%
December, confirmed cases          1.9%                                          Our World in Data
December, 6.8x multiple            12.8%
December, 7.7x multiple            14.5%
December, 4.6x multiple            8.7%
December, 2.4x multiple            4.6%                                          Computed, see previous paragraph

(Note that “source” does not indicate that the literal claim about monthly infection rate was made, merely that it’s the source of the relevant data used for the estimate.)

Of these, I think 4.6% is the best estimate, though note that there is a lot of uncertainty as to which multiple applies best for December, as well as underlying uncertainty in the original studies. [3]

In any case, Nayer’s 1.3% estimate was substantially off. It was the result of flawed arithmetic, a misreading of his cited study, and the incredibly naive assumption that the January - July average would project into the future with no growth.

3) Conclusion: How Long to Herd Immunity?

Using the CDC’s estimate of 25% base prevalence, a monthly infection rate of 4.6% and Nayer’s original model, we’ll achieve 70% immunity in 8 months.

Incorporating further information about vaccinations, antibody loss and a more pessimistic 80% threshold, my best guess is herd immunity by July 3rd. You can find detailed explanations for these parameters in the appendix.

You should not take these estimates too seriously.

Here’s an abbreviated table of results based on vaccine acceleration rate (how many more vaccinations today than yesterday), and herd immunity threshold:

            10k/day   30k/day   50k/day
  70%       6/4       4/27      4/9
  80%       6/27      5/11      4/21
  90%       7/19      5/24      5/1

Edit: After talking to Alvaro again, I am less confident about antibody loss. See footnote 6 for a revised table.

I hope this is of interest, but do not let the table of results fool you into thinking this is a rigorous model with well tested assumptions. It assumes, in decreasing order of certainty:

  • Vaccines last several years
  • Antibodies last 8 months [6]
  • One administered dose is “worth” 50% as much as a full infection
  • There is a 2.4x multiple between December’s confirmed cases and actual infections
  • No one who already has antibodies receives a vaccine
  • We administer 50,000 more vaccines each day than the day before [7]
  • Confirmed cases remain at 200,000 / day

In particular, the last two are totally up in the air.

There is a new strain, soon to be a new administration, and we can still do dramatically better than we have done so far. Predictions are helpful, but the important thing is to actually create the future we want.

Even stupid models can be useful. In this case, I hope the findings illustrate how sensitive our timeline is to an accelerated vaccination schedule, and highlight the urgency of ramping up distribution.


Original Article
Models in Google Sheets
Data from Our World in Data on cases and vaccines

  • 6.8x: Anand et al.
  • 7.7x: Reese et al.
  • 4.6x: CDC
  • 4.2x: CDC, computed based on 83.1 million actual vs 20 million confirmed
  • 2.4x: Computed, “With 52.9 million actual cumulative cases as of 9/30 and 83.1 as of 12/31, we can infer 30.2 million actual new cases in between. By comparison, confirmed cumulative cases rose from 7.27 million to 20.03 million in the same period, for 12.76 million confirmed new cases. Using this estimate, the confirmed-to-actual multiple since September is 2.4x.” The 52.9 is from Reese et al, 83.1 from CDC. Confirmed cases from Our World in Data.

Appendix: Details on Parameter Values and Questionable Assumptions

So far, our model has relied on a number of untenable assumptions:

  1. Cases will remain at December levels
  2. Antibodies last indefinitely
  3. There are no vaccinations

Forecasting cases
Cumulative cases have been rising exponentially at a fairly consistent rate since April, so it might feel easy to project into the future.

Having said that, I am not very confident that the trend will hold. Given that we are ramping up vaccine distribution, facing a more transmissible strain, and launching a new administration, there is much more uncertainty to come. [4]

I’ll continue to use December’s estimated rate of 4.6%, and accept that I am making the same mistake as Nayer in assuming no growth, with the hope that I am at least doing so with better reason. Let this be an additional warning that this model is purely for illustrative purposes, and should not be taken too literally.

With regards to antibodies, there appears to be some ongoing controversy. A recent study from Science Immunology found “infection generates long-lasting B cell memory up to 8 months post-infection”; however, a second study suggests it might be shorter. Discussion of the conflict here.

If antibodies expire after 8 months, we will start to see more and more re-infections as time goes on. There were 1.5 million confirmed cases 8 months ago, which is 11.6 million using the 7.7x multiple. It is possible all of their antibodies have now “expired”.

If we have to wait another 6.2 months, everyone infected until November 25th could lose their antibodies as well. That’s 12.9 million confirmed cases, or 59.3 million actual cases using the CDC’s 4.6x multiple.

As a first approximation, that’s another 5 month delay, but note that it cascades. As we wait to “make up” for the 59.3 million lost antibodies, more and more people’s antibodies will “expire”.

At a monthly infection rate of 4.6% and an 8 month “shelf-life” for antibodies, we will never be able to hit more than 36.8% immunity at any time. Under this model, we never achieve herd immunity at current infection rates, even for conservative estimates.

In absolute numbers, 70% herd immunity would mean 231 million people with antibodies simultaneously. If antibodies last 8 months, that means we would need to hit 29 million cases per month, and sustain that continuously for 8 months. That’s all assuming that everything immediately clears up on the day we achieve herd immunity.
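Both numbers fall out of the same steady-state arithmetic (330 million population assumed):

```python
US_POP = 330e6        # assumed US population
monthly_rate = 0.046  # December estimate, using the 2.4x multiple
shelf_life = 8        # months of antibody protection

# Only the last 8 months of infections are immune at any moment.
max_immunity = monthly_rate * shelf_life      # 0.368, i.e. the 36.8% cap

# To hold 70% immunity through infection alone:
simultaneous = 0.70 * US_POP                  # 231M people with antibodies
required_monthly = simultaneous / shelf_life  # ≈ 29M cases per month
```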

Given our current growth rate, and the increased transmissibility of a new strain, those numbers might be more achievable than they sound. Our recent high of 0.25 million cases in a single day (7-day rolling average) extrapolates to 7.6 million cases per month. With the 2.4x multiple, that’s 18.2 million cases.

Although I say “achievable”, this would not actually be a good thing. We would defeat the virus, but only through immense human sacrifice.

Okay, so it’s looking quite bad. Can vaccines save us? You may have heard that vaccines are 90% or 95% effective, but that’s for preventing symptoms, not preventing transmission through asymptomatic infection.

A Moderna report to the FDA writes:

Amongst baseline negative participants, 14 in the vaccine group and 38 in the placebo group had evidence of SARS-CoV-2 infection at the second dose without evidence of COVID-19 symptoms. There were approximately 2/3 fewer swabs that were positive in the vaccine group as compared to the placebo group at the pre-dose 2 timepoint, suggesting that some asymptomatic infections start to be prevented after the first dose.

More recently, Tyler Cowen cites this article claiming that the Pfizer vaccine is very effective in preventing transmission. The author writes “Data from 102 subjects shows 98% of them developed significant presence of antibodies; survey’s editor says participants most likely won’t spread the disease further”. I am not sure what “most likely” means, but I’ll take it at face value.

Okay, so we have data on sterilizing immunity and vaccine administration; the problem is we don’t know how much of the latter is first vs. second doses. I also don’t know if being “66% immune” is worth 66% as much as full immunity. So a few simplifying assumptions:

  • Each vaccine dose is “worth” 50% of full immunity
  • No one who already has antibodies receives a vaccination
  • We administer 50,000 more vaccines each day than the day before [5]

Using this model (available here), I estimate 70% immunity on April 2nd, and 90% immunity on April 24th.
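For illustration, here is a toy version of that day-by-day model. The starting rate and other defaults below are placeholder values I’ve chosen, not the actual spreadsheet parameters, so treat the output as a shape rather than a schedule:

```python
US_POP = 330e6  # assumed US population

def days_to_threshold(threshold, accel=50_000, start_rate=500_000,
                      base_immune=0.25, dose_worth=0.5,
                      daily_confirmed=200_000, multiple=2.4):
    """Day-by-day ramp: vaccinations accelerate linearly while confirmed
    cases stay flat. Ignores antibody loss and any overlap between the
    vaccinated and the previously infected."""
    immune = base_immune * US_POP
    rate = start_rate
    days = 0
    while immune < threshold * US_POP:
        immune += rate * dose_worth + daily_confirmed * multiple
        rate += accel
        days += 1
    return days
```

Varying accel between 10,000 and 50,000 shows the same sensitivity as the table above: the acceleration rate dominates the timeline far more than the exact herd immunity threshold does.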

With sufficiently high vaccination, it turns out lost antibodies are just not that big a deal. 8 months before April 24th was August 24th, at which point we had 5.73 million confirmed cases. Using the CDC’s 7.7x multiple, that’s 44.1 million actual.

But even if 27 million people lose their antibodies, our model has vaccinations at nearly 6 million / day by April 24th, so the delay isn’t that costly. Incorporating antibody loss, we only get pushed back to April 17th for 70% immunity, and May 6th for 90%.

There is also a cascading loss of antibodies between April 24th and May 6th, but this only pushes out estimates by another day or so.

What if 50,000 more vaccines per day is too optimistic? Alvaro mentions this Metaculus estimate giving 82.5 million by May 13th. Note that the 82.5 million refers not to administered doses, but to people who have completed both vaccinations, so this is 165 million doses total. That’s consistent with around 10,000 more vaccines per day, rather than the 50,000 I suggest.

Frequently Asked Questions

Why do you care? Stat isn’t an academic publication and it’s not peer reviewed.
No, but they are widely acclaimed, and often cited on Marginal Revolution. Until now, I would have felt confident taking their word at face value.

How poorly does this reflect on Stat?
To Stat’s credit, Dr. Nayer is not a regular contributor. His forecast was also not presented as a serious prediction; it was mostly intended to illustrate the importance of vaccines. Even so, it is bad that he made these basic errors, and it is bad that Stat did not fact check his writing.

Anyone can make mistakes. If you’re emboldened by my findings, you should go and run checks against more articles and try to find additional errors. Perhaps this is a one-off mistake, or perhaps there is a more systematic problem.

Why do you use different multiples at different points?
The CDC estimates a 4.6x multiple overall, but previously reported a 7.7x multiple for data up to September. Based on those numbers, I inferred a 2.4x multiple for data after September.
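The inference is a weighted average in reverse: total actual infections must equal the overall multiple times total confirmed cases, so the post-September multiple is whatever balances the 7.7x applied to pre-September cases. A sketch, using synthetic case counts since the post doesn’t state the exact split:

```python
def post_period_multiple(overall_mult, pre_mult, pre_cases, post_cases):
    """Multiple for the later period implied by the overall multiple and
    the earlier period's multiple."""
    total_actual = overall_mult * (pre_cases + post_cases)
    pre_actual = pre_mult * pre_cases
    return (total_actual - pre_actual) / post_cases

# synthetic example: equal confirmed cases in each period
m = post_period_multiple(4.6, 7.7, 10_000_000, 10_000_000)  # -> 1.5
```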

In section 4, I use the 7.7x multiple for cases before September to estimate antibody loss, and the 2.4x multiple for December’s cases, which serve as our monthly infection rate. I also use the overall 4.6x multiple in one paragraph referring to data across a broad range of time:

If we have to wait another 6.2 months, everyone infected until November 25th could lose their antibodies as well. That’s 12.9 million confirmed, or 59.3 million actual using the CDC’s 4.6x multiple.

Okay, but really, when can I go outside?
I have no idea. If you put a gun to my head, I would say cases rise more than expected, and vaccinations go worse than expected, but I don’t know how those factors balance out. Maybe early summer, but it is still in our collective power to do better.

This isn’t a question, I just need a reason to feel optimistic.
I have been assuming a flat rate of infections, but historically they have grown quite rapidly. If that remains true, the timeline would be greatly accelerated. A new strain might increase infections as well. That’s all bad news for America, but if you’re a cautious introvert taking appropriate precautions, it might be good news for you.

There is also hope on the vaccine side. Biden claims the Trump administration is to blame for distribution delays. I don’t know if this is true, but if it is, it could mean improved distribution starting today! So far we have seen vaccines administered per day increase rapidly, and there may be a second-order acceleration as well (i.e., the daily increase is itself increasing).

Also note that if you live in a hot spot, your region may achieve herd immunity before the nation as a whole.


[1] I am not an epidemiologist, but for the record, neither is he. As per his bio on StatNews: “Zach Nayer is a transitional year resident physician at Riverside Regional Medical Center in Newport News, Va., and an incoming ophthalmology resident at Harkness Eye Institute at Columbia University in New York City.”

[2] 4.2 is the multiple I get by dividing 83.1 million by 20 million reported cases, but the CDC states a multiple of 4.6 for “total COVID-19 infections were reported”. I don’t know how to explain the discrepancy.

[3] The CDC’s 95% UI for “total COVID-19 infections were reported” is a multiple of 4.0–5.4. Anand et al. report 9.3% with a 95% CI of 8.8%–9.9%. Reese et al. do not provide a CI for the 7.7x multiple, but give a related 7.1x multiple a 95% UI of 5.8–9.0.

[4] If you’re curious, you can look at Zvi’s toy models.

[5] This is really just guesswork. The 50,000 figure is based on the rate of increase from January 5th to January 15th. If you start counting from 1/1 you get 35,000, and from 12/21 you get 30,000. Using 10,000 makes us consistent with the Metaculus estimate.

[6] I expressed confidence after seeing “8 months” cited in multiple reports, but that figure may simply reflect the limits of the available data. It seems the studies may actually be saying “at least 8 months”. From Dan et al.:

Overall, at 5 to 8 months PSO, almost all individuals were positive for SARS-CoV-2 Spike and RBD IgG…. Notably, memory B cells specific for the Spike protein or RBD were detected in almost all COVID-19 cases, with no apparent half-life at 5 to 8 months post-infection… These data suggest that T cell memory might reach a more stable plateau, or slower decay phase, beyond the first 8 months post-infection.

Thanks to Alvaro for pointing this out. Here’s a revised table of results, removing antibody loss considerations from the model:

| Immunity | 10k | 30k | 50k |
|----------|------|------|------|
| 70% | 5/15 | 4/16 | 4/2 |
| 80% | 6/6 | 4/30 | 4/13 |
| 90% | 6/26 | 5/13 | 4/24 |

[7] I do worry that there’s some kind of logistical maximum rate of vaccinations, and it is not realistic to think we could ever be at 6 million / day. You may have heard that NYC alone did 400,000 vaccines / day in 1947, but that was a very different problem. Note also that this depends on vaccines actually being accepted! As I wrote in the appendix here, trust is still low, though it depends on who you ask, and may increase as more people get the vaccine.