He mentioned R0, which measures the contagiousness of a disease in a completely susceptible population (it's the average number of people infected by one infectious person). It's used to assess an initial outbreak and to predict the number of infections, the lethality, and a bunch of other things. You're talking about the IFR (infection fatality rate), which measures the lethality. Most of the models are built on R0, but it's not always the best seed for the predictions. (
https://www.the-scientist.com/features/why-r0-is-problematic-for-predicting-covid-19-spread-67690)
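To make the "seed for the predictions" bit concrete, here's a rough back-of-envelope sketch (my own illustration, not from the article) of how R0 drives a projection. The R0 value and generation count below are made-up numbers:

```python
# Minimal sketch of how R0 seeds an infection projection. The R0 value and
# number of generations below are purely illustrative, and the model ignores
# the depletion of susceptibles, so it only describes the early, unchecked
# phase of an outbreak.

def naive_cases_by_generation(r0, generations, initial_cases=1):
    """New cases in each generation if every case infects r0 others on average."""
    return [initial_cases * r0 ** n for n in range(generations + 1)]

per_gen = naive_cases_by_generation(r0=2.0, generations=10)  # r0=2.0 is illustrative only
print("New cases per generation:", [round(c) for c in per_gen])
print(f"Cumulative after 10 generations: {sum(per_gen):,.0f}")
```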
Many people are using or citing the numbers calculated from China, and those are most likely not the R0 of the US because of population and behavior differences. The only place where the initial R0 of a US outbreak exceeded 3 was New York, at 3.86. New Jersey is the only other state topping 2.5, at 2.62. The average R0 across the US was 2.26. (
https://covid19-projections.com/infections-tracker/)
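To show why plugging in another population's R0 skews a projection, here's a minimal SIR-style sketch comparing the three US estimates above. Only the R0 values come from the tracker; the population size, infectious period, and seed cases are assumptions I picked for illustration:

```python
# Minimal discrete-time SIR sketch comparing the R0 estimates cited above
# (2.26 US average, 2.62 NJ, 3.86 NY). Population size, infectious period,
# and initial cases are assumptions made for illustration only.

def sir_peak_and_total(r0, n=1_000_000, infectious_days=5, initial_infected=10, days=365):
    """Return (peak prevalence, total ever infected) for a simple SIR run."""
    gamma = 1.0 / infectious_days   # recovery rate per day
    beta = r0 * gamma               # transmission rate implied by R0
    s, i, r = n - initial_infected, float(initial_infected), 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak, n - s

for label, r0 in [("US average", 2.26), ("New Jersey", 2.62), ("New York", 3.86)]:
    peak, total = sir_peak_and_total(r0)
    print(f"{label} (R0={r0}): peak prevalence ~{peak:,.0f}, total infected ~{total:,.0f}")
```

Even in this toy model, the projected peak and total diverge a lot between 2.26 and 3.86, which is the whole problem with borrowing another population's number.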
One of the interesting rates to track is the effective reproduction number (Re or Rt). This is the average number of people infected by one infectious person after health, policy, and environmental variables have been factored in. In the US as of 11/20/20, the Re (or Rt, if you prefer) ranges from 0.91 in Illinois to 1.31 in New Hampshire, with an outlier in Oregon at 1.63. (
https://www.statista.com/statistics/1119412/covid-19-transmission-rate-us-by-state/)
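A rough way to think about how Re/Rt relates to R0: it's roughly R0 scaled down by the fraction of people still susceptible and by whatever contact reduction policy and behavior achieve. Sketch below, with the susceptible fraction and contact reduction made up for illustration (only the R0 of 2.26 comes from the numbers above):

```python
# Rough decomposition of the effective reproduction number. The susceptible
# fraction and contact-reduction values below are invented for illustration;
# the real takeaway is the threshold: Rt above 1 means growth, below 1 decline.

def effective_r(r0, susceptible_fraction, contact_reduction):
    """Rt ~= R0 * (share still susceptible) * (1 - reduction in effective contact)."""
    return r0 * susceptible_fraction * (1.0 - contact_reduction)

rt = effective_r(r0=2.26, susceptible_fraction=0.85, contact_reduction=0.40)
print(f"Rt ~= {rt:.2f} -> {'growing' if rt > 1 else 'shrinking'} epidemic")
```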
The first article is very good and contains a quote that says what I've been trying to say about all of this since day 1, whenever people start pounding on the models:
"All models are wrong, but some are useful. You just hope you’re in the useful category." — Benjamin Ridenhour, University of Idaho