Microcovid.org was a vital tool to many of us during the pandemic; I made a whole speech about it back at summer solstice. The back and forth over how finished covid is, plus a dependence entirely on volunteers, has pushed microcovid into something of a limbo. It's not clear what the best next step for it is. One option would be to update microcovid for new problems, but that’s a lot of work and I have a lot of uncertainty about how valuable any given improvement is. So I’d like to collect some data.
- How are you using microcovid now?
- What is the minimum viable change that would create value for you, and what would that value be? The more explicit the better here: comments like “feature X would be worth $n to me” or “it enabled me to find a collaborator, which then enabled a project” are more useful than “I like it a lot”.
- What’s your dream microcovid, and what value would that create for you?
- Anything else you’d like to share on this topic?
I’ve asked LW to enable the experimental agree/disagree feature for this post. The benefit is that you can boost particular data points without writing anything. The risk is that an individual's preferences get counted repeatedly: 5 people with the same opinion who each write a comment and agree with all of the others’ identical comments should be counted as 5 people, not 25. So I ask that you:
- Not agree with comments substantially overlapping with a comment you write
- If multiple comments make the same point, only click agree for one of them
This isn’t an exact science because comments will sometimes contain more than one point or make very similar but not totally identical points, but please do your best.
Given that the early data I've seen suggests that efficacy of 3 doses vs. Omicron is similar to that of 2 doses vs. Delta -- probably a bit lower, but at least in the same universe -- I've been using it largely as is, multiplying the final output by 2 to 3 based on what I've seen about the household transmission rate of Omicron relative to Delta. I know some other boosted people who have used it in a similar fashion. There's so much uncertainty in the model assumptions that its best use, in my view, is to get a very broad-strokes, order-of-magnitude idea of the risk, which has been extremely useful for friends and relatives who have just wanted a baseline idea of whether the risk of getting COVID when participating in a particular activity is more like .01% or .1% or 1% or 10%. (Note: I doubt that said friends and relatives would have been able to use it in this way without my help, since it requires a little math and they're not math types.) So I guess my main recommendations would be:
- Don't get rid of it even if you aren't confident in the Omicron data: if you can produce results that are probably in the right order of magnitude, it's still useful! If you aren't up for a full Omicron overhaul but think there's some back-of-the-envelope adjustment that could give results in roughly the right order of magnitude, applying that -- with suitable caveats about accuracy -- would be preferable to taking the site down or leaving it as is.
- It's easy to forget how many people are not math people whatsoever. Best practice in risk communication is often considered to be communicating numbers as percentages, as well as contextualized frequencies -- not just 'X-in-a-million', but something like "X out of Y people (for context, Y is roughly the number of people living in Z)" -- since there are a lot of people who don't really understand percentages and need a little context to understand frequencies. In my ideal world the output would present the chance of getting COVID from this specific activity both as a percentage and as a contextualized frequency, as well as the chance of getting COVID from this activity over a year, under the assumption that you do it every N weeks, where N can be entered by the user.
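To make the multiplier and output-format ideas above concrete, here's a minimal back-of-the-envelope sketch in Python. It's not based on microcovid's actual code; the function names, the ~2.5x Omicron factor, and the example numbers are just illustrative assumptions.

```python
def adjusted_per_event_risk(microcovids: float, omicron_factor: float = 2.5) -> float:
    """Scale a Delta-era microCOVID estimate (1 microCOVID = a one-in-a-million
    chance of infection) by a rough 2-3x Omicron household-transmission factor."""
    return microcovids * omicron_factor / 1_000_000

def contextualized_frequency(risk: float) -> str:
    """Express a probability as 'about 1 in X', which many non-math people
    find easier to grasp than a percentage."""
    return f"about 1 in {round(1 / risk):,}"

def annual_risk(per_event_risk: float, every_n_weeks: float) -> float:
    """Chance of getting COVID at least once over a year of repeating the
    activity every N weeks (treated as 52/N independent events)."""
    return 1 - (1 - per_event_risk) ** (52 / every_n_weeks)

# e.g. an activity the calculator scores at 400 microCOVIDs, done every 2 weeks:
p = adjusted_per_event_risk(400)
print(f"{p:.2%} per event ({contextualized_frequency(p)})")  # 0.10% per event (about 1 in 1,000)
print(f"{annual_risk(p, 2):.1%} over a year")                # 2.6% over a year
```

The independence assumption overstates the annual number a bit when events are correlated, but for the broad-strokes, order-of-magnitude use I'm describing it seems close enough.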