DavidS

Trevor Bedford took a crack at estimating the steady state back in October (so pre-Omicron). He came up with estimates of 20-30% of the population infected annually and deaths of 40K-100K per year in the US. https://twitter.com/trvrb/status/1448297978419101696 . Unfortunately, he didn't show enough of his work for me to understand where the 20-30% number comes from. The deaths estimate is just the number of infections multiplied by the IFR. The big question mark here is whether high-risk people will continue to get boosters; Bedford is guessing yes.
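As a back-of-the-envelope check (my own arithmetic, not Bedford's, assuming a US population of roughly 330 million), these figures imply an IFR somewhere around 0.04%-0.15%:

```python
# Back-of-envelope check of Bedford's steady-state figures.
# Assumption (mine, not Bedford's): US population ~330 million.
POP = 330e6

low_infections = 0.20 * POP   # 20% infected annually -> 66M infections
high_infections = 0.30 * POP  # 30% infected annually -> 99M infections

# Since deaths = infections * IFR, the implied IFR range is:
ifr_low = 40e3 / high_infections    # fewest deaths over most infections
ifr_high = 100e3 / low_infections   # most deaths over fewest infections

print(f"implied IFR: {ifr_low:.2%} to {ifr_high:.2%}")
```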
Here is my own attempt to estimate number of infections. I googled "how long does covid acquired immunity last" and looked for useful studies. My impression is that no... (read more)
I am curious what kind of analysis you plan to run on the calibration questions. Obvious things to do:
For each user, compute the correlation between their probabilities and the 0-1 vector of right and wrong answers. Then display the correlations in some way (a histogram?).
For each question, compute the mean (or median) of the probability for the correct answers and for the wrong answers, and see how separated they are.
But neither of those feels like a really satisfactory measure of calibration.
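For what it's worth, here is a sketch of both computations (the function names and data layout are my own invention; I'm assuming each user contributes a vector of stated probabilities and a 0-1 vector of outcomes):

```python
import numpy as np

def user_correlation(probs, outcomes):
    """Pearson correlation between one user's stated probabilities
    and the 0-1 vector of right/wrong answers."""
    return np.corrcoef(np.asarray(probs, float),
                       np.asarray(outcomes, float))[0, 1]

def question_separation(probs, outcomes):
    """For one question across all users: mean probability stated by
    users who answered correctly minus mean probability stated by
    users who answered incorrectly."""
    p = np.asarray(probs, float)
    o = np.asarray(outcomes, bool)
    return p[o].mean() - p[~o].mean()

# toy data: a user whose confidence tracks correctness fairly well
print(user_correlation([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

A histogram of the `user_correlation` values across users would be the first display suggested above; neither quantity is a calibration measure in the strict sense.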
"Naively in the actual Newcombe's problem if omega is only correct 1/999,000+epsilon percent of the time…"
I'd like to argue with this by way of a parable. The eccentric billionaire, Mr. Psi, invites you to his mansion for an evening of decision theory challenges. Upon arrival, Mr. Psi's assistant brings you a brandy and interviews you for hours about your life experiences, religious views, favorite philosophers, ethnic and racial background … You are then brought into a room. In front of you is a transparent box with a $1 bill in it, and an opaque box. Mr. Psi explains:
"You may take just the opaque box, or both boxes. If I predicted... (read more)
A few years ago, I tried to write a friendly introduction to this technical part.
The grammar of the sentence is a bit hard to follow. When I am presenting this paradox to friends (I have interesting friends), I hand them a piece of paper with the following words on it:
Take another piece of paper and copy these words:
"Take another piece of paper and copy these words: "QQQ" Then replace the three consecutive capital letters with another copy of those words. The resulting paragraph will make a false claim."
Then replace the three consecutive capital letters with another copy of those words. The resulting paragraph will make a false claim.
I urge you to carry out the task. You should wind up with a paper that has the... (read more)
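If you'd rather not fetch paper, the copying task is easy to simulate; here's a sketch (the string is the handout above, and the substitution is performed exactly once):

```python
# The words on the handed-over piece of paper:
words = ('Take another piece of paper and copy these words: "QQQ" '
         'Then replace the three consecutive capital letters with '
         'another copy of those words. The resulting paragraph will '
         'make a false claim.')

# Carrying out the task: replace QQQ with a copy of the words themselves.
paragraph = words.replace("QQQ", words, 1)
print(paragraph)
```

Note that the resulting paragraph still contains "QQQ" (inside its inner quotation), and that the paragraph's claim about itself is exactly what generates the paradox.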
Well, I was trying to make the simplest possible example. We can of course add the monkey to our pool of experts. But part of the problem of machine learning is figuring out how long we need to watch an expert fail before we go to the monkey.
Suppose there are two experts, and two horses. Expert 1 always predicts horse A, expert 2 always predicts horse B, and the truth is that the winning horse cycles ABABABABABA... The frequentist randomizes his choice of expert according to the weights; the Bayesian always chooses the expert who currently has more successes, and flips a coin when the experts are tied. (Disclaimer: I am not saying that these are the only possible strategies consistent with these philosophies, just that they seem like the simplest possible instantiations of "when I actually choose which person to follow on a given round, I randomize according to my weights, whereas a Bayesian would always want... (read more)
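To make the comparison concrete, here is a quick simulation of my two strategies on the cycling sequence (my own sketch; "Bayesian" here means follow-the-leader with a coin flip on ties, and the frequentist is multiplicative weights with random sampling):

```python
import random
random.seed(0)

T = 10_000
winners = ["A", "B"] * (T // 2)  # the winning horse cycles ABAB...

# Follow-the-leader: back whichever expert has more successes so far,
# flipping a coin on ties. Expert 1 always says A, expert 2 always says B.
s1 = s2 = 0
ftl_correct = 0
for w in winners:
    if s1 > s2:
        pick = "A"
    elif s2 > s1:
        pick = "B"
    else:
        pick = random.choice("AB")
    ftl_correct += (pick == w)
    if w == "A":
        s1 += 1
    else:
        s2 += 1

# Multiplicative weights: keep a weight per expert, sample an expert in
# proportion to the weights, and shrink the weight of whoever was wrong.
eta = 0.1
w1 = w2 = 1.0
mw_correct = 0
for w in winners:
    pick = "A" if random.random() < w1 / (w1 + w2) else "B"
    mw_correct += (pick == w)
    if w == "A":
        w2 *= 1 - eta
    else:
        w1 *= 1 - eta

print(f"follow-the-leader: {ftl_correct / T:.2f}")       # ≈ 0.25
print(f"multiplicative weights: {mw_correct / T:.2f}")   # ≈ 0.50
```

Follow-the-leader is wrong on every even round (the leader always predicts the horse that just won, which is about to lose) and only 50/50 on the tied odd rounds, while the randomized weights stay near balanced and keep a 50% hit rate.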
I thought it was interesting too. As far as I can tell, your result is special to the situation of two bettors and two events. The description I gave describes a betting method when there are more than two alternatives, and that method is strategy proof, but it is not fair, and I can't find a fair version of it.
I am really stumped about what to do when there are three people and a binary question. Naive approaches give no money to the person with the median opinion.
Here is another attempt to present the same algorithm, with the goal of making it easier to memorize:
"Each puts in the square of their surprise, then swap."
To spell this out, I predict that some event will happen with probability 0.1, you say it is 0.25. When it happens, I am 0.9 surprised and you are only 0.75 surprised. So I put down (0.9)^2 D, you put down (0.75)^2 D, and we swap our piles of money. Since I was more surprised, I come out the loser on the deal.
"Square of the surprise" is a quantity commonly used to measure the error of predictive agents in machine learning; it is also known as the Brier score. So we could describe this rule as "each bettor pays the other his or her Brier score." There was some discussion of the merits of various scoring systems in an earlier post of Coscott's.
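A minimal sketch of the rule for a binary event (the function name is mine; stakes are in units of D):

```python
def brier_swap(p1, p2, outcome):
    """Settle the bet "each puts in the square of their surprise, then swap."

    p1, p2: the two bettors' probabilities for the event.
    outcome: 1 if the event happened, 0 if it did not.
    Returns bettor 1's net gain in units of D; bettor 2's is the negative.
    """
    surprise1 = abs(outcome - p1)  # e.g. predicting 0.1 and seeing the event -> 0.9
    surprise2 = abs(outcome - p2)
    # Each puts in the square of their surprise, then the piles are swapped:
    return surprise2**2 - surprise1**2

# The example above: I say 0.1, you say 0.25, and the event happens.
print(brier_swap(0.1, 0.25, 1))  # ≈ -0.2475: I come out the loser
```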
As a minor addendum, I asked microcovid what it thought about spending 4 hours a day indoors and unmasked with 20 people, but with a community incidence of 0.5%, attempting to simulate 2019 living with broad acquired immunity. It thinks this is 19,000 microcovids, suggesting it would still lead to infection within about 50 days. This is depressing; I had hoped for a lot more gain than that.
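The "50 days" is just geometric-distribution arithmetic (my own sketch, treating the 19,000 microcovids as an independent daily risk, which is how I am reading the microcovid output):

```python
import math

daily_risk = 19_000 / 1_000_000  # 19,000 microcovids = 1.9% chance per day

# Mean of a geometric distribution: expected days until first infection.
expected_days = 1 / daily_risk                            # ≈ 53

# Median: the day by which infection is more likely than not.
median_days = math.log(0.5) / math.log(1 - daily_risk)    # ≈ 36

print(f"expected days to infection: {expected_days:.0f}")
print(f"median days to infection:  {median_days:.0f}")
```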