More substantively, can we express mathematically how the correlation between the leaked signal and the final choice affects the degree of suboptimality in the final payouts?
Naively, in the actual Newcomb's problem, if Omega is only correct 50.05% + epsilon of the time, then CDT seems to do about as well as whatever theory solves this problem. Is there a known general case for this reasoning?
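For reference, a quick break-even derivation, assuming the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one; those figures are my assumption) and writing $q$ for Omega's accuracy:

```latex
\begin{aligned}
\mathbb{E}[\text{one-box}]  &= q \cdot 1{,}000{,}000, \\
\mathbb{E}[\text{two-box}]  &= q \cdot 1{,}000 + (1-q)\cdot 1{,}001{,}000, \\
\text{break-even:}\quad
q \cdot 1{,}000{,}000 &= q \cdot 1{,}000 + (1-q)\cdot 1{,}001{,}000
\;\Rightarrow\;
q^{*} = \tfrac{1{,}001{,}000}{2{,}000{,}000} = 0.5005 .
\end{aligned}
```

So one-boxing only pulls ahead of the CDT choice once Omega's accuracy exceeds about 50.05%; just above that threshold, the two do nearly equally well.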
I am curious what kind of analysis you plan to run on the calibration questions. Obvious things to do:

- For each user, compute the correlation between their stated probabilities and the 0-1 vector of right and wrong answers, then display the correlations in some way (a histogram?).
- For each question, compute the mean (or median) of the stated probability among the correct answers and among the wrong answers, and see how separated they are.
But neither of those feels like a really satisfactory measure of calibration.
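A minimal sketch of both computations, assuming one record per (user, question) answer with the user's stated probability and a 0/1 correctness flag; the column names and toy data are hypothetical:

```python
import pandas as pd

# Hypothetical records: one row per (user, question) answer.
# "prob" is the user's stated probability; "correct" is 1 if right, 0 if wrong.
df = pd.DataFrame({
    "user":     ["a", "a", "a", "b", "b", "b"],
    "question": ["q1", "q2", "q3", "q1", "q2", "q3"],
    "prob":     [0.9, 0.6, 0.7, 0.5, 0.8, 0.4],
    "correct":  [1, 0, 1, 0, 1, 0],
})

# 1. Per-user: Pearson correlation between stated probabilities and the
#    0-1 correctness vector (a point-biserial correlation).
per_user = df.groupby("user").apply(
    lambda g: g["prob"].corr(g["correct"].astype(float))
)
print(per_user)  # these could then be displayed as a histogram

# 2. Per-question: mean stated probability among wrong (0) and correct (1)
#    answers, plus the gap between the two as a separation measure.
per_question = df.groupby(["question", "correct"])["prob"].mean().unstack()
per_question["separation"] = per_question[1] - per_question[0]
print(per_question)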