It is somewhat puzzling to me that my PredictionBook evangelizing is well received here, yet the fraction of LessWrongers who actually use PredictionBook is vanishingly small. Frankly, it is a scandal for Less Wrong that its high-karma members don't bother to publicly record their own predictions and yet continue to expect others to believe in the efficacy of the techniques taught in its core texts, like The Sequences.
If you want us to believe your beliefs pay rent, why not show us the receipts?
Yes, this is what I was trying to say. I see how the phrase "conditionality of the reward on your assessed probability" could describe Pascal's Wager, but not how it could describe Pascal's Mugging.
More concisely than the original/gwern, the algorithm used by the mugger is roughly:

1. Find your assessed probability of the mugger being able to deliver a given reward, being careful to specify the size of the reward in the conditions for the probability.
2. Offer an exchange such that U(payment to mugger) < U(reward) * P(reward).
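The two steps above can be sketched in code. This is a toy illustration, not anyone's actual decision procedure; the function name and the specific numbers are my own assumptions:

```python
def mugger_offer(p_reward: float, u_reward: float) -> float:
    """Hypothetical mugger's algorithm: given the victim's assessed
    probability P(reward) and the utility U(reward) of the promised
    reward, demand a payment satisfying
        U(payment) < P(reward) * U(reward),
    so the victim's expected utility favors paying."""
    expected_gain = p_reward * u_reward
    # Demand just under the victim's expected gain from paying.
    return expected_gain * 0.99

# Even with a tiny assessed probability, a sufficiently huge promised
# reward makes the "rational" payment substantial (roughly 990 here).
payment = mugger_offer(p_reward=1e-12, u_reward=1e15)
print(payment)
```

The point of step 1 is that the mugger gets to name the reward after you've (implicitly) fixed how your probability falls off with reward size, so the product can be made as large as needed.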
This is an issue for AI design because if you use a prior based on Kolmogorov complexity, then it's relatively straightforward to find such a reward: even very large numbers have relatively low complexity, and therefore relatively high prior probabilities.
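A crude way to see why: under a complexity-based prior, a hypothesis's probability is roughly 2^(-K), where K is its description length in bits, while the magnitude of the promised reward can grow much faster than its description length. The sketch below uses string length as a stand-in for Kolmogorov complexity (a loose upper bound, and my own simplification), comparing in log space to avoid overflow:

```python
import math

def description_length_bits(expr: str) -> int:
    # Crude stand-in for Kolmogorov complexity: 8 bits per character
    # of the expression describing the number.
    return 8 * len(expr)

# "10**1000" is only 8 characters (64 bits of description), yet the
# number it denotes has ~3322 bits of magnitude.
reward_expr = "10**1000"
bits = description_length_bits(reward_expr)     # 64
log2_reward = 1000 * math.log2(10)              # ~3321.9

# log2(prior * reward) = log2(reward) - K > 0, so the expected value
# is dominated by the reward despite the complexity penalty.
print(log2_reward > bits)
```

Since the reward's magnitude outruns the complexity penalty, an agent with such a prior can always be offered a mugger's exchange with positive expected value.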