Oscar_Cunningham comments on Bayesian probability as an approximate theory of uncertainty? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
As an aside, here's a funny variation of the Absent-Minded Driver problem that I just came up with. Nothing really new, but maybe it will make some features sharper.
You're invited to take part in an experiment, which will last for 10 days. Every day you're offered two envelopes, a red one and a blue one. One of the envelopes contains a thousand dollars, the other is empty. You pick one of the envelopes and receive the money. Also, at the beginning of each day you are given an amnesia pill that makes you forget which day it is, what happened before, and how much money you have so far. At the end of the experiment, you go home with the total money received.
The money is distributed between envelopes in this way: on the first day, the money has 60% chance of being in the red envelope. On each subsequent day, the money is in the envelope that you didn't pick on the first day.
Fun features of this problem:
1) It's a one-player game where you strictly prefer a randomized strategy to any deterministic one. This is similar to the AMD problem, and impossible if you're making decisions using Bayesian probability.
2) Your decision affects the contents of the sealed envelopes in front of you. For example, if you pick the red envelope deterministically, it will be empty >90% of the time. Same for the blue one.
3) Even after you decide to pick a certain envelope, the chance of finding money inside depends not just on your decision, but on the decision process that you used. If you used a coinflip, the chance of finding money is higher than if you chose deterministically.
Solution: I choose red with probability (written out and ROT13) avargl bar bire bar uhaqerq naq rvtugl.
EDIT: V'z fhecevfrq ubj pybfr guvf vf gb n unys.
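A quick numeric check of the spoilered answer (my own sketch, not from the thread): the expected winnings for a strategy that picks red with probability p each day can be written down directly from the problem statement, and a grid search recovers the optimal p.

```python
# Expected winnings (in units of $1000) when picking red with probability p
# each day. Day 1 pays off with probability 0.6*p + 0.4*(1-p). On each of the
# 9 remaining days the money is in the envelope NOT picked on day 1, so the
# chance of winning that day is p*(1-p) + (1-p)*p = 2*p*(1-p).
def expected_winnings(p, n=9, q=0.6):
    return q * p + (1 - q) * (1 - p) + n * 2 * p * (1 - p)

# Grid search for the payoff-maximizing p.
best_p = max((i / 100000 for i in range(100001)), key=expected_winnings)
print(best_p)
```

The maximizer agrees with the ROT13 solution above, and it strictly beats both deterministic strategies (p = 0 and p = 1), illustrating feature 1.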
I get that too. More generally, if there are n+1 rounds and on the first round the difference in probability between red and blue is z, then the optimal probability for choosing red is 1/2 + z/2n. It has to be close to 1/2 for large n, because 1/2 is optimal for the game where z=0, and over ten rounds the loss from deviating from 1/2 after the first round dominates the gain from knowing red is initially favoured.
Are you sure that's not 1/2 + z/4n?
I think he meant "the difference between the probability of red and 1/2" when he said "the difference in probability between red and blue".
Er, right, something like that.
That works too.
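For anyone checking the algebra, here is the calculation both formulas describe (my sketch; I write q for the round-1 probability that the money is in red, and z for the red-minus-blue gap, so q = 1/2 + z/2):

```latex
% n+1 rounds, picking red with probability p each round:
\[
E(p) = q\,p + (1-q)(1-p) + 2n\,p(1-p)
\]
% Setting the derivative to zero:
\[
E'(p) = (2q-1) + 2n - 4np = 0
\quad\Rightarrow\quad
p^{*} = \frac12 + \frac{2q-1}{4n} = \frac12 + \frac{z}{4n}.
\]
```

If z is instead read as q − 1/2 (the gap from one half, as suggested above), the same formula becomes p* = 1/2 + z/(2n), so the two expressions in the thread agree. With n = 9 and q = 0.6 this gives p* = 1/2 + 1/180.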