
cousin_it comments on Bayesian probability as an approximate theory of uncertainty? - Less Wrong Discussion

Post author: cousin_it, 26 September 2013 09:16AM (16 points)




Comment author: Tyrrell_McAllister 26 September 2013 08:47:57PM * 4 points

> 1) It's a one-player game where you strictly prefer a randomized strategy to any deterministic one. This is similar to the AMD problem, and impossible if you're making decisions using Bayesian probability.
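For concreteness, the AMD (absent-minded driver) game quoted above can be sketched numerically. The payoffs below are the standard textbook numbers for that problem (0 for exiting at the first intersection, 4 for exiting at the second, 1 for never exiting), assumed here purely for illustration; the driver cannot tell the two intersections apart, so his only choice is a single probability of continuing:

```python
def expected_payoff(p):
    """Expected payoff when the driver continues with probability p
    at every intersection (he cannot distinguish them):
    exit at first -> 0, exit at second -> 4, never exit -> 1."""
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Deterministic strategies are the endpoints p = 0 and p = 1.
best_deterministic = max(expected_payoff(0.0), expected_payoff(1.0))  # 1.0

# The payoff 4p - 3p^2 is maximized at the interior point p = 2/3,
# giving 4/3 -- strictly better than any deterministic strategy.
best_randomized = expected_payoff(2 / 3)
```

So a strict preference for randomization really does arise here, which is what makes the example awkward for a straightforwardly Bayesian decision procedure.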

Having a random-number generator is equivalent to having a certain very restricted kind of memory.

For example, if you have a pseudo-random number generator in a computer, then the generator requires a seed, and this seed cannot be the same every day. The change of the seed from day to day constitutes a trace in the computer of the days' passing. Therefore, you and the computer, taken together, "remember", in a certain very restricted sense, the passing of the days. Fortunately, this restricted kind of memory turns out to be just enough to let you do far better than you could have done with no memory at all. (I gave this argument in slightly more detail in this old comment thread.)
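The seed-as-memory point can be sketched in a few lines of Python; `random.Random` here merely stands in for whatever PRNG the computer uses, and the "days" are simulated by successive draws:

```python
import random

# A minimal sketch of "PRNG state as restricted memory": the agent's
# own memory is wiped between days, but the generator's internal state
# advances with each draw, leaving a trace of the days' passing.
rng = random.Random(42)          # seeded once, before day 1

state_before_day1 = rng.getstate()
draw_day1 = rng.random()         # day 1's decision consumes one draw

state_before_day2 = rng.getstate()
draw_day2 = rng.random()         # day 2 starts from a different state

# The two states differ: the agent-plus-computer system "remembers",
# in this restricted sense, that a day has passed.
assert state_before_day1 != state_before_day2
```

The agent cannot read a day count off this state, which is why the memory is only of a "very restricted kind" — but the changing state is exactly what lets the realized draws differ from day to day.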

So, the presence of a random-number generator is just a weakening of the requirement of complete amnesia. However, given this restricted kind of memory, you are making your decisions in accordance with Bayesian probability theory. [ETA: I misunderstood cousin_it's point when I wrote that last sentence.]

Comment author: cousin_it 26 September 2013 09:26:16PM 1 point

> However, given this restricted kind of memory, you are making your decisions in accordance with Bayesian probability theory.

It seems to me that if you have a coin, your probability distribution on envelopes should still depend on the strategy you adopt, not just on the coin. Are you sure you're not sneaking in "planning-optimality" somehow? Can you explain in more detail why the decision on each day is separately "action-optimal"?

Comment author: Tyrrell_McAllister 26 September 2013 10:55:36PM 1 point

I think I misunderstood what you meant by "impossible if you're making decisions using Bayesian probability." I wasn't trying to avoid being "planning-optimal". It is not as though the agent is thinking, "The PRNG just output 0.31. Therefore, this envelope is more likely to contain the money today," which I guess is what "action-optimal" reasoning would look like in this case.

When I said that "you are making your decisions in accordance with Bayesian probability theory", I meant that your choice of plan is based on your beliefs about the distribution of outputs generated by the PRNG. These beliefs, in turn, could be the result of applying Bayesian epistemology to your prior empirical experience with PRNGs.

Comment author: cousin_it 26 September 2013 11:30:47PM 2 points

Yeah. It looks like there's a discontinuity between using an RNG and having perfect memory. Perfect memory lets us get away with "action-optimal" reasoning, but if it's even a little imperfect, we need to go "planning-optimal".