cousin_it comments on Popperian Decision making - Less Wrong

-1 points. Post author: curi 07 April 2011 06:42AM

Comment author: cousin_it 07 April 2011 04:47:01PM 5 points

Your last paragraph is wrong. Here's an excruciatingly detailed explanation.

Let's say I am a perfect Bayesian flipping a possibly biased coin. At the outset I have a uniform prior over all possible biases of the coin between 0 and 1. Marginalizing (integrating) that prior, I assign 50% probability to the event of seeing heads on the first throw. Knowing my own neurons perfectly, I believe all the above statements with probability 100%.

The first flip of the coin will still make me update the prior to a posterior, which will have a different mean. Perfect knowledge of myself doesn't stop me from that.
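In standard Beta-Binomial terms the uniform prior is Beta(1, 1), and the predictive probability of heads is the posterior mean; here is a minimal Python sketch of the two steps above:

```python
# Minimal sketch of the update described above, using standard Beta-Binomial
# conjugacy: a uniform prior over the coin's bias is Beta(1, 1), and the
# predictive probability of heads is the posterior mean alpha / (alpha + beta).

def predictive_heads(alpha: float, beta: float) -> float:
    """Probability of heads on the next flip under a Beta(alpha, beta) belief."""
    return alpha / (alpha + beta)

alpha, beta = 1.0, 1.0                # uniform prior, Beta(1, 1)
print(predictive_heads(alpha, beta))  # 0.5 -- integrating the uniform prior

alpha += 1                            # observe one head: posterior is Beta(2, 1)
print(predictive_heads(alpha, beta))  # ~0.667 -- a single flip shifts the mean a lot
```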

Now skip forward. I have flipped the coin a million times, and about half the results were heads. My current probability assignment for the next throw (obtained by integrating my current prior) is 50% heads and 50% tails. I have monitored my neurons diligently throughout the process, and am 100% confident of their current state.

But it will take much more evidence now to change the 50% assignment to something like 51%, because my prior is very concentrated after seeing a million throws.
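A sketch of the same arithmetic after the million flips (assuming roughly 500,000 heads and 500,000 tails, so the posterior is Beta(500001, 500001)) shows how much the sensitivity has dropped:

```python
# Minimal sketch of the situation after a million flips, assuming roughly
# 500,000 heads and 500,000 tails: the posterior over the bias is Beta(500001, 500001).

def predictive_heads(alpha: float, beta: float) -> float:
    return alpha / (alpha + beta)

alpha, beta = 500_001.0, 500_001.0
print(predictive_heads(alpha, beta))  # 0.5 -- same headline number as before any flips

# How many consecutive heads would it now take to move the assignment to 51%?
extra_heads = 0
while predictive_heads(alpha + extra_heads, beta) < 0.51:
    extra_heads += 1
print(extra_heads)                    # ~20,409 -- versus a single head at the outset
```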

The statement "I have perfect knowledge of the current state of my prior" (and its integral, etc.) does not in any way imply that "my current prior is very concentrated around a certain value". It is the latter, not the former, that controls my sensitivity to evidence.

Comment author: Sniffnoy 07 April 2011 10:45:59PM 0 points

Upvoted for giving a good explanation where I failed earlier...

Comment author: timtyler 07 April 2011 05:38:25PM 0 points

"Your last paragraph is wrong. Here's an excruciatingly detailed explanation."

That does clarify what you originally meant. However, this still seems "rather suspicious", because of the 1.0:

"if a Bayesian computer program assigns probability 0.87 to proposition X, then obviously it ought to assign probability 1 to the fact that it assigns probability 0.87 to proposition X."

Comment author: cousin_it 07 April 2011 05:43:03PM 2 points

I'm willing to bite the bullet here because all hell breaks loose if I don't. We don't know how a Bayesian agent can ever function if it's allowed (and therefore required) to doubt arbitrary mathematical statements, including statements about its own algorithm, current contents of memory, arithmetic, etc. It seems easier to just say 1.0 as a stopgap. Wei Dai, paulfchristiano and I have been thinking about this issue for some time, with no results.