
potato comments on Naming the Highest Virtue of Epistemic Rationality - Less Wrong Discussion

-3 Post author: potato 24 October 2011 11:00PM



Comment author: potato 25 October 2011 12:10:15AM 0 points

No wait, of course not, sorry. If P(A) = 1, then P(~A) = 0, and if A turns out to be false, then your score goes down to negative infinity (more or less, I think).
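The comment above seems to refer to the logarithmic scoring rule. A minimal sketch of the point (the function name here is illustrative, not from the thread):

```python
import math

def log_score(p_actual_outcome):
    """Logarithmic score: the log of the probability the agent
    assigned to the outcome that actually occurred."""
    if p_actual_outcome == 0:
        # Certainty in a false claim: log(0), an infinite penalty.
        return float("-inf")
    return math.log(p_actual_outcome)

# Agent asserts P(A) = 1, hence P(~A) = 0. If A is false, the agent
# is scored on the probability it gave to ~A:
print(log_score(0))    # -inf
print(log_score(1.0))  # 0.0, the best possible log score
```

So under this rule, betting everything on a claim that turns out false really does send the score to negative infinity, which is the penalty being discussed.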

Comment author: VincentYu 25 October 2011 12:18:27AM 0 points

If P(A) = 1, then P(~A) = 0

That only works if the agent's beliefs have that kind of consistency. If it is taken for granted that this scoring only applies for agents with completely consistent beliefs (including complete satisfaction of Bayes' theorem), then I don't think this scoring can be applied to any human.

Comment author: potato 25 October 2011 12:22:45AM 0 points

Hmm, I don't know about that. P(a) + P(~a) = 1 seems like something humans do all right with. But of course humans don't really use numbers in the first place, though that doesn't matter: Bayes has been formalized with simple degrees of confidence like "lots", "all", "not very much", "none".

But if you're right, then I'll give up the point and simply penalize for false claims.

But take note that if humans don't have the consistency to satisfy P(a) + P(~a) = 1, they most certainly don't have the consistency to satisfy P(a) = 1 either. So no, you could not get a perfect score by setting all your beliefs to 1, because you can't set all your beliefs to 1.

Comment author: VincentYu 25 October 2011 01:32:32AM 1 point

But take note that if humans don't have the consistency to satisfy P(a) + P(~a) = 1, they most certainly don't have the consistency to satisfy P(a) = 1 either. So no, you could not get a perfect score by setting all your beliefs to 1, because you can't set all your beliefs to 1.

I don't follow the argument. Perhaps we mean different things by 'consistency'? By consistent beliefs, I meant a set of beliefs that cannot be used to derive a contradiction with the usual probability axioms. I was not making a claim about how humans come to believe things.

ETA: About this:

P(a) + P(~a) = 1 seems like something humans do alright with.

I think you place too much trust in the consistency of human beliefs. In fact, I wouldn't trust myself with that. Suppose you ask me to assign subjective probabilities to 50 statements. Immediately afterwards, you give me a list of the negations of these 50 statements. I'm pretty sure I'll violate P(a) + P(~a) = 1 at least once.
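The thought experiment above can be sketched as a simulation. Everything here is an assumption for illustration: in particular, the model that elicited probabilities for a statement and its negation each carry small independent noise.

```python
import random

random.seed(0)

N = 50  # fifty statements, as in the thought experiment

# Hypothetical "true" subjective probabilities for the statements.
p_statements = [random.random() for _ in range(N)]

# Elicited probabilities for the negations: ideally 1 - p, but with
# independent reporting noise (assumed Gaussian, sd 0.05), clipped to [0, 1].
p_negations = [min(1.0, max(0.0, 1 - p + random.gauss(0, 0.05)))
               for p in p_statements]

# Count the pairs that violate P(a) + P(~a) = 1 exactly.
violations = sum(1 for p, q in zip(p_statements, p_negations)
                 if abs(p + q - 1) > 1e-9)
print(f"{violations} of {N} pairs violate P(a) + P(~a) = 1")
```

Under this noise model, essentially every pair violates the identity when held to exact equality, which is the sense in which VincentYu expects at least one violation; potato's reply below is that what matters is the size of the violations, not their count.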

Comment author: potato 25 October 2011 02:06:51AM 0 points

But you'll probably violate it only within some reasonable error range. I doubt you would ever assign anything as high as 150% in total to (a or ~a) if you actually performed this test. And even so, 1/50 isn't bad.