Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.


Eliezer, I agree that exactly even balances of evidence are rare. However, I would think suspending judgment to be rational in many situations where the balance of evidence is not exactly even. For example, if I roll a die, it would hardly be rational to believe "it will not come up 5 or 6", despite the balance of evidence being in favor of such a belief. If you are willing to make >50% the threshold of rational belief, you will hold numerous false and contradictory beliefs.

Also, I have some doubt about your claim that when "there is no evidence in favor of a complicated proposed belief, it is almost always correct to reject it". If you proposed a complicated belief of 20th century physics (say, Bell's theorem) to Archimedes, he would be right to say he has no evidence in its favor. Nonetheless, it would not be correct for Archimedes to conclude that Bell's theorem is therefore false.

Perhaps I am misunderstanding you.

It would be irrational to believe "it will not come up 5 or 6" outright, because P(P(5 or 6) = 0) = 0: taken as a claim of certainty, you know for certain that it's false. As you said, "Claims about the probability of a given claim being true, helpful as they may be in many cases, are distinct from the claim itself." Before taking up any belief (if the situation demands taking one up, as in a bet, or in living life), a Bayesian would weigh the likelihood of it being true against the likelihood of it being false, and favour the higher one. In this case, the likelihood that "it will not come up 5 or 6" is certainly true is 0, so a Bayesian would not take up that position. Now, you might observe that the belief "1, 2, 3, or 4 will come up", taken as a certainty, also has a likelihood of zero. In the case of a die roll, any statement of this form will be false, so a Bayesian takes up beliefs that talk in probabilities, not certainties. (As Bigjeff explains, "At the most basic level, the difference between Bayesian reasoning and traditional rationalism is a Bayesian only thinks in terms in likelihoods".)
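To make the distinction concrete, here is a minimal sketch (my own illustration, not anything from the thread) of how the probability of the claim differs from the claim itself for a fair die:

```python
from fractions import Fraction

# A fair six-sided die: each face has probability 1/6.
die = {face: Fraction(1, 6) for face in range(1, 7)}

# Probability that the claim "it will not come up 5 or 6" is true on one roll.
p_not_5_or_6 = sum(p for face, p in die.items() if face not in (5, 6))
print(p_not_5_or_6)  # 2/3

# A Bayesian reports this probability rather than asserting the claim
# outright: 2/3 is better than even, but nowhere near certainty.
```

The point of the sketch is that the well-formed Bayesian position is "P(not 5 or 6) = 2/3", not the bare assertion "it will not come up 5 or 6".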

Of course, one can always say "I don't know", but saying "I don't know" yields inferior utility in life compared to being a Bayesian. So, for example, assume that your life depends on a series of die rolls. You can take two positions: 1) You say "I don't know what the outcome will be" on every roll. 2) You bet on every roll according to the information you have (in other words, you say "I believe that outcome X has Y chance of turning up"). Both positions are of course agreeable, but the second gives you a higher payoff in life. Or so Bayesians believe.