Ty-Guy9 comments on Making Beliefs Pay Rent (in Anticipated Experiences) - Less Wrong

110 Post author: Eliezer_Yudkowsky 28 July 2007 10:59PM

Comment author: Ty-Guy9 20 March 2015 09:05:27AM 0 points

While I fully agree with the principle of the article, something stuck out to me about your comment:

In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because they are so vastly outnumbered by false beliefs for which I can find no evidence.

What I noticed was that you were basically defining a universal prior for beliefs, as much more likely false than true. From what I've read about Bayesian analysis, a universal prior is nearly undefinable, so after thinking about it a while, I came up with this basic counterargument:

You say that true beliefs are vastly outnumbered by false beliefs, but I say, how could you know of the existence of all these false beliefs, unless each one had a converse, a true belief opposing it that you first had some evidence for? For otherwise, you wouldn't know whether it was true or false.

You may then say that most true beliefs don't just have a converse. They also have many related false beliefs opposing them. But I would say, those are merely the converses that spring from the connections of that true belief with its many related true beliefs.

By this, I hope I've offered evidence that a fifty-fifty universal T/F prior is at least as likely as one considering most unconsidered ideas to be false. (And I would describe my further thoughts if I thought they would be useful here, but, silly me, I'm replying to a post from almost 8 years ago.)

Comment author: CBHacking 18 January 2016 11:31:39PM 0 points

I don't think "converse" is the word you're looking for here - possibly "complement" or "negation" in the sense that (A || ~A) is true for all A - but I get what you're saying. Converse might even be the right word for that; vocabulary is not my forte.

If you take the statement "most beliefs are false" as given, then "the negation of most beliefs is true" is trivially true but adds no new information. You're treating positive and negative beliefs as though they're the same, and that's absolutely not true. In the words of this post, a positive belief provides enough information to anticipate an experience. A negative belief does not (assuming there are more than two possible beliefs). If you define "anything except that one specific experience" as "an experience", then you can define a negative belief as a belief, but at that point I think you're actually falling into exactly the trap expressed here.

If you replace "belief" with "statement that is mutually incompatible with all other possible statements that provide the same amount of information about its category" (which is a possibly-too-narrow alternative; unpacking words is hard sometimes), then the claim becomes: true statements of that kind are vastly outnumbered by false statements of that kind. I anticipate you would find that claim true. In other words, you and Eliezer do not actually anticipate different percentages of such statements being true.

As for universal priors, the existence of many incompatible possible (positive) beliefs in one space (such that only one can be true) gives a strong prior that any given such belief is false. If I have only two possible beliefs and no other information about them, then it only takes one bit of evidence - enough to rule out half the options - to decide which belief is likely true. If I have 1024 possible beliefs and no other evidence, it takes 10 bits of evidence to decide which is true. If I conduct an experiment that finds that belief 216 +/- 16 is true, I've narrowed my range of options from 1024 to 33, a gain of just less than 5 bits of evidence. Ruling out one more option gives the last of that 5th bit. You might think that eliminating ~96.8% of the possible options sounds good, but it's only half of the necessary evidence. I'd need to perform another experiment that can eliminate just as large a percentage of the remaining values to determine the correct belief.
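The arithmetic in that paragraph can be checked directly: with N equally likely candidate beliefs, the prior on any one of them is 1/N, and an observation that narrows N candidates down to k yields log2(N/k) bits of evidence. A minimal sketch (the function name and the specific numbers are just the ones from the example above):

```python
import math

def bits_of_evidence(total_options: int, remaining_options: int) -> float:
    """Bits gained by narrowing total_options equally likely
    candidates down to remaining_options."""
    return math.log2(total_options / remaining_options)

# 1024 equally likely beliefs: singling out one requires 10 bits total.
print(bits_of_evidence(1024, 1))              # 10.0

# An experiment locating the answer at 216 +/- 16 leaves 33 candidates:
# just under 5 of the needed 10 bits.
print(round(bits_of_evidence(1024, 33), 2))   # 4.96

# Ruling out one more candidate (33 -> 32) completes the 5th bit.
print(bits_of_evidence(1024, 32))             # 5.0

# Eliminating 991 of 1024 options sounds impressive (~96.8%)...
print(round(1 - 33 / 1024, 3))                # 0.968
```

This is why "96.8% of options eliminated" is only half the evidence needed: bits are logarithmic in the number of surviving candidates, so the second half of the search is as expensive as the first.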

Comment author: gjm 19 January 2016 11:36:55AM 1 point

If you have an arbitrary proposition -- a random sequence of symbols constrained only by the grammar of whatever language you're using -- then perhaps it's about equally likely to be true or false, since for each proposition p there's a corresponding proposition not p of similar complexity.

But the "beliefs" people are mostly interested in are things like these:

  • There is exactly one god, who created the universe and watches over us; he likes forgiveness, incense-burning, and choral music, and hates murder, atheism and same-sex marriage.
  • Two nearby large objects, whatever they are, will exert an attractive force on one another proportional to the mass of each and inversely proportional to the square of the distance between them.

and the negations of these are much less interesting because they say so much less:

  • Either there is no god or there are multiple gods, or else there is one god but it either didn't create the universe or doesn't watch over us -- or else there is one god, who created the universe and watches over us, but its preferences are not exactly the ones stated above.
  • If you have two nearby objects, whatever force there may be between them is not perfectly accurately described by saying it's proportional to their masses, inversely proportional to the square of the distance, and unaffected by exactly what they're made of.

So: yeah, sure, there are ways to pick a "random" belief and be pretty sure it's correct (just say "it isn't the case that" followed by something very specific) but if what you're picking are things like scientific theories or religious doctrines or political parties then I think it's reasonable to say that the great majority of possible beliefs are wrong, because the only beliefs we're actually interested in are the quite specific ones.