Elithrion comments on Personal Evidence - Superstitions as Rational Beliefs - Less Wrong

Post author: OrphanWilde 22 March 2013 05:24PM


Comment author: gwern 22 March 2013 07:50:37PM 10 points

> Objectively, no; as previously mentioned, it shouldn't surprise us that somebody won the lottery. Subjectively, yes; I would certainly update my odds that something other than pure chance is at work if I happened to win the lottery.

Again, why? Suppose we are comparing two models: in one world, there are 1000 haunted houses which are all explained by gaslamping and sleepwalking etc; in the second world, there are 1000 haunted houses and they are all supernatural etc. Upon encountering a haunted house, would you update in favor of 'I am in world two and houses are supernatural'? Would someone reading your experience update? I propose that neither would update, because the evidence is equally consistent with both worlds; so far so good.

Now, if in world 1 there are 1000 frightening houses with the mundane explanations mentioned, and in world 2 there are 1000 frightening houses with the mundane explanations (human biology and mentality and the laws of probability etc having not changed) plus 1000 frightening houses due to supernatural influences, upon encountering a frightening house would you update?

Of course; in world 2 there are more frightening houses, and you have encountered a frightening house, which is twice as likely in world 2 as in world 1 (2000 houses versus 1000). So you are now more inclined than before to think you are in world 2. But so would an observer reading about your experience!
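gwern's update here is just a likelihood-ratio calculation. A minimal sketch in Python, assuming for illustration that both worlds contain the same total number of houses (the `n_houses` figure is my own assumption, not from the thread):

```python
def posterior_world2(prior_w2, n_houses=1_000_000):
    """Posterior probability of world 2 after visiting one frightening house.

    World 1 has 1000 frightening houses; world 2 has 2000 (the 1000 mundane
    ones plus 1000 supernatural ones). With equal totals, a random house is
    twice as likely to be frightening in world 2.
    """
    p_frightening_w1 = 1000 / n_houses
    p_frightening_w2 = 2000 / n_houses
    prior_w1 = 1 - prior_w2
    numerator = p_frightening_w2 * prior_w2
    return numerator / (numerator + p_frightening_w1 * prior_w1)

# Starting from even odds, one frightening house moves you to 2:1 odds:
print(posterior_world2(0.5))  # 2/3
```

Note that `n_houses` cancels out of the ratio, so only the 2:1 count of frightening houses matters; and the same calculation is available to anyone reading the report, which is gwern's point.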

So where does this unique unconveyable evidence (that your post claims your experience has given you) come from?

> And simulation is coming from Robin Hanson's assertion that if you're an important person in the world, you should probably update your priors to suggest you are being simulated; it's a related argument.

Ah. It's coming from anthropics. You're making the claim that Aumannian agreement cannot convey anthropic information.

You realize that both the SIA (Self-Indication Assumption) and the SSA (Self-Sampling Assumption) are hotly debated because each seems to lead to absurd conclusions, right? Aumann agreement, by contrast, just leads to the conclusion 'people are irrational', which certainly doesn't seem absurd to me.

And since one man's modus ponens is another man's modus tollens, why isn't your post just further evidence that anthropic reasoning as currently understood by most people is completely broken and cannot be trusted in anything?

Comment author: Elithrion 23 March 2013 02:18:29AM 3 points

I think there are non-anthropic problems with even rational!humans communicating evidence.

One is that it's difficult to communicate that you're not lying, and it is also difficult to communicate that you're competent at assessing evidence. A rational agent may have priors saying that OrphanWilde is an average LW member, including the associated wide distribution in propensity to lie and competence at judging evidence. On the other hand, rational!OrphanWilde would (hopefully) have a high confidence assessment of himself (herself?) along both dimensions. However, this assessment is difficult to communicate, since there are strong incentives to lie about these assessments (and also a lot of potential for someone to turn out to not be entirely rational and just get these assessments wrong). So, the rational agent may read this post and update to believing it's much more likely that OrphanWilde either lies to people for fun (just look at all those improbable details!) or is incompetent at assessing evidence and falls prey to apophenia a lot.

This might not be an issue were it not for the second problem, which is that communication is costly. If communication were free, OrphanWilde could just tell us every single detail about his life (including in this house and in other houses), and we could then ignore the problem of him potentially being a poor judge of evidence. Alternatively, he could probably perform some very large-scale evidence-assessment test to prove that he is, in fact, competent. However, since communication is costly, this seems impractical in reality. (The lying issue is slightly different, but could perhaps be overcome with some sort of strong precommitment, or an assumption constraining possible motivations combined with a lot of evidence.)

This doesn't invalidate Aumann agreement as such, but certainly seems to limit its practical applications even for rational agents.