PhilGoetz comments on Sufficiently Advanced Sanity - Less Wrong

Post author: Eliezer_Yudkowsky 20 December 2009 06:11PM




Comment author: Eliezer_Yudkowsky 20 December 2009 11:53:25PM 5 points

And that was the Bayesian flaw, though no, the story wasn't about AI.

The probability of seeing some amazingly sane things mixed with some apparently crazy things, given that the speaker is much saner (and honest), is not the same as the probability that the speaker is much saner and honest, given that you see that mix of sane and apparently crazy things.

For example, someone could grab some of the material from LW, use it without attribution, and mix it with random craziness.

Comment author: PhilGoetz 23 December 2009 05:12:50AM 0 points

P(sane things plus crazy things | speaker is saner) * P(speaker is saner) = P(speaker is saner | sane things plus crazy things) * P(sane things plus crazy things)

The fact that P(sane things plus crazy things | speaker is saner) ≠ P(speaker is saner | sane things plus crazy things) isn't a problem, if you handle your priors correctly.
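The gap between the likelihood and the posterior can be sketched numerically. The prior and likelihood values below are purely illustrative assumptions (nothing in the thread specifies numbers); the point is only that a high likelihood is compatible with a low posterior when the prior is small:

```python
# Illustrative numbers only -- assumed for this sketch, not from the thread.
p_saner = 0.01                # P(speaker is much saner) -- a low prior
p_mix_given_saner = 0.5       # P(sane + crazy mix | speaker is saner)
p_mix_given_not = 0.05        # P(sane + crazy mix | speaker is not saner)

# Law of total probability: P(mix) over both hypotheses
p_mix = p_mix_given_saner * p_saner + p_mix_given_not * (1 - p_saner)

# Bayes' theorem: P(saner | mix) = P(mix | saner) * P(saner) / P(mix)
p_saner_given_mix = p_mix_given_saner * p_saner / p_mix

print(p_mix_given_saner)      # likelihood: 0.5
print(p_saner_given_mix)      # posterior: ~0.092, an order of magnitude smaller
```

With these assumed numbers, observing the mix is strong evidence (it raises the probability from 1% to about 9%), yet the posterior remains far below the likelihood — exactly the conflation the comment above points at.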

I think I misinterpreted your original question as meaning "Why is this problem fundamentally difficult even for Bayesians?", when it was actually, "What's wrong with the reasoning used by the speaker in addressing this problem?"