I find it a convincing description of what someone saner than oneself looks like, i.e. P(Looks Sane|Sane) is high. However, there may be many possible entities which Look Sane but are not. For example, a flawed or malicious genius who produces genuine gems mixed with actual crackpottery or poisoned, seemingly-sane ideas beyond one's capacity to detect. If a lot more things Look Sane than are Sane, then you get turned into paperclips for neglecting to consider P(Sane|Looks Sane).
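As a rough illustration of the asymmetry (all numbers below are invented for the sake of the example, not taken from the story): even if P(Looks Sane|Sane) is nearly 1, P(Sane|Looks Sane) can be small whenever genuinely saner speakers are much rarer than convincing counterfeits.

```python
# Invented, purely illustrative numbers.
p_sane = 0.001                   # prior: genuinely saner speakers are rare
p_looks_sane_given_sane = 0.99   # a saner speaker almost always Looks Sane
p_looks_sane_given_not = 0.01    # but some flawed/malicious geniuses Look Sane too

# Bayes' rule for the quantity that actually matters: P(Sane | Looks Sane)
numerator = p_looks_sane_given_sane * p_sane
evidence = numerator + p_looks_sane_given_not * (1 - p_sane)
p_sane_given_looks_sane = numerator / evidence

print(p_sane_given_looks_sane)   # ~0.09, despite P(Looks Sane | Sane) = 0.99
```

With these made-up base rates, roughly ten out of eleven speakers who Look Sane are counterfeits.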
Given the history of your thinking, I guess that in the story the entity in question is an AI, and the speaker is arguing why he is sure that the AI is super-wise? Hence the importance you now attach to the matter.
And that was the Bayesian flaw, though no, the story wasn't about AI.
The probability that you see some amazingly sane things mixed with some apparently crazy things, given that the speaker is much saner (and honest), is not the same as the probability that you're dealing with a much saner and honest speaker, given that you see a mix of surprisingly and amazingly sane things with apparently crazy things.
For example, someone could grab some of the material from LW, use it without attribution, and mix it with random craziness.
Mencius Moldbug is a great current example, IMHO, of someone who is not sane but says some surprisingly and amazingly sane things.
"But no one could counterfeit the wonderfully and surprisingly sane parts; they would need to be that sane themselves" is an assertion that P(Sane|Looks Sane) ~ 1, so it seems that this isn't a Bayesian flaw per se.
P(sane things plus crazy things | speaker is saner) P(speaker is saner) = P(speaker is saner | sane things plus crazy things) P(sane things plus crazy things)
The fact that P(sane things plus crazy things | speaker is saner) ≠ P(speaker is saner | sane things plus crazy things) isn't a problem, if you deal with your priors correctly.
I think I misinterpreted your original question as meaning "Why is this problem fundamentally difficult even for Bayesians?", when it was actually, "What's wrong with the reasoning used by the speaker in addressing this problem?"
"For example, someone could grab some of the material from LW, use it without attribution, and mix it with random craziness."
news.ycombinator.com?
One problem is the assumption that being right and novel on some things implies being consistently right/sane. An important feature that separates "insanity" from stupidity is that "insanity" doesn't preclude domain-specific brilliance. Certainly a person being unusually right on some things is evidence for them being consistently right on others, but not overwhelmingly strong evidence.
"Spot the Bayesian problem, anyone?"
Sure. If some parts of the message contain novel correct insight but the rest is incomprehensible, the simple hypothesis that the whole message is correct gets likelihood-privileged over other similarly simple hypotheses.
But I don't think you're talking about the stuff Shalmanese wants to talk about. :-) "Wisdom" isn't knowledge of facts; wisdom is possession of good heuristics. A good heuristic may be easy to apply, but disproportionately hard to prove/justify to someone who hasn't amassed enough experience - which includes younger versions of ourselves. I've certainly adopted a number of behavioral heuristics that younger versions of me would've labeled as obviously wrong. For some of them I can't offer any justification even now, beyond "this works".
This is the shock-level problem. If you let T1, T2, ... be the competing theories, and O be the observations, and you choose Ti by maximizing P(Ti | O), and you do this by choosing the Ti that maximizes P(O | Ti) * P(Ti), then P(O | Ti) can be at most 1; but P(Ti), the prior you assign to theory i, can be arbitrarily low.
In theory, this should be OK. In practice, P(O | Ti) is always near zero, because no theory accounts for all of the observations, and because any particular series of observations is extremely unlikely. Our poor little brains have an underflow error. So in place of P(O | Ti) we put an approximation that is scaled so that P(O | T0), where T0 is our current theory, is pretty large. Given that restriction, there's no way for P(O | Ti) to be large enough to overcome the low prior P(Ti).
This means that there's a maximum degree of dissimilarity between Ti and your current theory T0, beyond which the prior you assign Ti will be so low that you should dismiss it out of hand. "Truth" may lie farther away than that from T0.
(I don't think anyone really thinks this way; so the observed shock level problem must have a non-Bayesian explanation. But one key point, of rescaling priors so your current beliefs look reasonable, may be the same.)
So you need to examine the potentially saner theory a piece at a time. If there's no way to break the new theory up into independent parts, you may be out of luck.
Consider society transitioning from Catholicism in 1200 AD to rationalist materialism. It would have been practically impossible for 1200 AD Catholics to take the better theory one piece at a time and verify it, even if they'd been Bayesians. Even a single key idea of materialism would have shattered their entire worldview. The transition was made only through the noise of the Protestant Reformation, which did not move directly towards the eventual goal, but sideways, in a way that fractured Europe's religious power structure and shook it out of a local minimum.
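A rough numerical sketch of the rescaling problem described above, with invented log-likelihoods and priors (not anyone's actual beliefs): done in log space, the dissimilar theory T1 wins on the evidence, but once likelihoods are rescaled so the current theory T0 looks reasonable and capped at 1, T1 can never overcome its low prior.

```python
import math

# Invented numbers: T1 fits the observations far better than the current theory T0,
# but is assigned a very low prior because it is very dissimilar to T0.
log_likelihood = {"T0": -1000.0, "T1": -950.0}   # ln P(O | Ti); both astronomically small
prior = {"T0": 0.999, "T1": 0.001}

# Correct comparison: maximize log P(O | Ti) + log P(Ti), so underflow never matters.
log_post = {t: log_likelihood[t] + math.log(prior[t]) for t in prior}
print(max(log_post, key=log_post.get))   # T1: an e^50 likelihood advantage swamps the prior

# The heuristic described above: rescale likelihoods so P(O | T0) looks pretty large,
# which caps every likelihood at 1 and throws away T1's advantage.
rescaled = {t: min(1.0, 0.9 * math.exp(log_likelihood[t] - log_likelihood["T0"]))
            for t in prior}
post_approx = {t: rescaled[t] * prior[t] for t in prior}
print(max(post_approx, key=post_approx.get))  # T0: no rescaled likelihood can beat the low prior
```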
The karma score for this much saner person on LW would be low, wouldn't it? He wouldn't be able to post.
"The karma score for this much saner person on LW would be low, wouldn't it? He wouldn't be able to post."
It would reach posting range within a day. Thereafter he would have to be somewhat careful with how he presents his contrarian knowledge and be sure to keep a steady stream of posts on topics that are considered insightful by this audience. If karma balance is still a problem he can start browsing quote encyclopaedias.
Assuming he was that eager to post at all. My point was mainly that one doesn't want to listen to or read a much saner person. One downvotes this person's comments on LW, if you want.
I don't understand why I wouldn't want to listen to a saner person. I thought that was the whole reason I was even here. What am I missing?
I think the difference here is that science is still operating under the same conceptual framework as it was 100 years ago. As a result, scientists from different eras can put themselves into each other's heads and come to mutual agreement.
Sufficiently advanced wisdom, to me, has always meant challenging the very framing of the problem itself.
A bit OT, but it makes me wonder whether the scientific discoveries of the 21st century are likely to appear similarly insane to a scientist of today? Or would some be so bold as to claim that we have crossed a threshold of knowledge and/or immunity to science shock, and there are no surprises lurking out there bad enough to make us suspect insanity?
I am not quite sure what you mean, but usually if you have the ability to recognise a system with some property P, then you can conduct a search through the space of systems for ones that you recognise as having property P.
An exhaustive search may be laborious and slow sometimes, but often you can use optimisation strategies to speed things up.
Here, P would be: "future history elements that appear to be obviously right to someone from 1900".
"Spot the Bayesian problem, anyone?"
Hmm, would this be that you need priors for both the relative frequency of people saner than you, and the relative frequency of monkey-at-typewriter random apparent sanity, before you know whether this is evidence of sanity or insanity?
The above comment is one more piece of evidence that I'm unreliable in real-time, and shouldn't take actions without explicitly rethinking them.
You can require priors in order to make a computation. Your requirement doesn't cause the priors to magically appear; but you still require them.
I think Benquo means that you need enough observations to have reliable priors.
That's right -- the problem isn't that you need priors and don't have them, but that you need those two particular priors, and getting them probably involves enough work to raise your own sanity level quite a bit.
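A small sketch of how much the verdict hinges on those two priors (the frequencies are invented; the sketch assumes a genuinely saner speaker and a monkey-at-typewriter counterfeit are the only two ways to produce such a mix):

```python
# Invented base rates; treats "genuinely saner" and "counterfeit apparent sanity"
# as the only two explanations for a sane-plus-crazy mix.
def p_saner_given_mix(freq_saner, freq_counterfeit,
                      p_mix_given_saner=0.9, p_mix_given_counterfeit=0.9):
    """P(speaker is saner | observed mix), given base rates for the two explanations."""
    num = p_mix_given_saner * freq_saner
    den = num + p_mix_given_counterfeit * freq_counterfeit
    return num / den

print(p_saner_given_mix(freq_saner=1e-4, freq_counterfeit=1e-2))  # ~0.01: probably a counterfeit
print(p_saner_given_mix(freq_saner=1e-2, freq_counterfeit=1e-4))  # ~0.99: probably genuinely saner
```

Whether the mix counts as evidence of sanity or insanity flips entirely on those two base rates.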
In a real-world context, you could also wonder why someone that much "saner" than yourself was talking to you in the first place.
Reply to: Shalmanese's Third Law
From an unpublished story confronting Vinge's Law, written in 2004, as abstracted a bit:
Spot the Bayesian problem, anyone? It's obvious to me today, but not to the me of 2004. Eliezer2004 would have seen the structure of the Bayesian problem the moment I pointed it out to him, but he might not have assigned it the same importance I would without a lot of other background.