ciphergoth comments on Reply to Holden on The Singularity Institute - Less Wrong

46 Post author: lukeprog 10 July 2012 11:20PM




Comment author: ciphergoth 11 July 2012 07:48:24PM 12 points

I am coming to the conclusion that "extraordinary claims require extraordinary evidence" is just bad advice, precisely because it causes people to conflate large consequences and prior improbability. People are fond of saying it about cryonics, for example.

Comment author: Viliam_Bur 12 July 2012 12:29:21PM 6 points

We need two new versions of the advice, to satisfy everyone.

Version for scientists: "improbable claims require extraordinary evidence".

Version for politicians: "inconvenient claims require extraordinary evidence".

Comment author: fubarobfusco 11 July 2012 08:15:31PM 13 points

At least sometimes, people may say "extraordinary claims require extraordinary evidence" when they mean "your large novel claim has set off my fraud risk detector; please show me how you're not a scam."

In other words, the caution being expressed is not about prior probabilities in the natural world, but rather the intentions and morals of the claimant.

Comment author: private_messaging 12 July 2012 12:46:47PM *  0 points

Well, consider the strategic point of view. Suppose that a system (humans) is known for its poor performance at evaluating claims without performing direct experimentation; there is a long, long history of such failures.

Consider also that a false high-impact claim can ruin this system's ability to perform its survival function, again with a long history of such events; the damage is proportional to the claimed impact. (The Mayans are a good example, killing people so that the sun would rise tomorrow; great utilitarian rationalists they were, believing their reasoning was perfect enough to warrant such action. Note that donating to a wrong charity instead of a right one kills people.)

When we anticipate that a huge percentage of claims will be false, we can build the system to require evidence such that, if the claim were false, the system would be in a small-probability world (i.e. require that, for a claim, evidence was collected so that p(evidence | ~claim)/p(evidence | claim) is low), so that the system, once deployed, falls off cliffs less often. The required strength of the evidence then increases with the impact of the claim.
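The condition above can be made concrete with Bayes' rule in odds form: requiring p(evidence | ~claim)/p(evidence | claim) to be low is the same as requiring a high likelihood ratio. A minimal sketch, with purely hypothetical numbers chosen to illustrate the scaling (the `posterior_odds` helper is mine, not from the comment):

```python
# Sketch: why lower-prior (higher-impact, per the comment's policy)
# claims need stronger evidence. All numbers are hypothetical.

def posterior_odds(prior_odds, p_e_given_claim, p_e_given_not_claim):
    """Posterior odds = prior odds * likelihood ratio p(E|claim)/p(E|~claim).

    A low p(evidence | ~claim)/p(evidence | claim) is exactly a high
    likelihood ratio here.
    """
    return prior_odds * (p_e_given_claim / p_e_given_not_claim)

# Mundane claim, prior odds ~1:1 - modest evidence (likelihood ratio 10)
# already leaves the claim well supported.
print(posterior_odds(1.0, 0.5, 0.05))   # ~10: favoured

# High-impact claim, prior odds 1:10000 - the same evidence leaves the
# claim very improbable.
print(posterior_odds(1e-4, 0.5, 0.05))  # ~0.001: still disfavoured

# Only when p(evidence | ~claim) is pushed far lower does the posterior
# finally favour the claim.
print(posterior_odds(1e-4, 0.5, 4e-5))  # ~1.25: favoured
```

The asymmetry is the whole point: the same piece of evidence that settles a mundane claim barely dents the odds against an extraordinary one.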

It is not an ideal strategy, but it is one that works given the limitations. There are other strategies, and it is not straightforward to improve performance (while it is easy to degrade performance by making idealized implicit assumptions).