Eliezer_Yudkowsky comments on Contrarianism and reference class forecasting - Less Wrong

Post author: taw 25 November 2009 07:41PM




Comment author: Eliezer_Yudkowsky 27 November 2009 12:31:42AM 16 points

'Tis remarkable how many disputes between would-be rationalists end in a game of reference class tennis. I suspect this is because our beliefs are partially driven by "intuition" (i.e. subcognitive black boxes giving us advice) (not that there's anything wrong with that), and when it comes time to try and share our intuition with other minds, we try to point to cases that "look similar", or the examples whereby our brain learned to pattern-recognize and judge "that sort" of case.

My own cached rule for such cases is to try and look inside the thing itself, rather than comparing it to other things - to drop into causal analysis, rather than trying to hit the ball back into your own preferred concept boundary of similar things. Focus on the object level, rather than the meta; and try to argue less by similarity, for the universe itself is not driven by Similarity and Contagion, after all.

Comment author: whpearson 27 November 2009 01:16:37AM 3 points

How should we unpack black boxes we don't have yet? For example, a non-neural, language-capable, self-maintaining, goal-oriented system*.

We have a surfeit of potential systems (with different capabilities of self-inspection and self-modification) with no way to test whether they will fall into the above category or how big the category actually is.

*I'm trying to unpack AGI here somewhat

Comment author: CronoDAS 27 November 2009 01:33:12AM *  4 points

Sometimes "looking at the thing itself" is too costly or too difficult. How can the proverbial "bright sixteen-year-old" sitting in a high school classroom figure out the truth about, say, the number of protons in an atom of gold, without having to accept the authority of his textbooks and instructors? If there were a bunch of well-funded nutcases dedicated to arguing that gold atoms have seventy-eight protons instead of seventy-nine, the only way he could really judge who's correct would be to judge the relative credibility of the people presenting the evidence. After all, one side's evidence could be completely fraudulent and he'd have no way of knowing it.

Far too often, reference classes and meta-level discussions are all we have.

Comment author: Eliezer_Yudkowsky 27 November 2009 01:46:26AM 6 points

Then let us try to figure out whose authority is to be trusted about experimental results, and work from there. Cases where you can reduce the dispute to a direct conflict about easily observable facts are much more likely to have one dramatically more trustworthy party.

Comment author: taw 27 November 2009 10:17:53AM 2 points

I estimate that even fairly bad reference class / outside view analysis is still far more reliable than the best inside view that can be realistically expected. People are just spectacularly bad at inside view analysis, and reference class analysis puts hard boundaries within which truth is almost always found.
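The "hard boundaries" taw describes are the core of reference class forecasting: instead of building a bottom-up inside-view estimate, take the empirical distribution of outcomes from similar past cases and read the prediction off its quantiles. A minimal sketch, using hypothetical cost-overrun data purely for illustration:

```python
# Reference class forecasting, sketched: scale an inside-view estimate by the
# empirical distribution of outcomes in a reference class of similar past cases.
# The overrun ratios below are hypothetical, chosen only to illustrate the method.

import statistics

# Hypothetical cost-overrun ratios (actual cost / estimated cost)
# observed for a reference class of past projects.
reference_class = [1.1, 1.4, 0.9, 2.0, 1.3, 1.6, 1.2, 1.8, 1.0, 1.5]

def outside_view_forecast(inside_view_estimate, reference_outcomes):
    """Adjust an inside-view estimate by the median outcome of the
    reference class, and bound it with the empirical deciles."""
    deciles = statistics.quantiles(reference_outcomes, n=10)
    median = statistics.median(reference_outcomes)
    return {
        "point": inside_view_estimate * median,       # debiased estimate
        "low": inside_view_estimate * deciles[0],     # ~10th percentile
        "high": inside_view_estimate * deciles[-1],   # ~90th percentile
    }

# An inside-view estimate of 100 units gets pulled up toward the
# reference class's typical overrun, with a plausible range around it.
forecast = outside_view_forecast(100.0, reference_class)
print(forecast)
```

The point is structural: the low/high bounds come entirely from observed outcomes of similar cases, not from any causal model of the project at hand, which is why even a crude reference class can constrain an estimate that an unaided inside view would not.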

Comment author: Eliezer_Yudkowsky 27 November 2009 11:56:40AM 0 points
Comment author: taw 27 November 2009 05:54:14PM 3 points

Is there any evidence that in cases where neither "outside view" nor "strong inside view" can be applied, "weak inside view" is at least considerably better than pure chance? I have strong doubts about it.

Comment author: RobinHanson 29 November 2009 01:26:18AM 6 points

Yes, it would be good to have a clearer data set of topics and dates, the views suggested by different styles of analysis, and what we think now about who was right. I'm pretty skeptical about this weak inside view claim, but will defer to more systematic data. Of course, that is me suggesting we take an outside view in evaluating this claim about which view is more reliable.

Comment author: RobinZ 27 November 2009 03:17:10PM *  5 points

If I may attempt to summarize the link: Eliezer maintains that, while the quantitative inside view is likely to fail in cases where the underlying causes are not understood or planning biases are likely to be in effect, the outside view cannot be expected to work when the underlying causes undergo sufficiently severe alterations. Rather, he proposes what he calls the "weak inside view": an analysis of underlying causes that notes the most extreme changes and states their consequences qualitatively.