whpearson comments on Contrarianism and reference class forecasting - Less Wrong

Post author: taw 25 November 2009 07:41PM




Comment author: Eliezer_Yudkowsky 27 November 2009 12:31:42AM 16 points

'Tis remarkable how many disputes between would-be rationalists end in a game of reference class tennis. I suspect this is because our beliefs are partially driven by "intuition" (i.e. subcognitive black boxes giving us advice) (not that there's anything wrong with that), and when it comes time to try and share our intuition with other minds, we try to point to cases that "look similar", or the examples whereby our brain learned to pattern-recognize and judge "that sort" of case.

My own cached rule for such cases is to try and look inside the thing itself, rather than comparing it to other things - to drop into causal analysis, rather than trying to hit the ball back into your own preferred concept boundary of similar things. Focus on the object level, rather than the meta; and try to argue less by similarity, for the universe itself is not driven by Similarity and Contagion, after all.

Comment author: whpearson 27 November 2009 01:16:37AM 3 points

How should we unpack black boxes we don't have yet? For example, a non-neural, language-capable, self-maintaining, goal-oriented system.*

We have a surfeit of potential systems (with different capabilities for self-inspection and self-modification), with no way to test whether they will fall into the above category, or how big the category actually is.

*I'm trying to unpack "AGI" here somewhat.