aaronsw comments on Reply to Holden on 'Tool AI' - Less Wrong

94 Post author: Eliezer_Yudkowsky 12 June 2012 06:00PM




Comment author: Eliezer_Yudkowsky 19 July 2012 05:52:40PM 10 points

And atheism is a religion, and bald is a hair color.

The three distinguishing characteristics of "reference class tennis" are:

(1) There are many possible reference classes you could pick, and everyone engaging in the tennis game has their own favorite, which is different from everyone else's.

(2) The actual thing is obviously more dissimilar to all the cited previous elements of the so-called reference class than those elements are similar to each other (if they even form a natural category at all, rather than having been picked out retrospectively based on similarity of outcome to the preferred conclusion).

(3) The citer of the reference class says it with a cognitive-traffic-signal quality that attempts to shut down any attempt to counterargue the analogy, because "it always happens like that" or because we have so many alleged "examples" of the "same outcome" occurring. (For Hansonian rationalists, this is accompanied by a claim that what you are doing is the "outside view" — see points 1 and 2 for why it's not — and that it would be bad rationality to think about the "individual details".)

I have also termed this Argument by Greek Analogy after Socrates's attempt to argue that, since the Sun appears the next day after setting, souls must be immortal.

Comment author: aaronsw 04 August 2012 10:37:44AM -1 points

Then it does seem like your AI arguments are playing reference class tennis with a reference class of "conscious beings". For me, the force of the Tool AI argument is that there's no reason to assume an AGI will behave like a sci-fi character. For example, if something like On Intelligence turns out to be true, I think the algorithms it describes will be quite generally intelligent but hardly capable of rampaging through the countryside. Such a system would be much more like Holden's Tool AI: you'd feed it data, it would make predictions, and you could choose whether to use those predictions.

(This is, naturally, the view of that school of AI implementers. Scott Brown: "People often seem to conflate having intelligence with having volition. Intelligence without volition is just information.")