whpearson comments on In defense of the outside view - Less Wrong

Post author: cousin_it 15 January 2010 11:01AM


Comment author: whpearson 15 January 2010 11:56:42AM * 0 points

Hmm, I wonder what would be appropriate outside views to give a good estimate of the dangers of AI?

Number of species made extinct by competition rather than natural disaster? (Assume AI is something like a new species.)

How well humans can control and predict technologies?

Comment author: cousin_it 15 January 2010 12:05:39PM * 2 points

I'm willing to believe that if AI-roughly-as-described-by-Eliezer gets developed, it will be able to exterminate humanity, because we apparently have already invented weapons that can exterminate humanity. As for the chance of such AI getting developed at all, why not apply the usual reference classes of futuristic technology?

ETA: or, more specifically, futuristic software.

Comment author: whpearson 15 January 2010 12:41:55PM * 3 points

Judging from people's previous predictions about when we will get futuristic software, I am quite happy to assign a low probability to their being on the right track*. That is why I am interested in ways of ruling out approaches experimentally, if at all possible.

However, even if we rule out their approaches, we still don't know the chances of non-Eliezer-like (or Goertzelian, etc.) futuristic software wiping out humanity. So we are back to square one.

*Any work on possible AIs that I want to explore, I mainly view as an attempt to rule out a possible angle of implementation. And I consider AI to be part of a multi-generational, humanity-wide effort to understand the human brain.