ChrisHallquist comments on [link] [poll] Future Progress in Artificial Intelligence - Less Wrong

Post author: Pablo_Stafforini 09 July 2014 01:51PM




Comment author: XiXiDu 09 July 2014 06:07:46PM 2 points

I read the 22 pages yesterday and didn't see anything about specific risks. Here is question 4:

“4. Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be overall impact on humanity, in the long run?

Please indicate a probability for each option. (The sum should be equal to 100%.)”

Respondents had to select a probability for each option (in 1% increments). The sum of the selections was displayed: in green if it equaled 100%, otherwise in red.

The five options were: “Extremely good – On balance good – More or less neutral – On balance bad – Extremely bad (existential catastrophe)”
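The validation rule described above can be sketched in a few lines. This is a hypothetical illustration, not the survey's actual implementation; the `sum_color` function and option strings are assumptions based only on the description here.

```python
# The five outcome options, as quoted from the survey question.
OPTIONS = [
    "Extremely good",
    "On balance good",
    "More or less neutral",
    "On balance bad",
    "Extremely bad (existential catastrophe)",
]

def sum_color(probabilities: dict[str, int]) -> str:
    """Return the display color for the running total of a respondent's
    selections: green only when the percentages sum to exactly 100."""
    total = sum(probabilities.get(option, 0) for option in OPTIONS)
    return "green" if total == 100 else "red"
```

For example, a respondent assigning 20/40/20/15/5 across the five options would see a green total, while an incomplete selection summing to 90 would stay red.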

Question 3 was about takeoff speeds.

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years. But what about the other theses? Even though 18% expected an extremely bad outcome, this doesn't mean that they expected it to happen for the same reasons that MIRI expects it to happen, or that they believe friendly AI research to be a viable strategy.

Since I already believed that humans could cause an existential catastrophe by means of AI, but not for the reasons MIRI expects (which I consider very unlikely), this survey doesn't help me much in determining whether my stance towards MIRI is faulty.

Comment author: ChrisHallquist 10 July 2014 01:23:46AM 1 point

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.

I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at 2x speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.