XiXiDu comments on What I would like the SIAI to publish - Less Wrong

27 Post author: XiXiDu 01 November 2010 02:07PM




Comment author: XiXiDu 02 November 2010 06:42:41PM 0 points

Yes, as I said, you seem to assume that it is very likely to succeed on all the hard problems yet fail on the scope boundary. The scary idea states that it is likely that if we create self-improving AI it will consume humanity. I believe that is a rather unlikely outcome and haven't yet seen any good reason to believe otherwise.

Comment author: pjeby 02 November 2010 06:58:15PM 3 points

The scary idea states that it is likely that if we create self-improving AI it will consume humanity.

No, it states that we run the risk of accidentally making something that will consume (or exterminate, subvert, betray, make miserable, or otherwise Do Bad Things to) humanity, something that looks perfectly safe and correct right up until it's too late to do anything about it... and that this is the default case: the case if we don't do something extraordinary to prevent it.

This doesn't require self-improvement, and it doesn't require wiping out humanity. It just requires normal, every-day human error.

Comment author: timtyler 02 November 2010 09:36:06PM 2 points

Here is Ben's phrasing:

SIAI's "Scary Idea", which is the idea that: progressing toward advanced AGI without a design for "provably non-dangerous AGI" (or something closely analogous, often called "Friendly AI" in SIAI lingo) is highly likely to lead to an involuntary end for the human race.