Giles comments on Muehlhauser-Goertzel Dialogue, Part 1 - Less Wrong Discussion
I don't quite understand Goertzel's position on the "Scary Idea". He appears to accept that
"(2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an "intelligence explosion," and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it."
and even goes as far as to say that (3) is "almost obvious".
(btw I'm not entirely sold on this particular way of framing the argument; I'm just trying to understand what Goertzel is actually saying.)