Giles comments on Muehlhauser-Goertzel Dialogue, Part 1

Post author: lukeprog 16 March 2012 05:12PM


Comment author: Giles 17 March 2012 07:16:58PM 5 points

I don't quite understand Goertzel's position on the "big scary idea". He appears to accept that

"(2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an "intelligence explosion," and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it."

and even goes so far as to say that (3) is "almost obvious".

  • Does he believe he understands the issues well enough to be almost certain that his particular model of AI will trigger the "good" kind of intelligence explosion?
  • Or does he accept that there's a significant probability his project might "destroy everything we value", but not understand why anyone might be alarmed by this?
  • Or does he think that someone is going to build a human-level AI anyway, and that his has the best chance of producing a good intelligence explosion rather than a bad one?
  • Or something else that doesn't constitute a big scary idea?

(btw I'm not entirely sold on this particular way of framing the argument; I'm just trying to understand what Goertzel is actually saying)