Warrigal comments on Evaluating the feasibility of SI's plan - Less Wrong

Post author: JoshuaFox 10 January 2013 08:17AM




Comment author: [deleted] 14 January 2013 03:55:49AM 0 points

So you believe that a non-self-improving AI could not go foom?

Comment author: timtyler 14 January 2013 11:57:34AM 1 point

The short answer is "yes" - though this is more a matter of the definition of the terms than a "belief".

In theory, you could have System A improving System B, which improves System C, which in turn improves System A. No individual system is "self-improving", though there's a good case for counting the whole composite system as self-improving.
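The three-system loop can be sketched as a toy simulation. This is purely my illustration of the structure, not anything from the thread: the scalar "capability" and the multiplicative growth rule are invented assumptions, chosen only to show that the composite can grow even though no system modifies itself.

```python
def run_cycle(capabilities, rounds, gain=0.1):
    """Each round, system i boosts system (i+1) mod n.

    Hypothetical dynamics: a system multiplies the *next* system's
    capability by a factor proportional to its own capability.
    No system ever touches its own entry.
    """
    caps = list(capabilities)
    n = len(caps)
    for _ in range(rounds):
        for i in range(n):
            caps[(i + 1) % n] *= 1 + gain * caps[i]
    return caps

# Three systems, A -> B -> C -> A, none self-improving in isolation.
start = [1.0, 1.0, 1.0]
after = run_cycle(start, rounds=5)
print(after)
```

Under these (made-up) dynamics every capability ends up above its starting value, which is the sense in which the composite loop counts as "self-improving" even though each component only ever improves a different component.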

Comment author: [deleted] 15 January 2013 02:13:36AM 0 points

I guess I feel like the entire concept is too nebulous to discuss meaningfully.