timtyler comments on Evaluating the feasibility of SI's plan - Less Wrong

Post author: JoshuaFox, 10 January 2013 08:17AM


Comment author: timtyler, 13 January 2013 02:19:15AM, 2 points

A non-self-modifying AI wouldn't have any of the above problems. It would, of course, have some new problems. If it encounters a bug in itself, it won't be able to fix itself (though it may be able to report the bug). The only way it would be able to increase its own intelligence is by improving the data it operates on. If the "data it operates on" includes a database of useful reasoning methods, then I don't see how this would be a problem in practice.
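
A minimal sketch of what that architecture might look like, with every name here hypothetical rather than anything from the comment: the engine's own code never changes, and the only route to higher capability is growing the database of methods it treats as data.

    # Hypothetical sketch: a non-self-modifying engine whose only avenue
    # of improvement is the data it operates on -- a growing library of
    # reasoning methods stored as plain data.

    def crude_guess(problem):
        return sum(problem) / len(problem)          # weak baseline method

    def better_estimate(problem):
        return sorted(problem)[len(problem) // 2]   # a later, better heuristic

    class FixedEngine:
        """Non-self-modifying: nothing here ever rewrites the engine itself."""

        def __init__(self):
            self.method_db = [crude_guess]          # the "data it operates on"

        def solve(self, problem):
            # Apply every known method; the engine's own logic stays constant.
            return [m(problem) for m in self.method_db]

        def add_method(self, method):
            # The only route to greater capability: grow the database.
            self.method_db.append(method)

    engine = FixedEngine()
    print(engine.solve([1, 2, 10]))       # only the crude baseline
    engine.add_method(better_estimate)    # improved data, unchanged code
    print(engine.solve([1, 2, 10]))       # now also the better method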

The problem is that it would probably be overtaken, and then left behind, by all-machine self-improving systems. If a system is safe but loses control over its own future, its safety becomes a worthless feature.

Comment author: [deleted], 14 January 2013 03:55:49AM, 0 points

So you believe that a non-self-improving AI could not go foom?

Comment author: timtyler, 14 January 2013 11:57:34AM, 1 point

The short answer is "yes" - though this is more a matter of how the terms are defined than a "belief".

In theory, you could have System A improving System B which improves System C which improves System A. No individual system is "self-improving" (though there's a good case for the whole composite system counting as being "self-improving").
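
A toy illustration of that loop (hypothetical numbers, not a real AI design): model each system as a single "skill" value, and have each one improve only the next system in the ring. No system ever touches itself, yet the composite A -> B -> C -> A cycle compounds every round.

    # Toy sketch: three systems, each improving the *next* one in a ring.
    skills = {"A": 1.0, "B": 1.0, "C": 1.0}
    ring = [("A", "B"), ("B", "C"), ("C", "A")]   # improver -> target

    for round_number in range(5):
        for improver, target in ring:
            # The boost a system confers scales with its own skill, so
            # gains compound across rounds without any self-modification.
            skills[target] += 0.1 * skills[improver]
        print(round_number, skills)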

Comment author: [deleted], 15 January 2013 02:13:36AM, 0 points

I guess I feel like the entire concept is too nebulous to really discuss meaningfully.