timtyler comments on Should I believe what the SIAI claims? - Less Wrong

Post author: XiXiDu, 12 August 2010 02:33PM

Comment author: timtyler, 31 December 2010 11:52:40AM

I just don't see that AGI implies self-improvement beyond learning what it can while staying within the scope of its resources. You'd have to deliberately implement such an intention.

The usual citation given in this area is Omohundro's paper "The Basic AI Drives".

It suggests that open-ended goal-directed systems will tend to improve themselves - and to grab resources to help them fulfill their goals - even if their goals are superficially rather innocent-looking and make no mention of any such thing.

The paper starts out like this:

  1. AIs will want to self-improve - One kind of action a system can take is to alter either its own software or its own physical structure. Some of these changes would be very damaging to the system and cause it to no longer meet its goals. But some changes would enable it to reach its goals more effectively over its entire future. Because they last forever, these kinds of self-changes can provide huge benefits to a system. Systems will therefore be highly motivated to discover them and to make them happen. If they do not have good models of themselves, they will be strongly motivated to create them through learning and study. Thus almost all AIs will have drives towards both greater self-knowledge and self-improvement.
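
To make the argument concrete, here is a toy sketch of my own (not from the paper, and deliberately simplistic): a brute-force planner whose goal function scores nothing but widget output. "self_improve" and "grab_resources" appear nowhere in the goal, yet the optimal plan spends most of its steps on them, because each one multiplies the payoff of every later "work" step.

    # Toy illustration (mine, not Omohundro's): instrumental drives emerging
    # from plain payoff maximization over a tiny action set.
    from itertools import product

    HORIZON = 6

    def widgets(plan):
        capability, resources, output = 1.0, 1.0, 0.0
        for action in plan:
            if action == "work":              # directly pursues the stated goal
                output += capability * resources
            elif action == "self_improve":    # alter own software/structure
                capability += 1.0
            elif action == "grab_resources":  # acquire external resources
                resources += 1.0
        return output

    # Exhaustively search all 3^6 plans for the one with the most widgets.
    best = max(product(["work", "self_improve", "grab_resources"], repeat=HORIZON),
               key=widgets)
    print(best)           # improvement/acquisition steps come first
    print(widgets(best))  # 18.0, versus 6.0 for working every step

Nothing in the scoring function mentions self-modification or resource acquisition; both fall out of the search because they are instrumentally useful for the stated goal. That is the paper's claim in miniature.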