Dmytry comments on Yet another safe oracle AI proposal - Less Wrong Discussion

2 Post author: jacobt 26 February 2012 11:45PM


Comment author: Dmytry 28 February 2012 02:55:02PM *  2 points

With regard to the AI not caring about the real world: H. sapiens, for example, cares about the 'outside' world and wants to maximize the number of paperclips, err, souls in heaven, without ever having been given any cue that an outside even exists. It seems we assume that the AI is some science-fiction robot dude who acts all logical, doesn't act creatively, and is utterly sane. Sanity is NOT what you tend to get from hill climbing. You get 'whatever works'.
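The "whatever works" point can be illustrated with a minimal sketch (this example is mine, not from the thread; the function names and the toy objective are hypothetical): a greedy hill climber accepts any mutation that improves the score, with no model of why it works.

```python
import random

random.seed(0)  # deterministic for the example

def hill_climb(score, candidate, steps=1000, step_size=0.1):
    """Greedy hill climbing over a list of floats.

    Keeps any random mutation that scores better -- it selects
    'whatever works', with no notion of sanity or intent.
    """
    best = candidate[:]
    best_score = score(best)
    for _ in range(steps):
        # Propose a random local mutation of the current best.
        trial = [x + random.uniform(-step_size, step_size) for x in best]
        trial_score = score(trial)
        if trial_score > best_score:  # accept anything that works
            best, best_score = trial, trial_score
    return best, best_score

# Toy objective: maximize -(x - 3)^2, whose optimum is at x = 3.
best, best_score = hill_climb(lambda v: -(v[0] - 3) ** 2, [0.0])
```

The climber reliably lands near x = 3, but nothing in the loop represents the goal itself; only the score ever mattered.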

Comment author: jacobt 01 March 2012 12:05:29AM 0 points

That's a good point. There might be some kind of "goal drift": programs that have goals other than optimization but that nevertheless lead to good optimization. I don't know how likely this is, especially given that the goal "just solve the damn problems" is simple and leads to good optimization ability.