timtyler comments on AI indifference through utility manipulation - Less Wrong

Post author: Stuart_Armstrong 02 September 2010 05:06PM

Comment author: timtyler 03 September 2010 08:21:51PM · 1 point

> But it seems to me that if you were able to calculate the utility of world-outcomes modularly, then you wouldn't need an AI in the first place; you would instead build an Oracle, give it your possible actions as input, and select the action with the greatest utility.
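For concreteness, the quoted proposal amounts to an argmax over human-supplied candidate actions. Here is a minimal sketch, assuming a hypothetical `oracle_utility` scoring function; the action names and utility values are illustrative placeholders, not anything from the post:

```python
def oracle_utility(action: str) -> float:
    """Hypothetical stand-in for the Oracle's modular evaluation of
    the world-outcome that would result from taking `action`."""
    scores = {"do_nothing": 0.0, "plant_crops": 2.5, "build_dam": 1.8}
    return scores.get(action, float("-inf"))

def select_action(candidate_actions):
    # Query the Oracle for each human-proposed candidate and return
    # the one with the greatest utility.
    return max(candidate_actions, key=oracle_utility)

print(select_action(["do_nothing", "plant_crops", "build_dam"]))
# -> "plant_crops"
```

In this setup the machine only scores options; a human supplies the candidates and carries out the winner, which is precisely the restriction the reply below takes issue with.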

That sounds as though it is just an intelligent machine which has been crippled by being forced to act through a human body.

You suggest that would be better - but how?