byrnema comments on Advice for AI makers - Less Wrong

7 Post author: Stuart_Armstrong 14 January 2010 11:32AM


Comment author: byrnema 14 January 2010 03:27:43PM *  1 point [-]

There is a difference between giving something negative utility and giving it decreasing marginal utility.

I was considering the least convenient argument, the one I imagined would result in the least aggressive AI. (I should explain here that even a terminal utility of 0 for the resource itself would not result in 0 utility for that resource, because the resource would still have some instrumental value for achieving things of value.)

(Above edited because I don't think I was understood.)

But I think the logical problem identified with assigning utility to an instrumental value remains either way.

Comment author: Peter_de_Blanc 14 January 2010 08:30:00PM *  0 points [-]

You pretty much have to guess about the marginal value of resources. But let's say the AI's utility function is "10^10th root of # of paperclips in universe." Then it probably satisfies the criterion.

EDIT: even better would be U = 1 if the universe contains at least one paperclip, otherwise 0.
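The two utility functions above can be sketched numerically; a minimal illustration (the function names here are my own, not from the comment) of why the 10^10th-root utility has sharply diminishing marginal value per paperclip, while the threshold utility assigns zero marginal value to every paperclip after the first:

```python
def root_utility(n, k=10**10):
    """Utility = k-th root of the number of paperclips in the universe."""
    return n ** (1.0 / k)

def threshold_utility(n):
    """Utility = 1 if the universe contains at least one paperclip, else 0."""
    return 1 if n >= 1 else 0

def marginal(u, n):
    """Utility gained from one additional paperclip at count n."""
    return u(n + 1) - u(n)

# The root utility's marginal value shrinks as paperclips accumulate,
# so the AI has little incentive to aggressively acquire more resources.
print(marginal(root_utility, 1))     # tiny
print(marginal(root_utility, 1000))  # much tinier

# The threshold utility is satisfied after a single paperclip:
print(marginal(threshold_utility, 1))  # 0
```

Note that at very large counts the marginal root utility falls below floating-point resolution, which is itself a reasonable picture of "negligible marginal value."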