CuSithBell comments on The Friendly AI Game - Less Wrong

38 Post author: bentarm 15 March 2011 04:45PM




Comment author: CuSithBell 15 March 2011 06:16:16PM 0 points

Isn't utility normally integrated over time? Supposing this AI just wants to have this integer set to INT_MAX at some point, and nothing in the future can change that: it escapes, discovers the maximizer, sends a subroutine back into the sim to maximize utility, then invents ennui and tiles the universe with bad poetry.

(Alternately, what Kaj said.)

Comment author: benelliott 15 March 2011 09:16:29PM 1 point

Isn't utility normally integrated over time?

It certainly doesn't have to be. In fact, the mathematical treatment of utility in decision theory and game theory tends to define utility functions over all possible outcomes, not over all possible instants of time, so each possible future gets a single utility value for the whole thing; no integration required.

You could easily set up a utility function defined over moments if you wanted to, and then integrate it to get a second function over outcomes, but that approach is perhaps not ideal, since the second function may end up outputting infinity some of the time.
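(A minimal Python sketch of that last point, not from the thread itself: a utility function defined directly over outcomes assigns one number to each whole future, whereas summing a per-moment utility over an ever-longer future can diverge unless you discount. The dictionary keys and the 0.9 discount factor are illustrative assumptions, not anything benelliott specified.)

```python
# Utility over outcomes: each whole possible future gets a single number.
# (Hypothetical outcome labels, for illustration only.)
outcome_utility = {
    "counter_reaches_INT_MAX": 1.0,
    "counter_never_set": 0.0,
}

# Utility over moments, "integrated" (summed) out to some horizon.
def summed_utility(per_moment, horizon):
    """Sum a per-moment utility function over timesteps 0..horizon-1."""
    return sum(per_moment(t) for t in range(horizon))

constant = lambda t: 1.0          # 1 util per moment, forever
discounted = lambda t: 0.9 ** t   # geometrically discounted utility

# The undiscounted sum grows without bound as the horizon grows...
print(summed_utility(constant, 1000))               # 1000.0
# ...while the discounted sum converges toward 1 / (1 - 0.9) = 10.
print(round(summed_utility(discounted, 1000), 6))   # 10.0
```

This is the standard reason per-moment formulations add a discount factor: it keeps the induced utility-over-outcomes finite.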

Comment author: CuSithBell 15 March 2011 09:20:49PM 1 point

Cool, thanks for the explanation.