Lumifer comments on Stupid Questions June 2015 - Less Wrong

5 points · Post author: Gondolinian 31 May 2015 02:14AM

Comment author: chaosmage 03 June 2015 05:17:18PM *  0 points

I guess the difference is that disutility has a lower bound: 0. So there's a point where an expected disutility minimizer can actually stop, and it is always trying to get to that point.

It seems to me that expected utility maximizers can never logically stop, because utility has no upper bound. Am I wrong?

This inability to stop is a big part of why expected utility maximizers are creepy. Am I wrong?

I'm not even sure two utility maximizers can coexist peacefully at all. Two disutility minimizers could certainly get along unless their disutility functions overlap in specific ways.

Of course minimizing disutility, like maximizing utility, is extremely broad and most tasks could probably be described as either - including ones that go spectacularly wrong.

My stupid question is whether I'm overlooking something here. This inherent drive towards a point where inaction is okay seems like a great trait for future AIs to have, and yet everyone keeps talking about maximizing expected utility.

Edit: clarity.
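The bounded-vs-unbounded distinction above can be sketched in a few lines of Python. (This toy world and all the names in it are illustrative, not anything from the thread.) A minimizer of a nonnegative disutility has a natural halting test, `disutility(state) == 0`; an unbounded utility maximizer has no state that satisfies any analogous "done" condition, so its loop has no principled exit.

```python
def minimize_disutility(disutility, actions, state, max_steps=1000):
    """A disutility minimizer can halt: 0 is a lower bound it can reach."""
    for _ in range(max_steps):
        if disutility(state) == 0:
            return state  # nothing left to minimize; inaction is acceptable
        # greedily take the action that most reduces disutility
        state = min((act(state) for act in actions), key=disutility)
    return state

# Toy world: the state is an integer, disutility is its distance from 0,
# and the available actions are "decrement" and "increment".
actions = [lambda s: s - 1, lambda s: s + 1]
final = minimize_disutility(abs, actions, 7)
print(final)  # 0 -- the minimizer reached its lower bound and stopped
```

A utility maximizer in the same toy world would be the identical loop with `max(...)` and no `if`-test: since `utility(state)` can always be pushed higher, no state ever licenses stopping, which is the asymmetry the comment is pointing at.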

Comment author: Lumifer 03 June 2015 05:31:53PM -2 points

So there's a point where an expected disutility minimizer can actually stop, and it is always trying to get to that point.

Getting to this point is trivially easy: you do absolutely nothing.