polymathwannabe comments on Stupid Questions June 2015 - Less Wrong Discussion
I guess the difference is that disutility has a lower bound: 0. So there's a point where an expected disutility minimizer can actually stop, and it is always trying to get to that point.
It seems to me an expected utility maximizer can never logically stop, because utility has no upper bound. This inability to stop is a big part of why expected utility maximizers feel creepy to me. Am I wrong?
I'm not even sure two utility maximizers can coexist peacefully at all. Two disutility minimizers could certainly get along unless their disutility functions overlap in specific ways.
Of course minimizing disutility, like maximizing utility, is an extremely broad goal, and most tasks could probably be described as either, including ones that go spectacularly wrong.
My stupid question is whether I'm overlooking something here. Because this inherent drive toward a point where inaction is okay seems like a great trait for future AIs to have, and yet everybody keeps talking about maximizing expected utility.
Edit: clarity.
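The contrast I'm gesturing at can be sketched in code. This is a toy illustration only, with hypothetical utility functions I made up; the point is just that a disutility minimizer has a natural internal stopping condition (disutility reaches its lower bound of 0), while a utility maximizer only stops when something external cuts it off:

```python
# Toy sketch: a disutility minimizer can halt on its own once
# disutility hits 0; a utility maximizer has no such point,
# because utility is unbounded above. All names are hypothetical.

def disutility(state):
    # Hypothetical measure: number of outstanding problems.
    return len(state["problems"])

def run_disutility_minimizer(state, max_steps=100):
    steps = 0
    while disutility(state) > 0 and steps < max_steps:
        state["problems"].pop()  # fix one problem per step
        steps += 1
    # Halts as soon as the lower bound (0) is reached.
    return steps, disutility(state)

def run_utility_maximizer(state, max_steps=100):
    steps = 0
    while steps < max_steps:      # no internal stopping condition:
        state["utility"] += 1     # there is always a higher-utility state
        steps += 1
    return steps, state["utility"]

steps, residual = run_disutility_minimizer({"problems": [1, 2, 3]})
print(steps, residual)  # 3 0  -- stopped by its own goal

steps, total = run_utility_maximizer({"utility": 0})
print(steps, total)     # 100 100  -- stopped only by the external cap
```

The minimizer stops after three steps because its goal is satisfied; the maximizer runs until the arbitrary `max_steps` cap, which stands in for whatever external constraint finally halts it.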
Getting to this point is trivially easy: you do absolutely nothing.