Stuart_Armstrong comments on Satisficers want to become maximisers - Less Wrong

Post author: Stuart_Armstrong 21 October 2011 04:27PM




Comment author: Stuart_Armstrong 21 October 2011 08:43:26PM (2 points)

Interesting idea, but I think it reduces to the single case. If you want to satisfice, say, E(U1) > N1 and E(U2) > N2, then you could set U3 = min(U1 - N1, U2 - N2) and satisfice E(U3) > 0. (Since min(U1 - N1, U2 - N2) ≤ U1 - N1 pointwise, E(U3) > 0 guarantees E(U1) > N1, and likewise for U2.)
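A minimal sketch of the min-trick, using hypothetical toy utility functions and thresholds (all names and numbers here are illustrative assumptions, not from the post): meeting the combined target E(U3) > 0 forces both original targets to be met, because min(a, b) ≤ a and min(a, b) ≤ b pointwise.

```python
import random

def expectation(u, samples):
    """Monte Carlo estimate of E(u) over sampled outcomes."""
    return sum(u(s) for s in samples) / len(samples)

# Hypothetical satisficing thresholds and utility functions.
N1, N2 = 5.0, 3.0
u1 = lambda x: 2.0 * x        # assumed utility function U1
u2 = lambda x: x + 1.0        # assumed utility function U2
u3 = lambda x: min(u1(x) - N1, u2(x) - N2)  # combined utility U3

random.seed(0)
samples = [random.uniform(3, 6) for _ in range(10_000)]

e1 = expectation(u1, samples)
e2 = expectation(u2, samples)
e3 = expectation(u3, samples)

# If the combined target E(U3) > 0 is satisfied, then each original
# target must be satisfied too, since E(U3) <= E(Ui - Ni) for i = 1, 2.
if e3 > 0:
    assert e1 > N1 and e2 > N2
```

The converse does not hold: both original targets can be met while E(U3) ≤ 0, so the min-construction is a sufficient (conservative) single-satisficing condition, which is all the reduction needs.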

Comment author: antigonus 21 October 2011 09:05:47PM (0 points)

Sure, but there are vastly more constraints involved in maximizing E(U3). It's easy to maximize E(U(A)) in such a way as to let E(U(B)) go down the tubes. But if C is positively correlated with either A or B, it's going to be harder to max-min A and B while letting E(U(C)) plummet. The more accounted-for sources of utility there are, the likelier a given unaccounted-for source of utility X is to be entangled with one of the former, so the harder it will be for the AI to max-min in such a way that neglects X. Perhaps it gets exponentially harder! And humans care about hundreds or thousands of things. It's not clear that a satisficer that's concerned with a significant fraction of those would be able to devise a strategy that fails utterly to bring the others up to snuff.

Comment author: Stuart_Armstrong 22 October 2011 08:04:10AM (0 points)

I'm simply pointing out that a multi-satisficer is the same as a single-satisficer. Combining all the utilities into one is a sensible thing to do; making a satisficer out of that combination isn't.