Vaniver comments on Satisficers want to become maximisers - Less Wrong

21 Post author: Stuart_Armstrong 21 October 2011 04:27PM


Comment author: Vaniver 23 October 2011 05:06:29PM 0 points

Um, the standard AI definition of a satisficer is:

"optimization where 'all' costs, including the cost of the optimization calculations themselves and the cost of getting information for use in those calculations, are considered."

That is, a satisficer explicitly will not become a maximizer, because it is consciously aware of the costs of being a maximizer rather than a satisficer.

A maximizer might have a utility function like "p", where p is the number of paperclips, while a satisficer would have a utility function like "p-c", where p is the number of paperclips and c is the cost of the optimization process. The maximizer is potentially unbounded; the satisficer stops when marginal reward equals marginal cost (which could also be unbounded, but is less likely to be so).
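The distinction above can be sketched in code. This is a minimal, hypothetical illustration (the agent names, diminishing-returns curve, and constant cost-per-step are my own assumptions, not anything from the thread): a maximizer's utility is p alone, while the satisficer's is p - c, so the satisficer stops as soon as the marginal gain of another optimization step no longer exceeds its marginal cost.

```python
def run_agent(marginal_gain, marginal_cost, steps=1000):
    """Greedy agent: takes another optimization step only while doing so
    increases its utility. Returns (paperclips, cost) at the stopping point."""
    p, c = 0.0, 0.0
    for t in range(steps):
        gain, cost = marginal_gain(t), marginal_cost(t)
        if gain - cost <= 0:  # satisficer's stopping rule: marginal reward <= marginal cost
            break
        p += gain
        c += cost
    return p, c

# Assumed setup: diminishing returns on paperclips, constant cost per step.
gains = lambda t: 10.0 / (t + 1)
costs = lambda t: 1.0

# Satisficer (utility p - c) halts once marginal gain no longer beats marginal cost.
p_sat, c_sat = run_agent(gains, costs)

# Maximizer (utility p) counts optimization as free: every positive gain is
# worth taking, so it only stops when the step budget runs out.
p_max, _ = run_agent(gains, lambda t: 0.0)

print(p_sat, c_sat)  # satisficer: fewer paperclips, but bounded optimization cost
print(p_max)         # maximizer: more paperclips, cost ignored entirely
```

With these particular curves the satisficer's stopping point is bounded (it halts after finitely many steps), while the maximizer runs as long as it is allowed to, matching the comment's point that the satisficer is less likely to be unbounded.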

Comment author: timtyler 24 October 2011 02:09:15PM * 0 points

"That is, a satisficer explicitly will not become a maximizer, because it is consciously aware of the costs of being a maximizer rather than a satisficer."

According to the page you cite, satisficers are a subset of maximisers. Satisficers are just maximisers whose utility functions factor in constraints.

Comment author: Vaniver 24 October 2011 09:30:16PM 0 points

Yes, for some definitions of maximizers. The article Stuart_Armstrong wrote seems to have two differing definitions: maximizers are agents that seek to get as much X as possible, and his satisficers want to get as much E(X) as possible. Then, trivially, those reduce to agents that want to get as much X as possible.

I don't see that as novel or relevant, since what I would call satisficers are agents that try to set marginal gain equal to marginal cost. Those generally do not reduce to agents that seek to get as much X as possible.