wallowinmaya comments on In Praise of Maximizing – With Some Caveats - LessWrong

Post author: wallowinmaya 15 March 2015 07:40PM


Comment author: wallowinmaya 17 March 2015 10:55:09AM 4 points

That's not true -- for example, in cases where the search costs for the full space are trivial, pure maximizing is very common.

Ok, sure. I probably should have written that pure maximizing or satisficing is hard to find in important, complex, and non-contrived instances. I had in mind domains such as career, ethics, romance, and so on. In those domains, I think it's hard to find a pure maximizer or satisficer.

My objection is stronger. The behavior of optimizing for (gain - cost) does NOT lie on the continuum between satisficing and maximizing as defined in your post, primarily because those strategies have no concept of the cost of search.

Sorry, I fear that I don't completely understand your point. Do you agree that there are individual differences in people, such that some people tend to search longer for a better solution and other people are more easily satisfied with their circumstances – be it their career, their love life or the world in general?

Maybe I should have offered an operationalized definition: maximizers are people who get high scores on this maximization scale (page 1182), and satisficers are people who get low scores.

Comment author: Lumifer 17 March 2015 03:16:03PM 0 points

Sorry, I fear that I don't completely understand your point. Do you agree that there are individual differences in people, such that some people tend to search longer for a better solution and other people are more easily satisfied with their circumstances

Yes, I agree that there are individual differences in people. But your post is, at its core, not about people; it's about decision strategies or algorithms. You defined them in a particular way. I am, essentially, saying that your definitions have some issues.

But note that if you "operationalize" your definitions, you switch what is being defined -- from algorithms to humans -- and those are very different things.
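The distinction Lumifer draws can be made concrete with a small sketch. This is a hypothetical illustration, not code from the post: option utilities are assumed uniform on (0, 1), and the names `satisfice`, `maximize`, and `optimize_net` are made up for the example. The point is that `optimize_net` takes a `search_cost` parameter that appears nowhere in the definition of either pure strategy.

```python
# Illustrative sketch (not from the post) of the three decision
# strategies under discussion. All names, numbers, and the uniform
# utility assumption are hypothetical.
import random

random.seed(0)
options = [random.random() for _ in range(1000)]  # utility of each option

def satisfice(opts, threshold):
    """Stop at the first option whose utility meets the aspiration level."""
    for u in opts:
        if u >= threshold:
            return u
    return max(opts)  # nothing cleared the bar; fall back to the best seen

def maximize(opts):
    """Search the entire space and return the best option."""
    return max(opts)

def optimize_net(opts, search_cost):
    """Keep sampling while the expected marginal gain of one more draw
    exceeds its cost. Unlike the two strategies above, the stopping
    rule depends on search_cost, which neither of them models."""
    best, cost = float("-inf"), 0.0
    for u in opts:
        cost += search_cost
        best = max(best, u)
        # expected improvement from one more uniform(0, 1) draw:
        # E[max(X, best)] - best = (1 - best)^2 / 2
        if (1 - best) ** 2 / 2 < search_cost:
            break
    return best, cost
```

Raising `search_cost` makes `optimize_net` stop earlier, but not because it has moved toward the satisficing end of a continuum; its behavior is governed by a variable the other two strategies simply lack.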