
Transfuturist comments on Satisficers' undefined behaviour - Less Wrong Discussion

Post author: Stuart_Armstrong 05 March 2015 05:03PM (3 points)


Comment author: Transfuturist 07 March 2015 11:46:20PM (1 point)

Maximizers don't take the proven optimal path; they act once the expected value of further analysis drops below the expected value of the best path found so far. In many situations there is no guarantee that an optimal path even exists, and spending resources and opportunities on proving that you will take the best path is not how you maximize at all. The situation keeps changing while you search for the optimal path through it.

Comment author: Vaniver 08 March 2015 12:25:15AM (0 points)

Maximizers don't take the proven optimal path; they act once the expected value of further analysis drops below the expected value of the best path found so far.

This is a conception of maximizers that I generally like, and it is accurate when the cost of analysis is part of the objective function, but it's important to note that this is not the most general class of maximizers; it's a subset of that class. Note also that any maximizer that comes up with a proof that it has found an optimal solution implicitly knows that the EV of continuing to analyze actions is lower than the EV of going ahead with that solution.
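To make that concrete, here is a toy Python sketch of a maximizer whose objective charges for analysis: it keeps evaluating candidate actions only while its (supplied) estimate of the gain from one more evaluation exceeds the cost of that evaluation. Every name and number below is an illustrative assumption, not anything from the thread; in particular, `improvement_estimate` stands in for whatever metareasoning the agent actually does.

```python
import random

def bounded_maximize(candidates, utility, analysis_cost, improvement_estimate):
    """Return the best action found once further analysis stops paying for itself.

    improvement_estimate(step) is the agent's guess at the utility gain from
    evaluating one more candidate at this point in the search. Supplying a
    good estimate is the hard part; this sketch just assumes one exists.
    """
    best_action, best_value = None, float("-inf")
    for step, action in enumerate(candidates):
        # Stop when the expected gain from more analysis drops below its cost.
        if best_action is not None and improvement_estimate(step) < analysis_cost:
            break
        value = utility(action)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value

# Example with made-up numbers: a crude diminishing-returns estimate means
# the agent examines roughly 200 of the 10,000 candidates before acting.
actions = [random.uniform(0, 100) for _ in range(10_000)]
result = bounded_maximize(
    candidates=actions,
    utility=lambda a: a,                           # utility of an "action" is just its value
    analysis_cost=0.5,                             # fixed cost per evaluation
    improvement_estimate=lambda t: 100 / (t + 1),  # assumed diminishing returns
)
print(result)
```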

I think what you have in mind is more typically referred to as an "optimizer," as in "metaheuristic optimization." Tabu search isn't guaranteed to find you a globally optimal solution, but it'll get you a better solution than the one you started with, faster than exhaustive approaches, and that's what people generally want. There's no use taking five years to produce the absolute best plan for assigning packages to trucks going out for delivery tomorrow morning.
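As a rough illustration of that package-to-trucks example, here is a minimal tabu search sketch in Python, assuming a simplified objective (minimize the load of the heaviest truck) and a simple move set (shift one non-tabu package off the heaviest truck onto the lightest). The move set, tabu rule, and all parameters are illustrative assumptions, not a production routing system.

```python
import random
from collections import deque

def tabu_search(weights, n_trucks, iterations=1000, tabu_len=20, seed=0):
    """Assign packages (by weight) to trucks, minimizing the heaviest load."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_trucks) for _ in weights]  # random starting assignment

    def loads(a):
        out = [0.0] * n_trucks
        for pkg, truck in enumerate(a):
            out[truck] += weights[pkg]
        return out

    best, best_cost = assign[:], max(loads(assign))
    tabu = deque(maxlen=tabu_len)  # recently moved packages, temporarily frozen

    for _ in range(iterations):
        cur_loads = loads(assign)
        heaviest = max(range(n_trucks), key=lambda t: cur_loads[t])
        # Neighborhood: move one non-tabu package off the heaviest truck.
        candidates = [p for p, t in enumerate(assign)
                      if t == heaviest and p not in tabu]
        if not candidates:
            continue
        pkg = rng.choice(candidates)
        target = min(range(n_trucks), key=lambda t: cur_loads[t])
        assign[pkg] = target
        tabu.append(pkg)               # forbid touching this package for a while
        cost = max(loads(assign))
        if cost < best_cost:           # keep the best solution seen so far
            best, best_cost = assign[:], cost
    return best, best_cost

rng = random.Random(1)
weights = [rng.uniform(1, 10) for _ in range(60)]
print(tabu_search(weights, n_trucks=5)[1])
```

The tabu list is what keeps this from cycling back and forth between the same few assignments; everything else is plain local search.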

But the distinction that Stuart_Armstrong cares about still holds: maximizers (as I defined them, without taking analysis costs into consideration) seem easy to analyze, while optimizers seem hard to analyze. I can work out the properties that an absolute best solution must have, and there's a fairly small set of those, but I might have a much harder time working out the properties of a solution returned by running tabu search overnight. That might just be a matter of perspective, though: I can actually run tabu search overnight a bunch of times, but I might not be able to actually enumerate the set of absolute best solutions.
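The "run it a bunch of times" analysis can itself be sketched: reusing the hypothetical `tabu_search` and `weights` from the sketch above, rerun the search under different seeds and summarize the spread of outcomes. The distribution is easy to sample but hard to characterize analytically, which is exactly the asymmetry at issue.

```python
import statistics

# Empirical analysis of the optimizer: sample its outcome distribution
# across 30 seeds rather than proving anything about the true optimum.
costs = [tabu_search(weights, n_trucks=5, seed=s)[1] for s in range(30)]
print(f"best={min(costs):.2f} mean={statistics.mean(costs):.2f} "
      f"stdev={statistics.stdev(costs):.2f}")
```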

Comment author: Transfuturist 08 March 2015 01:43:49AM (0 points)

My intuition is telling me that resource costs are relevant to an agent whether or not they have a term in the objective function. Omohundro's instrumental goal of efficiency...?

Comment author: Vaniver 08 March 2015 02:16:20AM (0 points)

My intuition is telling me that resource costs are relevant to an agent whether or not they have a term in the objective function. Omohundro's instrumental goal of efficiency...?

Ah; I'm not requiring a maximizer to be a general intelligence, and my intuitions are honed on things like CPLEX.