grobstein comments on Rationality is Systematized Winning - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm not sure how you can implement an admonition to Win and not just to (truly, sincerely) try. What is the empirical difference?
I suppose you could use an expected regret measure (that is, the difference between the ideal result and the result of the decision summed across the distribution of probable futures) instead of an expected utility measure.
Expected regret tends to produce more robust strategies than expected utility. For instance, in Newcomb's problem, we could say that two-boxing follows from expected-utility maximization while one-boxing follows from regret minimization: a "failed" two-box gives $1,000,000 - $1,000 = $999,000 of regret (if you believe Omega would have predicted differently had you been the type of person to one-box), whereas a "failed" one-box gives $1,000 - $0 = $1,000 of regret.
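The regret arithmetic above can be made concrete. Here is a minimal sketch using the standard Newcomb payoffs (Box A holds $1,000; Box B holds $1,000,000 iff Omega predicted one-boxing) and the comment's explicit assumption that the counterfactual "had you chosen otherwise" also changes Omega's prediction:

```python
# Standard Newcomb payoffs (assumed, as in the usual statement of the problem)
A, B = 1_000, 1_000_000

def payoff(action, prediction):
    """Payoff given your action and Omega's prediction."""
    b = B if prediction == "one-box" else 0      # Box B contents
    return b if action == "one-box" else b + A   # two-boxers also take Box A

# "Failed" two-box: Omega correctly predicted two-boxing, so Box B is empty.
# Had you been a one-boxer, Omega would have predicted that instead.
regret_two_box = payoff("one-box", "one-box") - payoff("two-box", "two-box")

# "Failed" one-box: Omega wrongly predicted two-boxing, so Box B is empty;
# two-boxing under that same (mistaken) prediction would have paid $1,000.
regret_one_box = payoff("two-box", "two-box") - payoff("one-box", "two-box")

print(regret_two_box)  # 999000
print(regret_one_box)  # 1000
```

Since the worst-case regret of one-boxing ($1,000) is far below that of two-boxing ($999,000), a regret-minimizing agent one-boxes, matching the argument above.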
Using more robust strategies may be a way to more consistently Win, though perhaps the true goal should be to know when to use expected utility and when to use expected regret (and therefore to take advantage both of potential bonanzas and of risk-limiting mechanisms).
Here's a functional difference: Omega says that Box B is empty if you try to win what's inside it.
Yes! This functional difference is very important!
In Logic, you begin with a set of non-contradicting assumptions and then build a consistent theory based on those assumptions. The deductions you make are analogous to being rational. If the assumptions are non-contradicting, then it is impossible to deduce something false in the system. (Analogously, it is impossible for rationality not to win.) However, you can get a paradox by having a self-referential statement. You can prove that every sufficiently complex consistent theory is incomplete -- there are statements that are true but that you can't prove from within the system. Along the same lines, you can build a paradox by forcing the system to try to talk about itself.
What Grobstein has presented is a classic paradox and is the closest you can come to rationality not winning.
I understand all that, but I still think it's impossible to operationalize an admonition to Win. If Omega says that Box B is empty if you try to win what's inside it, then you simply cannot implement a strategy that will give you the proceeds of Box B (unless you're using some definition of "try" that is inconsistent with "choose a strategy that has a particular expected result").
I think that falls under the "ritual of cognition" exception that Eliezer discussed for a while: when Winning depends directly on the ritual of cognition, then of course we can define a situation in which rationality doesn't Win. But that is perfectly meaningless in every other situation (which is to say, in the world), where the result of the ritual is what matters.