komponisto comments on Desirable Dispositions and Rational Actions - Less Wrong

Post author: RichardChappell 17 August 2010 03:20AM


Comment author: Perplexed 17 August 2010 05:23:10AM 9 points

Thanks for posting. Your analysis is an improvement over the LW conventional wisdom, but you still don't get it right, where "right," to me, means the way it is analyzed by the guys who won all those Nobel prizes in economics. You write:

First, let's note that there definitely are possible cases where it would be "beneficial to be irrational".

But in every example you supply, what you really want is not exactly to be irrational; rather, it is to be believed irrational by the other player in the game. You don't notice this because in each of your artificial examples the other player is effectively omniscient, so the only way to be believed irrational is to actually be irrational. But then, once the other player really believes it, his strategies and actions are modified in such a way that your expected behavior (which would have been irrational if the other player had not come to believe you irrational) is no longer irrational at all!

But, better yet, let's Taboo the word "irrational." What you really want him to believe is that you will play some particular strategy. If he does, in fact, believe it, then he will choose a particular strategy in response, and your own best response is to play exactly the strategy he believes you are going to play. To use the technical jargon, the two of you are in a Nash equilibrium.
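To make that concrete, here is a minimal sketch in Python. The game (Chicken) and its payoff numbers are a hypothetical illustration of my own, not anything from the post; it just brute-force checks which strategy profiles survive unilateral deviation:

    # Toy payoff table for Chicken (numbers are hypothetical):
    # (my_move, his_move) -> (my_payoff, his_payoff)
    payoffs = {
        ("Swerve", "Swerve"): (0, 0),
        ("Swerve", "Dare"):   (-1, 1),
        ("Dare",   "Swerve"): (1, -1),
        ("Dare",   "Dare"):   (-10, -10),
    }
    moves = ("Swerve", "Dare")

    def is_nash(mine, his):
        """True if neither player gains by unilaterally deviating."""
        my_pay, his_pay = payoffs[(mine, his)]
        i_stay   = all(payoffs[(m, his)][0] <= my_pay  for m in moves)
        he_stays = all(payoffs[(mine, h)][1] <= his_pay for h in moves)
        return i_stay and he_stays

    for mine in moves:
        for his in moves:
            if is_nash(mine, his):
                print(mine, his)
    # Prints (Swerve, Dare) and (Dare, Swerve). Once he believes I am
    # committed to Dare and swerves in response, Daring really is my
    # best response -- the "crazy" commitment is no longer irrational.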

So, the standard Game Theory account is based on the beliefs each player has about the other player's preferences and strategies. And, because it deals with (Bayesian) belief, it is an incredibly flexible explanatory framework. Pick up a standard textbook or reference and marvel at the variety of applications that are covered rigorously, quantitatively, and convincingly.

I suspect that the LW interest in scenarios involving omniscient agents arises from considerations of one AI program being able to read another program's source code. However, I don't understand why there is an assumption of determinism. For example, in a Newcomb-type problem, suppose I decide to resolve the question of one box or two by flipping a coin? Unless I am supposed to believe that Omega can foretell the results of future coin flips, I think the scenario collapses. Has anyone written anything on LW about responding to Omega by randomizing? [Edited several times for minor cleanups]

Comment author: komponisto 17 August 2010 08:53:04AM 5 points

For example, in a Newcomb-type problem, suppose I decide to resolve the question of one box or two by flipping a coin? Unless I am supposed to believe that Omega can foretell the results of future coin flips, I think the scenario collapses. Has anyone written anything on LW about responding to Omega by randomizing?

It's not from LW, but here's Scott Aaronson:

(Incidentally, don’t imagine you can wiggle out of this by basing your decision on a coin flip! For suppose the Predictor predicts you’ll open only the first box with probability p. Then he’ll put the $1,000,000 in that box with the same probability p. So your expected payoff is 1,000,000p^2 + 1,001,000p(1-p) + 1,000(1-p)^2 = 1,000,000p + 1,000(1-p), and you’re stuck with the same paradox as before.)
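Spelling the algebra out in a quick Python check (a sketch; p is the one-boxing probability from Aaronson's setup, and the dollar amounts are the standard Newcomb payoffs he uses):

    # p = probability you one-box, which the Predictor matches when
    # deciding (independently) whether to fill the first box.
    def expected_payoff(p):
        return (1_000_000 * p * p               # one-box, box full
              + 1_001_000 * (1 - p) * p         # two-box, box full
              + 0         * p * (1 - p)         # one-box, box empty
              + 1_000     * (1 - p) * (1 - p))  # two-box, box empty

    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        simplified = 1_000_000 * p + 1_000 * (1 - p)
        assert abs(expected_payoff(p) - simplified) < 1e-6
        print(p, expected_payoff(p))
    # The p^2 terms cancel, leaving a payoff linear in p: 1,000 at p = 0
    # up to 1,000,000 at p = 1. A coin flip (p = 0.5) only interpolates
    # between the pure strategies; it never escapes the original dilemma.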