komponisto comments on Desirable Dispositions and Rational Actions - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks for posting. Your analysis is an improvement over the LW conventional wisdom, but you still don't get it right, where "right," to me, means the way it is analyzed by the guys who won all those Nobel prizes in economics. You write:
But in every example you supply, what you really want is not exactly to be irrational; rather, it is to be believed irrational by the other player in the game. You don't notice this because in each of your artificial examples the other player is effectively omniscient, so the only way to be believed irrational is to actually be irrational. But once the other player really believes this, his strategies and actions are modified in such a way that your expected behavior (which would have been irrational had the other player not come to believe you irrational) is now no longer irrational!
But, better yet, let's Taboo the word "irrational." What you really want him to believe is that you will play some particular strategy. If he does, in fact, believe this, then he will choose a particular strategy, and your own best response is to use the strategy he believes you are going to use. To use the technical jargon, you two are in a Nash equilibrium.
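To make "best response" and "Nash equilibrium" concrete, here is a minimal sketch that checks a pure-strategy Nash equilibrium in a 2x2 game. The game and its payoffs are my own illustrative invention, not anything from the post:

```python
# Payoff matrices, indexed as payoffs[player][row_strategy][col_strategy].
# Illustrative coordination game: both players do best by matching.
payoffs = {
    "row": [[2, 0],
            [0, 1]],
    "col": [[2, 0],
            [0, 1]],
}

def is_nash(row_s, col_s):
    """A strategy pair is a Nash equilibrium if neither player can
    gain by unilaterally deviating to another strategy."""
    row_best = all(payoffs["row"][row_s][col_s] >= payoffs["row"][alt][col_s]
                   for alt in (0, 1))
    col_best = all(payoffs["col"][row_s][col_s] >= payoffs["col"][row_s][alt]
                   for alt in (0, 1))
    return row_best and col_best

print(is_nash(0, 0))  # True: each is playing the strategy the other expects
print(is_nash(0, 1))  # False: the row player would rather deviate
```

The point of the check mirrors the argument above: once each player's strategy is a best response to what the other is believed to be playing, neither has any incentive to act differently, "irrationally" or otherwise.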
So, the standard Game Theory account is based on the beliefs each player has about the other player's preferences and strategies. And, because it deals with (Bayesian) belief, it is an incredibly flexible explanatory framework. Pick up a standard textbook or reference and marvel at the variety of applications that are covered rigorously, quantitatively, and convincingly.
I suspect that the LW interest in scenarios involving omniscient agents arises from considerations of one AI program being able to read another program's source code. However, I don't understand why there is an assumption of determinism. For example, in a Newcomb-type problem, suppose I decide to resolve the question of one box or two by flipping a coin? Unless I am supposed to believe that Omega can foretell the results of future coin flips, I think the scenario collapses. Has anyone written anything on LW about responding to Omega by randomizing? [Edited several times for minor cleanups]
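To see why the coin flip matters, here is an illustrative expected-value calculation for the randomized response to Newcomb's problem. The payoffs and Omega's behavior are assumptions on my part: the transparent box holds $1,000, the opaque box holds $1,000,000 if Omega predicts one-boxing, and (one common stipulation) Omega leaves the opaque box empty whenever it predicts randomization:

```python
# Assumed payoffs (standard in most presentations of Newcomb's problem).
SMALL, BIG = 1_000, 1_000_000

def expected_value(p_one_box, opaque_filled):
    """Expected winnings for a player who one-boxes with probability
    p_one_box, given whether Omega filled the opaque box."""
    opaque = BIG if opaque_filled else 0
    # One-boxing takes only the opaque box; two-boxing takes both.
    return p_one_box * opaque + (1 - p_one_box) * (opaque + SMALL)

print(expected_value(1.0, True))    # predicted one-boxer: 1000000.0
print(expected_value(0.0, False))   # predicted two-boxer: 1000.0
print(expected_value(0.5, False))   # coin-flipper, under the stipulation: 500.0
```

Under that stipulation the coin-flipper does worse than either pure strategy, which is presumably why some formulations add it; without it, as the comment notes, Omega would have to foretell the coin flip itself for the scenario to hold together.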
It's not from LW, but here's Scott Aaronson: