orthonormal comments on Decision Theories: A Less Wrong Primer - Less Wrong

Post author: orthonormal | 13 March 2012 11:31PM


Comment author: ksvanhorn 13 March 2012 06:54:34PM 0 points [-]

I don't understand the need for this "advanced" decision theory. The situations you mention -- Omega and the boxes, PD with a mental clone -- are highly artificial; no human being has ever encountered such a situation. So what relevance do these "advanced" decision theories have to decisions of real people in the real world?

Comment author: orthonormal 13 March 2012 09:54:15PM 2 points [-]

They're no more artificial than the rest of Game Theory: no human being has ever known their exact payoffs for consequences in terms of utility, either. Like I said, there may be a good deal of advanced-decision-theory-structure in the way people subconsciously decide to trust one another given partial information, and that's something that CDT analysis would treat as irrational even when beneficial.

One bit of relevance is that "rational" has been wrongly conflated with strategies akin to defecting in the Prisoner's Dilemma, or being unable to genuinely promise anything when the stakes are high enough, and advanced decision theories are the key to seeing that the rational ideal doesn't fail like that.

Comment author: ksvanhorn 14 March 2012 02:09:25AM 0 points [-]

They're no more artificial than the rest of Game Theory:

That's an invalid analogy. We use mathematical models that we know are idealized approximations to reality all the time, but they are intended to be approximations of actually encountered circumstances. The examples given in the article bear no relevance to any circumstance any human being has ever encountered.

there may be a good deal of advanced-decision-theory-structure in the way people subconsciously decide to trust one another given partial information, and that's something that CDT analysis would treat as irrational even when beneficial.

That doesn't follow from anything said in the article. Care to explain further?

One bit of relevance is that "rational" has been wrongly conflated with strategies akin to defecting in the Prisoner's Dilemma,

Defecting is the right thing to do in the Prisoner's Dilemma itself; it is only when you modify the conditions in some way (implicitly changing the payoffs, or having the other player's decision depend on yours) that the best decision changes. In your example of the mental clone, a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
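The expected-utility calculation being described can be sketched concretely. This is a minimal illustration under the comment's assumption that the clone mirrors your move; the payoff numbers are the standard illustrative PD values, not anything from this thread:

```python
# Standard illustrative Prisoner's Dilemma payoffs (not from the thread):
# (my move, their move) -> my utility
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def expected_utility(my_move, p_mirror=1.0):
    """Expected payoff when the clone copies my move with probability p_mirror."""
    other = "D" if my_move == "C" else "C"
    return (p_mirror * PAYOFF[(my_move, my_move)]
            + (1 - p_mirror) * PAYOFF[(my_move, other)])

# With p_mirror = 1 (a true clone), cooperating gets 3 and defecting gets 1,
# so plain expected-utility maximization already says to cooperate.
best = max(("C", "D"), key=expected_utility)
print(best)  # -> C
```

With `p_mirror = 0` the usual dominance argument reappears (defecting against a guaranteed cooperator pays 5), which is exactly the sense in which the answer depends on how the other player's move is tied to yours.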

Comment author: wedrifid 14 March 2012 03:36:58AM 1 point [-]

a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.

A simple expected-utility maximization does. A CDT decision doesn't. Formally specifying a maximization algorithm that behaves like CDT is, from what I understand, less simple than making one that follows UDT.

Comment author: ksvanhorn 14 March 2012 05:03:23AM 0 points [-]

If all we need to do is maximize expected utility, then where is the need for an "advanced" decision theory?

From Wikipedia: "Causal decision theory is a school of thought within decision theory which maintains that the expected utility of actions should be evaluated with respect to their potential causal consequences."

It seems to me that the source of the problem is in that phrase "causal consequences", and the confusion surrounding the whole notion of causality. The two problems mentioned in the article are hard to fit within standard notions of causality.

It's worth mentioning that you can turn Pearl's causal nets into plain old Bayesian networks by explicitly modeling the notion of an intervention. (Pearl himself mentions this in his book.) You just have to add some additional variables and their effects; this allows you to incorporate the information contained in your causal intuitions. This suggests to me that causality really isn't a fundamental concept, and that causality conundrums result from failing to include all the relevant information in your model.
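A toy version of the intervention-as-a-variable trick, with a confounder so that observing and intervening actually come apart. All the probabilities here are invented for illustration: U influences both X and Y, and the "intervention" setting overrides X's usual mechanism, which is what conditioning on an explicit intervention variable does in a plain Bayesian network:

```python
# Invented numbers for illustration. U is a confounder of X and Y.
P_U = 0.5                                  # P(U = 1)
P_X_GIVEN_U = {0: 0.2, 1: 0.9}             # P(X = 1 | U), X's usual mechanism
P_Y_GIVEN_XU = {(0, 0): 0.1, (0, 1): 0.5,  # P(Y = 1 | X, U)
                (1, 0): 0.4, (1, 1): 0.8}

def p_y1_given_x1(intervene=False):
    """P(Y=1 | X=1) by observation, or P(Y=1 | do(X=1)) when the
    intervention variable forces X = 1 regardless of U."""
    num = den = 0.0
    for u in (0, 1):
        pu = P_U if u else 1 - P_U
        px1 = 1.0 if intervene else P_X_GIVEN_U[u]  # intervention cuts U -> X
        num += pu * px1 * P_Y_GIVEN_XU[(1, u)]
        den += pu * px1
    return num / den

print(p_y1_given_x1(False))  # observational: inflated, X=1 is evidence for U=1
print(p_y1_given_x1(True))   # interventional: U's influence on X severed
```

The two answers differ (roughly 0.73 vs 0.6 with these numbers) precisely because observing X = 1 carries evidence about U while forcing X = 1 does not; once the intervention is an explicit variable, ordinary conditioning recovers the do-calculus answer.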

[The term "model" here just refers to the joint probability distribution you use to represent your state of information.]

Where I'm going with all of this is that if you model your information correctly, the difference between Causal Decision Theory and Evidential Decision Theory dissolves, and Newcomb's Paradox and the Cloned Prisoner's Dilemma are easily resolved.

I think I'm going to have to write this up as an article of my own to really explain myself...

Comment author: Will_Newsome 14 March 2012 05:14:29AM 2 points [-]

See my comment here, though if this problem keeps coming up then I guess a post should be written by someone.