I am not aware of any systematic attempt to study these things. My own opinion is formed from a somewhat casual reading of Matt Ridley's The Rational Optimist, Jared Diamond's Guns, Germs, and Steel and Collapse, and probably a few other books that don't leap to mind. These books have plenty of citations of studies if you are interested.
I think you would be hard pressed to find any existing "significant" country that does not engender a strong belief in patriotism among its populace, and does not especially lionize those who have given their lives in wars on behalf of the country. If you can think of any significant counterexamples among the 50 richest or 50 most populous countries, please let me know. I am essentially hypothesizing that the scarcity of genteel, foreigner-loving, pacifist countries among the richest and most populous is not a mere coincidence.
I think you would be hard pressed to find any existing "significant" country that does not engender a strong belief in patriotism among its populace, and does not especially lionize those who have given their lives in wars on behalf of the country.
You're begging the question here, by slipping in the assumption that these wars are "on behalf of the country," rather than on behalf of the executive (e.g. president), on behalf of some vested interest, or just colossal f*-ups. To repeat what the author said,
"If a death is just a tragedy... [y]ou have to acknowledge that yes, really, ... thousands of people -- even the Good Guy's soldiers! -- might be dying for no good reason at all."
Learning about useful models helps people escape anthropomorphizing human society, the economy, or government. The latter is particularly salient: I think most people slip up occasionally in assuming that, say, the United States government can be successfully modelled as a single agent in order to explain most of its "actions".
As an interesting (to me, at least) aside, Gene Sharp's research on nonviolent resistance indicates that successful nonviolent resistance invariably involves taking to heart this little idea -- that governments are not single agents but systems of many agents pursuing their own ends -- and exploiting it to the max.
Actually, mere possibilities can make a difference... if you have effects that propagate backwards in time.
It still has to happen. It might happen in the future instead of the past, but it still has to happen.
No, it doesn't have to happen. Consider the Elitzur-Vaidman bomb tester. The outcome depends on whether or not the bomb could have exploded, regardless of whether or not it actually does. You might object that in the Many Worlds Interpretation of quantum mechanics both happen, but the situation can equally well be described using Cramer's Transactional Interpretation of quantum mechanics, which involves waves that propagate backwards in time, and in which only one of the two possibilities (explode or don't explode) occurs. Whether MWI or TI or some other interpretation is the correct one, this demonstrates that backward-in-time signalling allows a "mere possibility", that does not actually occur, to have measurable effects.
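For concreteness, here is the arithmetic of the bomb tester as a minimal numpy sketch. This is my own toy model of the standard Mach-Zehnder setup (the mode labels and detector names are my choices, not from any particular source), but the 50%/25%/25% numbers it produces are the well-known ones:

```python
import numpy as np

# Modes: index 0 = arm leading to detector C (the "dark" port),
#        index 1 = arm leading to detector D.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)   # standard 50/50 beamsplitter

photon = np.array([1, 0], dtype=complex)  # photon enters in mode 0

# No bomb (or a dud): both arms open, the two beamsplitters interfere.
out = BS @ BS @ photon
print(np.abs(out) ** 2)           # [0. 1.] -- detector C never fires

# Live bomb blocking arm 1: the bomb acts as a which-path measurement.
mid = BS @ photon
p_explode = np.abs(mid[1]) ** 2   # 0.5 -- photon took the bomb's arm
survived = np.array([mid[0], 0])  # left unnormalized, so the outputs below
out = BS @ survived               # already include the 1/2 survival factor
p_C, p_D = np.abs(out) ** 2       # 0.25 and 0.25
print(p_explode, p_C, p_D)
# A click at detector C certifies a live bomb that never exploded: the mere
# possibility of an explosion destroyed the interference.
```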
Jaynes died in 1998. How did he never hear of MWI? I'd heard of MWI in 1998, and I was just a kid.
EDIT: Google Books turns up the following snippet in The Many Worlds of Hugh Everett III:
All possibilities 'actually realized,' with corresp. observer states.15 In May 1957, Everett wrote a critical letter to ET Jaynes, a physicist at Stanford University who was pioneering the use of von Neumann-Shannontype information...
I have been unable to find any copies online, Amazon wants $30 for any copy, and neither my local library nor university nor county catalog holds it. Man!
EDITEDIT: Google Books gives more access to The Everett Interpretation of Quantum Mechanics: Collected Works 1955-1980, which gives an entire chapter 18 to 'Correspondence: Everett and Jaynes (1957)': http://books.google.com/books?id=dowpli7i6TgC&pg=PA261&dq=jaynes+everett&hl=en&sa=X&ei=N9CdT9PSIcLOgAf-3vTxDg&ved=0CDYQ6AEwAQ#v=onepage&q&f=false
I'm not sure what they are discussing; many-worlds doesn't seem to come up (at least under that name) and most of the chapter is inaccessible, but it's clear from the chapter summary that Jaynes doesn't think much of Everett's position.
Heck, I'd heard of MWI in the mid-70's, and I was just a kid.
a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
A simple expected-utility maximization does. A CDT decision doesn't. Formally specifying a maximization algorithm that behaves like CDT is, from what I understand, less simple than making it follow UDT.
If all we need to do is maximize expected utility, then where is the need for an "advanced" decision theory?
From Wikipedia: "Causal decision theory is a school of thought within decision theory which maintains that the expected utility of actions should be evaluated with respect to their potential causal consequences."
It seems to me that the source of the problem is in that phrase "causal consequences", and the confusion surrounding the whole notion of causality. The two problems mentioned in the article are hard to fit within standard notions of causality.
It's worth mentioning that you can turn Pearl's causal nets into plain old Bayesian networks by explicitly modeling the notion of an intervention. (Pearl himself mentions this in his book.) You just have to add some additional variables and their effects; this allows you to incorporate the information contained in your causal intuitions. This suggests to me that causality really isn't a fundamental concept, and that causality conundrums result from failing to include all the relevant information in your model.
[The term "model" here just refers to the joint probability distribution you use to represent your state of information.]
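Here's a toy sketch of that move, with made-up numbers throughout (this is my illustration of the idea, not Pearl's notation): add an explicit "intervention" variable to an ordinary rain-and-wet-grass network, and plain conditioning then distinguishes observing from intervening:

```python
# Made-up numbers; the point is only the structure.
P_RAIN = 0.3

def p_wet(rain, forced):
    if forced:           # intervention active: grass made wet regardless of rain
        return 1.0
    return 0.9 if rain else 0.1

def p_rain_given_wet(forced):
    """P(rain | wet, intervention state), by brute-force enumeration."""
    joint = {r: (P_RAIN if r else 1 - P_RAIN) * p_wet(r, forced) for r in (0, 1)}
    return joint[1] / (joint[0] + joint[1])

print(p_rain_given_wet(forced=False))  # ~0.794: seeing wet grass is evidence of rain
print(p_rain_given_wet(forced=True))   # 0.3: making it wet tells you nothing about rain
```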
Where I'm going with all of this is that if you model your information correctly, the difference between Causal Decision Theory and Evidential Decision Theory dissolves, and Newcomb's Paradox and the Cloned Prisoner's Dilemma are easily resolved.
I think I'm going to have to write this up as an article of my own to really explain myself...
OK, ignore those examples for a second, and ignore the word "advanced."
The OP is drawing a distinction between CDT, which he claims fails in situations where competing agents can predict one another's behavior to varying degrees, and other decision theories, which don't fail. If he's wrong in that claim, then articulating why would be helpful.
If, instead, he's right in that claim, then I don't see what's useless about theories that don't fail in that situation. At least, it certainly seems to me that competing agents predicting one another's behavior is something that happens all the time in the real world. Does it not seem that way to you?
But the basic assumption of standard game theory, which I presume he means to include in CDT, is that the agents can predict each other's behavior -- it is assumed that each will make the best move they possibly can.
I don't think that predicting behavior is the fundamental distinction here. Game theory is all about dealing with intelligent actors who are trying to anticipate your own choices. That's why the Nash equilibrium is generally a probabilistic strategy -- to make your move unpredictable.
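As a quick illustration of that last point, here is matching pennies with the usual +1/-1 payoffs (numbers mine, chosen for the example):

```python
# Matching pennies: if the coins match I win $1 from you, otherwise I lose $1.
def my_payoff(p_me_heads, p_you_heads):
    p_match = p_me_heads * p_you_heads + (1 - p_me_heads) * (1 - p_you_heads)
    return p_match * 1 + (1 - p_match) * (-1)

# Any predictable bias is exploitable: if I play heads 70% of the time,
# you respond with pure tails and I lose on average.
print(my_payoff(0.7, 0.0))                       # -0.4
# At the 50/50 mixture, nothing you do gains you anything:
print(my_payoff(0.5, 0.0), my_payoff(0.5, 1.0))  # 0.0 0.0
```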
Of course, in these two problems we know which causal links to draw. They were written to be simple enough. The trick is to have a general theory that draws the right links here without drawing wrong links in other problems, and which is formalizable so that it can answer problems more complicated than common sense can handle.
Among human beings, the relevant distinction is between decisions made before or after the other agent becomes aware of your decision, and you can certainly come up with examples where mutual ignorance happens.
Finally, situations with iterated moves can be decided differently by different decision theories as well: consider Newcomb's Problem where the big box is also transparent! A CDT agent will always find the big box empty, and two-box; a UDT/ADT agent will always find the big box full, and one-box. (TDT might two-box in that case, actually.)
Of course, in these two problems we know which causal links to draw. [...] The trick is to have a general theory that draws the right links here without drawing wrong links in other problems,
If you don't know that Omega's decision depends on yours, or that the other player in a Prisoner's Dilemma is your mental clone, then no theory can help you make the right choice; you lack the crucial piece of information. If you do know this information, then simply cranking through standard maximization of expected utility gives you the right answer.
Among human beings, the relevant distinction is between decisions made before or after the other agent becomes aware of your decision
No, the relevant distinction is whether or not your decision is relevant to predicting (postdicting?) the other agent's decision. The cheat in Newcomb's Problem and the PD-with-a-clone problem is this:
1. You create an unusual situation where X's decision is clearly relevant to predicting Y's decision, even though X's decision does not precede Y's;
2. then you insist that, due to the lack of temporal precedence, X must pretend that there is no connection, even though he knows better.
Let's take a look at what happens in Newcomb's problem if we just grind through the math. We have
P(box 2 has $1 million | you choose to take both boxes) = 0
P(box 2 has $1 million | you choose to take only the second box) = 1
E[money gained | you choose to take both boxes] = $1000 + 0 * $1e6 = $1000
E[money gained | you choose to take only the second box] = 1 * $1e6 = $1000000
So where's the problem?
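In code form, the same grind looks like this. Pulling the predictor's reliability out as a parameter is my own addition; the article's Omega corresponds to p_correct = 1.0:

```python
# A sketch of the expected-value calculation above; p_correct = 1.0 is the
# perfectly reliable Omega, and any p_correct above ~0.5005 gives the same ranking.
def expected_gain(choice, p_correct=1.0):
    # P(box 2 holds the $1,000,000), conditional on your choice.
    p_million = p_correct if choice == "one-box" else 1 - p_correct
    base = 1000 if choice == "two-box" else 0  # box 1's $1000 only if you take both
    return base + p_million * 1_000_000

for choice in ("two-box", "one-box"):
    print(choice, expected_gain(choice))
# two-box 1000.0
# one-box 1000000.0
```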
They're no more artificial than the rest of Game Theory; no human being has ever known their exact payoffs for consequences in terms of utility, either. Like I said, there may be a good deal of advanced-decision-theory-structure in the way people subconsciously decide to trust one another given partial information, and that's something that CDT analysis would treat as irrational even when beneficial.
One bit of relevance is that "rational" has been wrongly conflated with strategies akin to defecting in the Prisoner's Dilemma, or being unable to genuinely promise anything with high enough stakes, and advanced decision theories are the key to seeing that the rational ideal doesn't fail like that.
They're no more artificial than the rest of Game Theory
That's an invalid analogy. We use mathematical models that we know are idealized approximations of reality all the time... but they are intended to be approximations of actually encountered circumstances. The examples given in the article have no relevance to any circumstance any human being has ever encountered.
there may be a good deal of advanced-decision-theory-structure in the way people subconsciously decide to trust one another given partial information, and that's something that CDT analysis would treat as irrational even when beneficial.
That doesn't follow from anything said in the article. Care to explain further?
One bit of relevance is that "rational" has been wrongly conflated with strategies akin to defecting in the Prisoner's Dilemma,
Defecting is the right thing to do in the Prisoner's Dilemma itself; it is only when you modify the conditions in some way (implicitly changing the payoffs, or having the other player's decision depend on yours) that the best decision changes. In your example of the mental clone, a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
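To spell that out with the usual illustrative payoffs (the numbers are mine; higher is better):

```python
# My payoff as a function of (my move, their move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

# Knowing the other player is my mental clone, P(their move = my move) = 1,
# so the expectation collapses onto the diagonal:
for me in ("C", "D"):
    print(me, PAYOFF[(me, me)])        # C -> 3, D -> 1: cooperating wins

# Treating their move as causally independent of mine (the CDT picture),
# defection dominates for every fixed probability p of them cooperating:
p = 0.5  # any p in [0, 1] gives the same ranking
for me in ("C", "D"):
    eu = p * PAYOFF[(me, "C")] + (1 - p) * PAYOFF[(me, "D")]
    print(me, eu)                      # C -> 1.5, D -> 3.0: defecting wins
```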
Your understanding of mathematical expectation seems accurate, though the wording could be simplified a bit. I don't think that you need the "many worlds" style exposition to explain it.
One common way of thinking of expected values is as a long-run average. So if I keep playing a game with an expected loss of $10, that means that in the long run it becomes more and more probable that I'll lose an average of about $10 per game.
You could write a whole book about what's wrong with this "long-run average" idea, but E. T. Jaynes already did: Probability Theory: The Logic of Science. The most obvious problem is that it means you can't talk about the expected value of a one-off event. I.e., if Dick is pondering the expected value of (time until he completes his doctorate) given his specific abilities and circumstances... well, he's not allowed to if he's a frequentist who treats probabilities and expected values as long-run averages; there is no ensemble here to take the average of.
Expected values are weighted averages, so I would recommend explaining expected values in two parts:
1. Explain the idea of probabilities as degrees of confidence in an outcome (the Bayesian view);
2. Explain the idea of a weighted average, and note that the expected value is a weighted average with outcome probabilities as the weights.
You could explain the idea of a weighted average using the standard analogy of balancing a rod with weights of varying masses attached at various points, and note that larger masses "pull the balance point" towards themselves more strongly than do smaller masses.
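A small sketch of that picture, with made-up outcomes and weights:

```python
# Outcomes are positions along the rod; probabilities are the masses.
outcomes = [-10.0, 0.0, 5.0, 20.0]
probs    = [0.1, 0.4, 0.3, 0.2]      # degrees of confidence, summing to 1

balance_point = sum(p * x for p, x in zip(probs, outcomes))
print(balance_point)   # 4.5 -- the expected value is where the rod balances

# For repeatable games this agrees with the long-run-average reading:
import random
samples = random.choices(outcomes, weights=probs, k=100_000)
print(sum(samples) / len(samples))   # ~4.5
```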