Comment author: mwengler 31 May 2012 06:52:21AM 0 points [-]

I am not aware of any systematic attempt to study these things. My own opinion is formed from a somewhat casual reading of Matt Ridley's Rational Optimist, Jared Diamond's Guns Germs and Steel and Collapse, and probably a few other books that don't leap to mind. These books have plenty of citation of studies if you are interested.

I think you would be hard pressed to find any existing "significant" country that does not engender a strong belief in patriotism among its populace, which does not lionize especially those who have given their lives in wars on behalf of the country. If you can think of any significant counter examples among the 50 richest or 50 most populous countries, please let me know. I am essentially hypothesizing that the scarcity of genteel foreigner-loving pacifist countries among the richest and most populous is not a mere coincidence.

Comment author: ksvanhorn 10 June 2012 04:27:37AM 1 point [-]

I think you would be hard pressed to find any existing "significant" country that does not engender a strong belief in patriotism among its populace, which does not lionize especially those who have given their lives in wars on behalf of the country.

You're begging the question here, by slipping in the assumption that these wars are "on behalf of the country," rather than on behalf of the executive (e.g. president), on behalf of some vested interest, or just colossal f*-ups. To repeat what the author said,

"If a death is just a tragedy... [y]ou have to acknowledge that yes, really, ... thousands of people -- even the Good Guy's soldiers! -- might be dying for no good reason at all."

Comment author: ksvanhorn 10 June 2012 03:56:48AM 7 points [-]

Learning about useful models helps people stop anthropomorphizing human society, the economy, or government. The last is particularly salient: I think most people occasionally slip into assuming that, say, the United States government can be successfully modelled as a single agent in order to explain most of its "actions".

As an interesting (to me, at least) aside, Gene Sharp's research on nonviolent resistance indicates that successful nonviolent resistance invariably involves taking to heart this little idea -- that governments are not single agents but systems of many agents pursuing their own ends -- and exploiting it to the max.

Comment author: DanielLC 01 April 2012 07:13:28PM 0 points [-]

Actually, mere possibilities can make a difference... if you have effects that propagate backwards in time.

It still has to happen. It might happen in the future instead of the past, but it still has to happen.

Comment author: ksvanhorn 01 May 2012 03:45:08AM 1 point [-]

No, it doesn't have to happen. Consider the Elitzur-Vaidman bomb tester. The outcome depends on whether or not the bomb could have exploded, regardless of whether or not it actually does. You might object that in the Many Worlds Interpretation of quantum mechanics both happen, but the situation can equally well be described using Cramer's Transactional Interpretation of quantum mechanics, which involves waves that propagate backwards in time, and in which only one of the two possibilities (explode or don't explode) occurs. Whether MWI or TI or some other interpretation is the correct one, this demonstrates that backward-in-time signalling allows a "mere possibility", that does not actually occur, to have measurable effects.
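To make the bomb-tester arithmetic concrete, here's a quick sketch of the standard Mach-Zehnder setup. The beam-splitter phase convention is one common choice, and the code is my own illustration rather than anything from the sources above:

```python
from math import sqrt

# Amplitudes for a single photon, tracked as a pair (upper arm, lower arm).
def beam_splitter(a, b):
    """50/50 beam splitter: mixes the two arms with a 90-degree phase shift."""
    s = 1 / sqrt(2)
    return (s * (a + 1j * b), s * (1j * a + b))

after_bs1 = beam_splitter(1, 0)       # photon enters the upper port

# Dud bomb: the lower arm is undisturbed, so the arms interfere at BS2
out_dud = beam_splitter(*after_bs1)
p_dark_dud = abs(out_dud[0]) ** 2     # detector D on the "dark" port: 0.0

# Live bomb: it measures which arm the photon took
p_explode = abs(after_bs1[1]) ** 2    # photon found in the bomb's arm: ~0.5
# Otherwise the photon collapses to the upper arm and reaches BS2 alone
out_live = beam_splitter(1, 0)
p_dark_live = (1 - p_explode) * abs(out_live[0]) ** 2   # ~0.25
```

Detector D never fires for a dud, so when D does fire you've learned the bomb was live without it ever exploding -- the "mere possibility" of an explosion did the work.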

Comment author: gwern 29 April 2012 11:11:05PM *  6 points [-]

Jaynes died in 1998. How did he never hear of MWI? I'd heard of MWI in 1998, and I was just a kid.

EDIT: Google Books turns up the following snippet in The Many Worlds of Hugh Everett III:

All possibilities 'actually realized,' with corresp. observer states.15 In May 1957, Everett wrote a critical letter to ET Jaynes, a physicist at Stanford University who was pioneering the use of von Neumann-Shannontype information...

I have been unable to find any copies online, Amazon wants $30 for any copy, and neither my local library nor university nor county catalog hold it. Man!

EDITEDIT: Google Books gives more access to The Everett Interpretation of Quantum Mechanics: Collected Works 1955-1980, which gives an entire chapter 18 to 'Correspondence: Everett and Jaynes (1957)': http://books.google.com/books?id=dowpli7i6TgC&pg=PA261&dq=jaynes+everett&hl=en&sa=X&ei=N9CdT9PSIcLOgAf-3vTxDg&ved=0CDYQ6AEwAQ#v=onepage&q&f=false

I'm not sure what they are discussing, and many-worlds doesn't seem to come up (at least under that name), and most of the chapter is inaccessible, but it's clear from the chapter summary that Jaynes doesn't think much of Everett's position.

Comment author: ksvanhorn 30 April 2012 02:09:02AM 0 points [-]

Heck, I'd heard of MWI in the mid-70's, and I was just a kid.

Comment author: wedrifid 14 March 2012 03:36:58AM 1 point [-]

a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.

A simple expected utility maximization does. A CDT decision doesn't. Formally specifying a maximization algorithm that behaves like CDT is, from what I understand, less simple than making it follow UDT.

Comment author: ksvanhorn 14 March 2012 05:03:23AM 0 points [-]

If all we need to do is maximize expected utility, then where is the need for an "advanced" decision theory?

From Wikipedia: "Causal decision theory is a school of thought within decision theory which maintains that the expected utility of actions should be evaluated with respect to their potential causal consequences."

It seems to me that the source of the problem is in that phrase "causal consequences", and the confusion surrounding the whole notion of causality. The two problems mentioned in the article are hard to fit within standard notions of causality.

It's worth mentioning that you can turn Pearl's causal nets into plain old Bayesian networks by explicitly modeling the notion of an intervention. (Pearl himself mentions this in his book.) You just have to add some additional variables and their effects; this allows you to incorporate the information contained in your causal intuitions. This suggests to me that causality isn't really a fundamental concept, and that causality conundrums result from failing to include all the relevant information in your model.
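Here's a toy illustration of that point -- the model is my own construction, not Pearl's. A confounder U drives both X and Y, X has no real effect on Y, and an explicit intervention variable switches X's mechanism. Conditioning on X=1 and intervening to set X=1 then give different answers, all inside an ordinary joint distribution:

```python
from itertools import product

P_U = {0: 0.5, 1: 0.5}   # the confounder is a fair coin

def p_x(x, u, i):
    """X's mechanism, with an explicit intervention node i:
    i is None  -> X just copies the confounder U (the natural mechanism);
    i in {0,1} -> X is forced to the value i, ignoring U."""
    if i is None:
        return 1.0 if x == u else 0.0
    return 1.0 if x == i else 0.0

def p_y(y, u):
    """Y depends only on the confounder U; X has no real effect."""
    return 1.0 if y == u else 0.0

def p_y1_given_x1(i):
    """P(Y=1 | X=1) in the joint distribution indexed by intervention i."""
    num = den = 0.0
    for u, y in product((0, 1), repeat=2):
        w = P_U[u] * p_x(1, u, i) * p_y(y, u)
        den += w
        if y == 1:
            num += w
    return num / den

print(p_y1_given_x1(None))  # observational: 1.0
print(p_y1_given_x1(1))     # interventional do(X=1): 0.5
```

The "do" operation is just ordinary conditioning in the augmented network, which is exactly the sense in which causality need not be treated as primitive.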

[The term "model" here just refers to the joint probability distribution you use to represent your state of information.]

Where I'm getting to with all of this is that if you model your information correctly, the difference between Causal Decision Theory and Evidential Decision Theory dissolves, and Newcomb's Paradox and the Cloned Prisoner's Dilemma are easily resolved.

I think I'm going to have to write this up as an article of my own to really explain myself...

Comment author: TheOtherDave 14 March 2012 03:09:43AM 0 points [-]

OK, ignore those examples for a second, and ignore the word "advanced."

The OP is drawing a distinction between CDT, which he claims fails in situations where competing agents can predict one another's behavior to varying degrees, and other decision theories, which don't fail. If he's wrong in that claim, then articulating why would be helpful.

If, instead, he's right in that claim, then I don't see what's useless about theories that don't fail in that situation. At least, it certainly seems to me that competing agents predicting one another's behavior is something that happens all the time in the real world. Does it not seem that way to you?

Comment author: ksvanhorn 14 March 2012 04:40:22AM 0 points [-]

But the basic assumption of standard game theory, which I presume he means to include in CDT, is that the agents can predict each other's behavior -- it is assumed that each will make the best move they possibly can.

I don't think that predicting behavior is the fundamental distinction here. Game theory is all about dealing with intelligent actors who are trying to anticipate your own choices. That's why the Nash equilibrium is generally a probabilistic strategy -- to make your move unpredictable.
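For instance, in matching pennies (payoffs below are the standard ones), the 50/50 mix is the equilibrium precisely because it leaves the opponent nothing to exploit -- a quick check:

```python
# Matching pennies payoffs for the row player; the column player gets the
# negation, so whatever is good for row is equally bad for column.
payoff = {('H', 'H'): 1, ('H', 'T'): -1, ('T', 'H'): -1, ('T', 'T'): 1}

def expected_payoff(row_mix, col_move):
    """Row player's expected payoff against a fixed column move."""
    return sum(p * payoff[(m, col_move)] for m, p in row_mix.items())

mix = {'H': 0.5, 'T': 0.5}
print(expected_payoff(mix, 'H'))  # 0.0
print(expected_payoff(mix, 'T'))  # 0.0
# Column's payoff is the negation, so column is indifferent between H and T
# and can't exploit the mix; any deterministic row strategy would lose 1.
```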

Comment author: orthonormal 13 March 2012 10:01:11PM *  0 points [-]

Of course, in these two problems we know which causal links to draw. They were written to be simple enough. The trick is to have a general theory that draws the right links here without drawing wrong links in other problems, and which is formalizable so that it can answer problems more complicated than common sense can handle.

Among human beings, the relevant distinction is between decisions made before or after the other agent becomes aware of your decision -- and you can certainly come up with examples where mutual ignorance happens.

Finally, situations with iterated moves can be decided differently by different decision theories as well: consider Newcomb's Problem where the big box is transparent as well! A CDT will always find the big box empty, and two-box; a UDT/ADT will always find the big box full, and one-box. (TDT might two-box in that case, actually.)

Comment author: ksvanhorn 14 March 2012 02:33:55AM 0 points [-]

Of course, in these two problems we know which causal links to draw. [...] The trick is to have a general theory that draws the right links here without drawing wrong links in other problems,

If you don't know that Omega's decision depends on yours, or that the other player in a Prisoner's Dilemma is your mental clone, then no theory can help you make the right choice; you lack the crucial piece of information. If you do know this information, then simply cranking through standard maximization of expected utility gives you the right answer.

Among human beings, the relevant distinction is between decisions made before or after the other agent becomes aware of your decision

No, the relevant distinction is whether or not your decision is relevant to predicting (postdicting?) the other agent's decision. The cheat in Newcomb's Problem and the PD-with-a-clone problem is this:

  • you create an unusual situation where X's decision is clearly relevant to predicting Y's decision, even though X's decision does not precede Y's,

  • then you insist that X must pretend that there is no connection, even though he knows better, due to the lack of temporal precedence.

Let's take a look at what happens in Newcomb's problem if we just grind through the math. We have

P(box 2 has $1 million | you choose to take both boxes) = 0

P(box 2 has $1 million | you choose to take only the second box) = 1

E[money gained | you choose to take both boxes] = $1000 + 0 * $1e6 = $1000

E[money gained | you choose to take only the second box] = 1 * $1e6 = $1000000

So where's the problem?
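The same grind-through-the-math generalizes to an imperfect predictor; here's a sketch (the accuracy parameter and the break-even comparison are my own additions, not from the original problem):

```python
def expected_gain(choice, p_correct):
    """Expected money for each choice, conditioning on the predictor's
    accuracy: p_correct = P(prediction matches your actual choice)."""
    if choice == 'two-box':
        # Box 2 holds $1e6 only if the predictor wrongly expected one-boxing
        return 1000 + (1 - p_correct) * 1_000_000
    else:  # 'one-box': you forgo the visible $1000
        return p_correct * 1_000_000

for p in (1.0, 0.9, 0.6):
    print(p, expected_gain('one-box', p), expected_gain('two-box', p))
# One-boxing wins whenever p_correct exceeds the break-even point near 0.5005,
# i.e. for any predictor even slightly better than chance.
```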

Comment author: orthonormal 13 March 2012 09:54:15PM 2 points [-]

They're no more artificial than the rest of Game Theory -- no human being has ever known their exact payoffs for consequences in terms of utility, either. Like I said, there may be a good deal of advanced-decision-theory-structure in the way people subconsciously decide to trust one another given partial information, and that's something that CDT analysis would treat as irrational even when beneficial.

One bit of relevance is that "rational" has been wrongly conflated with strategies akin to defecting in the Prisoner's Dilemma, or being unable to genuinely promise anything with high enough stakes, and advanced decision theories are the key to seeing that the rational ideal doesn't fail like that.

Comment author: ksvanhorn 14 March 2012 02:09:25AM 0 points [-]

They're no more artificial than the rest of Game Theory-

That's an invalid analogy. We use mathematical models that we know are idealized approximations to reality all the time... but they are intended to be approximations of actually encountered circumstances. The examples given in the article have no relevance to any circumstance any human being has ever encountered.

there may be a good deal of advanced-decision-theory-structure in the way people subconsciously decide to trust one another given partial information, and that's something that CDT analysis would treat as irrational even when beneficial.

That doesn't follow from anything said in the article. Care to explain further?

One bit of relevance is that "rational" has been wrongly conflated with strategies akin to defecting in the Prisoner's Dilemma,

Defecting is the right thing to do in the Prisoner's Dilemma itself; it is only when you modify the conditions in some way (implicitly changing the payoffs, or having the other player's decision depend on yours) that the best decision changes. In your example of the mental clone, a simple expected-utility maximization gives you the right answer, assuming you know that the other player will make the same move that you do.
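Concretely, with standard PD payoffs (the specific numbers are my own illustrative choice), plain expected-utility maximization conditioned on "the other player plays what I play" already favors cooperation:

```python
# Standard PD payoffs as negated years in prison, so bigger is better.
payoff = {('C', 'C'): -1, ('C', 'D'): -10, ('D', 'C'): 0, ('D', 'D'): -5}

def eu(my_move, p_same):
    """Expected utility when the other player copies your move w.p. p_same."""
    other_same = my_move
    other_diff = 'D' if my_move == 'C' else 'C'
    return (p_same * payoff[(my_move, other_same)]
            + (1 - p_same) * payoff[(my_move, other_diff)])

# Against a mental clone (p_same = 1), cooperation wins:
print(eu('C', 1.0), eu('D', 1.0))   # -1.0  -5.0
# Against an uncorrelated 50/50 opponent, defection wins, as in the plain PD:
print(eu('C', 0.5), eu('D', 0.5))   # -5.5  -2.5
```

The only thing that changed between the two cases is the conditional probability of the other player's move given yours -- no new decision theory, just different information in the model.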

Comment author: TheOtherDave 13 March 2012 09:28:59PM 0 points [-]

If your goal is to figure out what to have for breakfast, not much relevance at all.
If your goal is to program an automated decision-making system to figure out what breakfast supplies to make available to the population of the West Coast of the U.S., perhaps quite a lot.
If your goal is to program an automated decision-making system to figure out how to optimize all available resources for the maximum benefit of humanity, perhaps even more.

There are lots of groups represented on LW, with different perceived needs. Some are primarily interested in self-help threads, others primarily interested in academic decision-theory threads, and many others. Easiest is to ignore threads that don't interest you.

Comment author: ksvanhorn 14 March 2012 01:29:52AM 0 points [-]

If your goal is to program an automated decision-making system to figure out what breakfast supplies to make available to the population of the West Coast of the U.S., perhaps quite a lot.

This example has nothing like the character of the one-box/two-box problem or the PD-with-mental-clone problem described in the article. Why should it require an "advanced" decision theory? Because people's consumption will respond to the supplies made available? But standard game theory can handle that.

There are lots of groups represented on LW, with different perceived needs. [...]Easiest is to ignore threads that don't interest you.

It's not that I'm not interested; it's that I'm puzzled as to what possible use these "advanced" decision theories can ever have to anyone.

Comment author: ksvanhorn 13 March 2012 07:13:02PM 0 points [-]

Are you sure that you need an advanced decision theory to handle the one-box/two-box problem, or the PD-with-mental-clone problem? You write that

a CDT agent assumes that X's decision is independent from the simultaneous decisions of the Ys- that is, X could decide one way or another and everyone else's decisions would stay the same.

Well, that's a common situation analyzed in game theory, but it's not essential to CDT. Consider playing a game of chess: your choice clearly affects the choice of your opponent. Or consider the decision of whether to punch a 6'5", 250 lb. muscle-man who has just insulted you -- your choice again has a strong influence on his choice of action. CDT is adequate for analyzing both of these situations.

It is true that in my two examples the other agent's choice is made after X's choice, rather than being simultaneous with his. But of what relevance is the stipulation of simultaneity? Its only relevance is that it leads one to assume that the other decisions are independent of X's decision! That is, the root of the difficulty is simply that you're analyzing the problem using an assumption that you know to be false!

It seems to me that you can analyze the one-box/two-box problem or the PD-with-a-mental-clone problem perfectly well using CDT; you just have to use the right causal graph. The causal graph needs an arc from your decision to Omega's prediction for the first problem, and an arc from your decision to the clone's decision in the second problem. Then you do the usual maximization of expected utility.
