In response to Decision Theory FAQ
Comment author: incogn 28 February 2013 05:33:54PM *  9 points [-]

I don't really think Newcomb's problem or any of its variations belong in here. Newcomb's problem is not a decision theory problem, the real difficulty is translating the underspecified English into a payoff matrix.

The ambiguity comes from the combination of two claims: (a) Omega is a perfect predictor, and (b) the subject is allowed to choose after Omega has made its prediction. Either these two are inconsistent, or they necessitate further unstated assumptions such as backwards causality.

First, let us assume (a) but not (b), which can be formulated as follows: Omega, a computer engineer, can read your code and test run it as many times as he would like in advance. You must submit (simple, unobfuscated) code which either chooses to one- or two-box. The contents of the boxes will depend on Omega's prediction of your code's choice. Do you submit one- or two-boxing code?
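This first formulation can be made concrete with a toy sketch (my own formalization, not part of the original comment): Omega test-runs the submitted strategy, fills the boxes according to that prediction, and only then does the strategy run "for real".

```python
def omega_payoff(strategy):
    """strategy() returns 'one' or 'two'. Omega test-runs it first."""
    prediction = strategy()          # Omega's perfect advance test run
    box_b = 1_000_000 if prediction == 'one' else 0
    box_a = 1_000                    # box A always contains $1,000
    choice = strategy()              # the actual run, same deterministic code
    return box_b if choice == 'one' else box_a + box_b

print(omega_payoff(lambda: 'one'))   # 1000000
print(omega_payoff(lambda: 'two'))   # 1000
```

Since the prediction and the choice are the same code, one-boxing code strictly dominates, and nothing paradoxical remains.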

Second, let us assume (b) but not (a), which can be formulated as follows: Omega has subjected you to the Newcomb setup, but because of a bug in its code, its prediction is based on someone else's choice, which has no correlation with yours whatsoever. Do you one- or two-box?
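In this second formulation the box contents are fixed independently of your choice, so two-boxing dominates by exactly $1,000 whatever the (now irrelevant) prediction probability. A quick sketch of my own, with q as the chance the buggy Omega happened to predict one-boxing:

```python
def expected_payoff(choice, q):
    """q = probability the (uncorrelated) prediction was 'one-box'."""
    box_b = q * 1_000_000            # expected contents of box B
    return box_b if choice == 'one' else box_b + 1_000

# two-boxing beats one-boxing by $1,000 for every value of q
for q in (0.0, 0.25, 0.5, 1.0):
    assert expected_payoff('two', q) == expected_payoff('one', q) + 1_000
```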

Both of these formulations translate straightforwardly into payoff matrices, and any sort of sensible decision theory you throw at them gives the correct solution. The paradox disappears once the ambiguity between the two possibilities above is removed. As far as I can see, all disagreement between one-boxers and two-boxers is simply a matter of one-boxers choosing the first interpretation and two-boxers choosing the second. If so, Newcomb's paradox is not so much interesting as poorly specified. The supposed superiority of TDT over CDT either relies on the paradox not reducing to either of the above, or by fiat forces CDT to work with the wrong payoff matrices.

I would be interested to see an unambiguous and nontrivial formulation of the paradox.

Some quick and messy addenda:

  • Allowing Omega to do its prediction by time travel directly contradicts "box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes." Also, this obviously makes one-boxing the correct choice.
  • Allowing Omega to accurately simulate the subject reduces the problem to "submit code for Omega to evaluate"; this is not exactly paradoxical, but then "the player is called upon to choose which boxes to take" just means the code runs again and returns its expected value, which clearly reduces to one-boxing.
  • Making Omega an imperfect predictor, with an accuracy of p<1.0 simply creates a superposition of the first and second case above, which still allows for straightforward analysis.
  • Allowing unpredictable, probabilistic strategies violates the supposed predictive power of Omega, but again cleanly reduces to payoff matrices.
  • Finally, the many variations such as the psychopath button are completely transparent once you decide between "choice is magical free will and stuff, which leads to pressing the button" and "the supposed choice is deterministic and there is no choice to make, but code which does not press the button is clearly the healthiest."
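The imperfect-predictor case in the list above can be cashed out explicitly (my own arithmetic, not from the original comment): with accuracy p, the expected values are a mixture of the two clean cases, and the crossover sits at p = 1001/2000 = 0.5005.

```python
def ev(choice, p):
    """Expected payoff against a predictor with accuracy p."""
    if choice == 'one':
        return p * 1_000_000                 # full box B iff predicted correctly
    return 1_000 + (1 - p) * 1_000_000       # $1,000 plus box B iff mispredicted

assert ev('one', 0.6) > ev('two', 0.6)       # above 0.5005, one-boxing wins
assert ev('two', 0.5) > ev('one', 0.5)       # at or below it, two-boxing wins
```

So a straightforward expected-value analysis handles every accuracy level without any appeal to exotic decision theory.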
In response to comment by incogn on Decision Theory FAQ
Comment author: Amanojack 03 March 2013 05:21:10AM *  2 points [-]

I agree; wherever there is paradox and endless debate, I have always found ambiguity in the initial posing of the question. An unorthodox mathematician named Norman Wildberger just released a new solution by unambiguously specifying what we know about Omega's predictive powers.

Comment author: shminux 10 May 2012 03:47:21PM *  1 point [-]

Anti-epistemology is a huge actual danger of actual life,

So it is, but I'm wondering if anyone can suggest a (possibly very exotic) real-life example where "epistemic rationality gives way to instrumental rationality"? Just to address the "hypothetical scenario" objection.

EDIT: Does the famous Keynes quote "Markets can remain irrational a lot longer than you and I can remain solvent." qualify?

Comment author: Amanojack 10 May 2012 06:49:01PM 3 points [-]

Any time you have a bias you cannot fully compensate for, there is a potential benefit to putting instrumental rationality above epistemic.

One fear I was unable to overcome for many years was that of approaching groups of people. I tried all sorts of things, but the best piece of advice turned out to be: "Think they'll like you." Simply believing that eliminates the fear and aids in my social goals, even though it sometimes proves to have been a false belief, especially with regard to my initial reception. Believing that only 3 out of 4 groups will like or welcome me initially and 1 will rebuff me, even though this may be the case, has not been as useful as believing that they'll all like me.

Comment author: Jack 04 May 2012 07:10:26AM *  2 points [-]

In fact, Mises explains exactly why probability is in the mind in his works almost a century ago, and he's not even a mathematician.

Claiming Ludwig for the Bayesian camp is really strange and wrong. His mathematician brother Richard, from whom he takes his philosophy of probability, is literally the arch-frequentist of the 20th century.

And your quote has him taking Richard's exact position:

The present epistemological situation in the field of quantum mechanics would be correctly described by the statement: We know the various patterns according to which atoms behave and we know the proportion in which each of these patterns becomes actual. This would describe the state of our knowledge as an instance of class probability: We know all about the behavior of the whole class; about the behavior of the individual members of the class we know only that they are members.

When he says "class probability" he is specifically talking about this. ...

They do not lead to results that would tell us anything about the actual singular events.

Which is the precise opposite of the position of the subjectivist.

Comment author: Amanojack 04 May 2012 10:23:55AM 3 points [-]

I didn't say he was in the Bayesian camp, I said he had the Bayesian insight that probability is in the mind.

In the final quote he is simply saying that mathematical statements of probability merely summarize our state of knowledge; they do not add anything to it other than putting it in a more useful form. I don't see how this would be interpreted as going against subjectivism, especially when he clearly refers to probabilities being expressions of our ignorance.

Comment author: Bart119 03 May 2012 08:05:38PM 0 points [-]

Maybe setting the bounds of the problem would help some. I'm assuming:

  1. Some form of representative democracy as political context, in the absence of any better systems.

  2. A system of law protecting most property rights -- no arbitrary expropriations.

  3. Socialism no more extreme than in (say) postwar Scandinavian countries.

  4. Libertarianism no more extreme than (say) late 19th century USA.

  5. Regulated capitalism. The question is how much regulation or taxation.

Given those parameters, I don't need the Communist Manifesto or any radical anarchist works. North Korea, the USSR, pre-1980 China aren't so relevant.

If people disagree with any of those limits on the problem, I suppose just stating that would be of interest, perhaps with a link or two. I realize getting into arguments about such things could be counterproductive, but knowing of the existence of views outside of those bounds would be helpful.

Comment author: Amanojack 04 May 2012 08:34:49AM *  2 points [-]

I think you'll find it helpful to look at the extreme cases (totalitarian economic controls vs. complete laissez faire), so as to challenge the way you're framing the spectrum.

Also, politics and economics go hand in hand, economics being - in terms of what it is usually actually used for - the study of how political actions affect the economy. For example, David Friedman argues that courts would produce better rulings if they were not run as a monopoly, and that the same is true with laws and regulations themselves. So at the limit it is not easy to separate them.

Another example is the libertarian argument that pollution is largely enabled by weakened property rights, due to laws passed in the 19th century (in the US case) preventing torts against polluters, and by the basic fact that the government essentially owns the waters and airways. These types of arguments tend to undercut the whole divide between economics and politics.
