
In response to Decision Theory FAQ
Comment author: incogn 28 February 2013 05:33:54PM *  9 points [-]

I don't really think Newcomb's problem or any of its variations belongs here. Newcomb's problem is not a decision-theory problem; the real difficulty is translating the underspecified English into a payoff matrix.

The ambiguity comes from the combination of two claims: (a) Omega is a perfect predictor, and (b) the subject is allowed to choose after Omega has made its prediction. Either these two are inconsistent, or they necessitate further unstated assumptions such as backwards causality.

First, let us assume (a) but not (b), which can be formulated as follows: Omega, a computer engineer, can read your code and test-run it as many times as it likes in advance. You must submit (simple, unobfuscated) code which chooses either to one-box or to two-box. The contents of the boxes will depend on Omega's prediction of your code's choice. Do you submit one-boxing or two-boxing code?

Second, let us assume (b) but not (a), which can be formulated as follows: Omega has subjected you to the Newcomb setup, but because of a bug in its code, its prediction is based on someone else's choice, which has no correlation with yours whatsoever. Do you one-box or two-box?

Both of these formulations translate straightforwardly into payoff matrices, and any sensible decision theory you throw at them gives the correct solution. The paradox disappears once the ambiguity between the two possibilities above is removed. As far as I can see, all disagreement between one-boxers and two-boxers is simply a matter of one-boxers choosing the first interpretation and two-boxers choosing the second. If so, Newcomb's paradox is not so much interesting as poorly specified. The supposed superiority of TDT over CDT either relies on the paradox not reducing to either of the above, or forces CDT, by fiat, to work with the wrong payoff matrices.
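
To make this concrete, here is a minimal sketch (mine, not incogn's, and assuming the standard $1,000 / $1,000,000 Newcomb payoffs, which the comment does not restate) of the two payoff matrices and the value each interpretation assigns to each action:

```python
# A minimal sketch, assuming the standard Newcomb payoffs (box A: $1,000;
# box B: $1,000,000 if Omega predicted one-boxing, else $0).

PAYOFF = {  # (Omega's prediction, your action) -> dollars you receive
    ("one-box", "one-box"): 1_000_000,
    ("one-box", "two-box"): 1_001_000,
    ("two-box", "one-box"): 0,
    ("two-box", "two-box"): 1_000,
}

def value_interpretation_a(action):
    """(a) but not (b): Omega's prediction always equals your code's choice."""
    return PAYOFF[(action, action)]

def value_interpretation_b(action, p_predicted_one_box=0.5):
    """(b) but not (a): the prediction is uncorrelated with your choice."""
    return (p_predicted_one_box * PAYOFF[("one-box", action)]
            + (1 - p_predicted_one_box) * PAYOFF[("two-box", action)])

for action in ("one-box", "two-box"):
    print(action, value_interpretation_a(action), value_interpretation_b(action))
# Interpretation (a): one-boxing gets 1,000,000 vs 1,000, so one-box.
# Interpretation (b): two-boxing is exactly $1,000 better whatever the
# prediction probability, so two-box. Neither matrix is paradoxical.
```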

I would be interested to see an unambiguous and nontrivial formulation of the paradox.

Some quick and messy addenda:

  • Allowing Omega to make its prediction by time travel directly contradicts the stipulation that "box B contains either $0 or $1,000,000 before the game begins, and once the game begins even the Predictor is powerless to change the contents of the boxes." It also obviously makes one-boxing the correct choice.
  • Allowing Omega to accurately simulate the subject reduces the problem to submitting code for Omega to evaluate; this is not exactly paradoxical, but then "the player is called upon to choose which boxes to take" actually means that the code runs again and returns its value, which clearly reduces to one-boxing.
  • Making Omega an imperfect predictor, with an accuracy of p < 1.0, simply creates a superposition of the first and second cases above, which still allows for straightforward analysis (see the sketch after this list).
  • Allowing unpredictable, probabilistic strategies violates the supposed predictive power of Omega, but again cleanly reduces to payoff matrices.
  • Finally, any number of variations, such as the psychopath button, are completely transparent once you decide between "choice is magical free will and stuff, which leads to pressing the button" and "the supposed choice is deterministic, so there is no choice to make, but code which does not press the button is clearly the healthiest."
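
Following up on the imperfect-predictor bullet above, a small sketch (same assumed payoffs as before, reading accuracy p as the probability that the boxes match your actual choice) of where the two strategies cross over:

```python
# Sketch of the p < 1.0 case, with the same assumed payoffs as above and
# accuracy p read as the probability that Omega's prediction matches your choice.

def ev_one_box(p):
    return p * 1_000_000

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 1_000

for p in (0.5, 0.5005, 0.75, 1.0):
    print(p, ev_one_box(p), ev_two_box(p))
# At p = 0.5 the prediction carries no information (the second case) and
# two-boxing wins; the strategies tie exactly at p = 0.5005; above that
# one-boxing wins, and at p = 1.0 (the first case) it wins by 999,000.
# Each value of p is a well-posed, unparadoxical decision problem.
```
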
In response to comment by incogn on Decision Theory FAQ
Comment author: Amanojack 03 March 2013 05:21:10AM *  2 points [-]

I agree; wherever there is paradox and endless debate, I have always found ambiguity in the initial posing of the question. An unorthodox mathematician named Norman Wildberger just released a new solution by unambiguously specifying what we know about Omega's predictive powers.

Comment author: ArisKatsaris 10 October 2012 09:47:08AM *  -1 points [-]

You said "truth=opinion", but to defend that you ask people not to do something true to you that isn't a matter of opinion, but to "give you a statement that does not resolve to opinion".

That's false reasoning. You didn't originally say "all true statements are produced by people's opinions", which is trivially true according to some definition of "opinions", since all statements people can make are by necessity produced by their minds.

But if e.g. you get in an accident and you lose your leg, nobody will have offered you an opinion, but nonetheless it'll be true that you'll be missing a leg. If you then say it's only a matter of opinion that you'll have lost your leg, I direct you to the well-known Monty Python sketch....

Your failure seems to arise from a very basic confusion between map and territory, where you think that because statements about reality derive from opinion, reality itself must derive from opinion. That doesn't follow at all. In truth: F(x) -> y, and Mind(Reality) -> "Statements about Reality". You haven't disproved the existence of x just by illustrating that every y can be mapped from some x through a function F.

Comment author: Amanojack 11 October 2012 08:42:25AM -2 points [-]

truth=opinion

I'd phrase it as "truth is subjective," but I agree in principle. Truth is a word for everyday talk, not for precise discourse. This may sound pretty off-the-wall, but stepping back for a second it should be no surprise that holding to everyday English phrasing would interfere with our efforts to speak precisely. I'll put this more specifically below.

But if e.g. you get in an accident and you lose your leg, nobody will have offered you an opinion, but nonetheless it'll be true that you'll be missing a leg.

This is actually begging the question in that you tacitly assume objective truth by using the standard English phrasing. That there is such a thing as an objective truth is precisely the conclusion you hope to establish. Unfortunately English all but forces you to start by assuming it. Again, carrying over the habits of everyday talk into a precise discussion is a recipe for confusion. We'll have to be a little more careful with phrasing to get at what's going on.

I'd first point out that when you say, "you lose your leg," you are speaking as if there is some omniscient narrator who knows "the objective facts of reality." Parent's point is exactly that there is no such omniscience. There are only individuals, including you and me, who have [subjective] experiences.

To get specific, we would have to identify who it is that witnesses the loss of Parent's leg. If you had said, "e.g. you find that you get in an accident and that you lose your leg," it would not be convincing to follow up with, "but nonetheless it'll be true that you'll be missing a leg."

We could all have witnessed (what we experience as) Parent losing a leg. It will be "true" for us (everyday talk), but none among us is an omniscient narrator qualified to state any more than what we experienced. Nowhere is any objective truth to be found. If we were to call it an "objective truth," we would simply be referencing the fact that all three of our experiences seem to match up. It would be at best an inter-subjective "truth," but this "truth" is a lie to someone else who thinks they see Parent with both legs still attached. To avoid confusion, we had best call it a subjective report or something. Hence, while perhaps not ideal, "truth=opinion" is not too bad a way to put it after all.

Comment author: shminux 10 May 2012 03:47:21PM *  1 point [-]

Anti-epistemology is a huge actual danger of actual life,

So it is, but I'm wondering if anyone can suggest a (possibly very exotic) real-life example where "epistemic rationality gives way to instrumental rationality"? Just to address the "hypothetical scenario" objection.

EDIT: Does the famous Keynes quote "Markets can remain irrational a lot longer than you and I can remain solvent." qualify?

Comment author: Amanojack 10 May 2012 06:49:01PM 3 points [-]

Any time you have a bias you cannot fully compensate for, there is a potential benefit to putting instrumental rationality above epistemic.

One fear I was unable to overcome for many years was that of approaching groups of people. I tried all sorts of things, but the best piece of advice turned out to be: "Think they'll like you." Simply believing that eliminates the fear and aids in my social goals, even though it sometimes proves to have been a false belief, especially with regard to my initial reception. Believing that only 3 out of 4 groups will like or welcome me initially and 1 will rebuff me, even though this may be the case, has not been as useful as believing that they'll all like me.

Torture Simulated with Flipbooks

9 Amanojack 26 May 2011 01:00AM

What if the brain of the person you most care about were scanned, and the entirety of that person's mind and utility function at this moment were printed out on paper, followed by several more "clock ticks" of their mind, its states changing exactly as they would if the person were being horribly tortured, all printed into a gigantic book? And then the book were flipped through, over and over again. Fl-l-l-l-liiiiip! Fl-l-l-l-liiiiip!

Would this count as simulated torture? If so, would you care about stopping it, or is it different from computer-simulated torture?

Comment author: Peterdjones 25 May 2011 11:25:29PM 0 points [-]

As for this collectivism, though, I don't go for it. There is no way to know another's utility function, no way to compare utility functions among people, etc. other than subjectively.

That's very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it's also been argued that introspection (if that is what you mean by "subjectively") is not a reliable guide to motivation.

Comment author: Amanojack 26 May 2011 12:42:29AM 0 points [-]

This is the whole demonstrated-preference thing. I don't buy it myself, but that's a debate for another time. What I mean by "subjectively" is that I will value one person's life more than another's, or I could think that I want that $1,000,000 more than a rich person wants it, but that's all just in my head. Comparing utility functions and working from demonstrated preference is usually - not always - a precursor to some kind of authoritarian scheme. I can't say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.

Comment author: ArisKatsaris 26 May 2011 12:25:29AM 0 points [-]

I'm getting a bad vibe here, and no longer feel we're having the same conversation

"Person or group that decides"? Who said anything about anyone deciding anything? And my point was that this perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody "decides", or everyone does. And if they don't reach the same decision, then there's no single objective morality -- but even i so perhaps there's a limited set of coherent metaethical positions, like two or three of them.

I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas.

I think my post was inspired more by TDT solutions to the Prisoner's Dilemma and Newcomb's problem, a decision theory that takes into account copies/simulations of itself, or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies.

I imagined systems that are not wholly copied, but rather share just the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, knowing that other such systems would similarly modify themselves.
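
For what it's worth, here is a toy sketch (my own illustration of the copy-coordination idea being described, not a statement of the actual proposal) of how an agent reasons when it knows an exact copy will output the same action in a Prisoner's Dilemma:

```python
# Toy illustration: deciding in blind coordination with an exact copy.
# Since the copy runs the same decision procedure, only the symmetric
# outcomes (C, C) and (D, D) are reachable, so the agent compares just those.

PD_PAYOFF = {  # (my action, copy's action) -> my payoff, standard PD ordering
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def choose_with_mirrored_copy():
    # Whatever this function returns, the copy returns too.
    return max(("C", "D"), key=lambda a: PD_PAYOFF[(a, a)])

def choose_ignoring_the_copy(copy_action):
    # An agent that treats the copy's action as fixed defects either way.
    return max(("C", "D"), key=lambda a: PD_PAYOFF[(a, copy_action)])

print(choose_with_mirrored_copy())    # -> "C": mutual cooperation beats mutual defection
print(choose_ignoring_the_copy("C"))  # -> "D"
print(choose_ignoring_the_copy("D"))  # -> "D"
```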

Comment author: Amanojack 26 May 2011 12:37:48AM 0 points [-]

You're right, I think I'm confused about what you were talking about, or I inferred too much. I'm not really following at this point either.

One thing, though, is that you're using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God's will, as a way to get along with others, etc. That'll tend to cause some confusion. A good heuristic is, "Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it)."

Comment author: Peterdjones 25 May 2011 11:15:04PM *  1 point [-]

I'll just decide not to follow the advice, or I'll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don't need to use the word "true" or any equivalent to do that. I can just say it didn't work.

Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent -- showing that you have dispensed with the concept -- is harder. Why didn't it work? You're going to have to paraphrase "Because it wasn't true" or refuse to answer.

Comment author: Amanojack 26 May 2011 12:29:29AM *  -1 points [-]

The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It's impossible to show you've dispensed with any concept, except to show that it isn't useful for what you're doing. That is what I've done. I'm non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.

Comment author: TimFreeman 25 May 2011 05:30:46PM 0 points [-]

What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, about which we cannot reasonably doubt.

If a traditional foundationalist believes that beliefs are justified by sense-experience, he's a justificationalist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.

I had to look it up. It is apparently the position that the mind is a result of both what is going on inside the subject and outside the subject. Some of them seem to be concerned about what beliefs mean, and others seem to carefully avoid using the word "belief". In the OP I was more interested in whether the beliefs accurately predict sensory experience. So far as I can tell, externalism says we don't have a mind that can be considered as a separate object, so we don't know things, so I expect it to have little to say about how we know what we know. Can you explain why you brought it up?

I don't mean to imply that either of these is correct, but it seems that if one is going to attempt to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you've partitioned the space of reasonable theories.

I don't see any way to be sure of that. Maybe some teenage boy sitting alone in his bedroom in Iowa figured out something new half an hour ago; I would have no way to know. Given the text above, do you think there are alternatives that are not covered?

Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.

This example seems anomalous. If there exists some H such that, if P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing in it for prudent reasons. But these don’t really bear on what the epistemically rational level of belief is (Assuming remaining epistemically rational is not part of formal epistemic rationality).

Furthermore, if you adopted a policy of never raising P(H) above 0.9, it’d be just like you were stuck with P(H) < 0.9 !

The point is that if a belief will prevent you from considering alternatives, that is a true and relevant statement about the belief that you should know when choosing whether to adopt it. The point is not that you shouldn't adopt it. Bayes' rule is probably one of those beliefs, for example.
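
As a toy illustration of that last point (numbers mine, purely for illustration), compare an ordinary Bayesian updater with one whose belief locks in once it crosses 0.9:

```python
# Toy comparison, with made-up likelihoods: an ordinary Bayesian updater
# versus one that can no longer update once P(H) exceeds 0.9.

def bayes_update(p, like_h, like_not_h):
    """One update of P(H) on a piece of evidence with the given likelihoods."""
    return (p * like_h) / (p * like_h + (1 - p) * like_not_h)

def locked_update(p, like_h, like_not_h):
    """Same update, except the belief refuses to move once P(H) > 0.9."""
    return p if p > 0.9 else bayes_update(p, like_h, like_not_h)

p_free = p_locked = 0.5
evidence = [(0.8, 0.2)] * 3 + [(0.1, 0.9)] * 5  # favours H at first, then strongly against
for like_h, like_not_h in evidence:
    p_free = bayes_update(p_free, like_h, like_not_h)
    p_locked = locked_update(p_locked, like_h, like_not_h)

print(round(p_free, 3), round(p_locked, 3))  # ~0.001 vs ~0.941
# The ordinary updater recovers when the evidence turns against H; the locked
# one stays above 0.9 forever. Knowing a candidate belief has this property
# is relevant information when deciding whether to adopt it.
```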

Without a constraining external metric, there are many consistent sets [of preferences], and the only criticism you can ultimately bring to bear is one of inconsistency.

I presently believe there are many consistent sets of preferences, and maybe you do too. If that's true, we should find a way to live with it, and the OP is proposing such a way.

I don't know what the word "ultimately" means there. If I leave it out, your statement is obviously false -- I listed a bunch of criticisms of preferences in the OP. What did you mean?

Comment author: Amanojack 25 May 2011 11:09:22PM 0 points [-]

How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

I don't know what exactly "justify" is supposed to mean, but I'll interpret it as "show to be useful for helping me win." In that case, it's simply that certain types of sense-experience seem to have been a reliable guide for my actions in the past, for helping me win. That's all.

To think of it in terms of assumptions and conclusions is to stay in the world of true/false or justified/unjustified, where we can only go in circles because we are putting the cart before the horse. The verbal concepts of "true" and "justified" probably originated as a way to help people win, not as ends to be pursued for their own sake. But since they were almost always correlated with winning, they became ends pursued for their own sake - essential ones! In the end, if you dissolve "truth" it just ends up meaning something like "seemingly reliable guidepost for my actions."

Comment author: endoself 25 May 2011 07:44:12PM 1 point [-]

Are you losing sleep over the daily deaths in Iraq? Are most LWers? . . . If we cared as much as we signal we do, no one would be able go to work, or post on LW. We'd all be too grief-stricken.

That is exactly what I was talking about when I said "There's a difference between mental distress and action-motivating desire." Utility functions are about choices, not feelings, so I assumed that, in a discussion about utility, we would be using the word 'care' (as in "If we cared as much as we signal we do") to refer to motives for action, not mental distress. If this isn't clear, I'm trying to refer to the same ideas discussed here.

And it also isn't immediately clear that anyone would really want their utility function to be unbounded (unless I'm misinterpreting the term).

It does not make sense to speak of what someone wants their utility function to be; utility functions just describe actual preferences. Someone's utility function is unbounded if and only if there are consequences with arbitrarily high utility differences: for every consequence, you can identify one that is over twice as good (relative to some zero point, which can be chosen arbitrarily; this doesn't really matter if you're not familiar with the topic, it just corresponds to the fact that if every consequence were 1 utilon better, you would make the same choices because relative utilities would not have changed). Whether a utility function has this property is important in many circumstances, and I consider it an open problem whether humans' utility functions are unbounded, though some would probably disagree and I don't know what science doesn't know.
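
In symbols (my paraphrase of the definitions above, not part of the original comment):

```latex
% Unboundedness and the irrelevance of the zero point, paraphrased.
U \text{ is unbounded} \iff \forall M > 0 \;\; \exists\, A, B : \; U(A) - U(B) > M.
\qquad
\text{For any } a > 0 \text{ and any } b, \; U'(x) = a\,U(x) + b \text{ ranks all choices identically to } U,
\text{ so in particular } U' = U + 1 \text{ changes no decision.}
```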

Comment author: Amanojack 25 May 2011 10:52:21PM 0 points [-]

Is this basically saying that you can tell someone else's utility function by demonstrated preference? It sounds a lot like that.

Comment author: TimFreeman 25 May 2011 05:41:07PM 0 points [-]

However, if my seeing one black swan doesn't justify my belief that there is at least one black swan, how can I refute "all swans are white"?

Refuting something is justifying that it is false. The point of the OP is that you can't justify anything, so it's claiming that you can't refute "all swans are white". A black swan is simply a criticism of the statement "all swans are white". You still have a choice -- you can see the black swan and reject "all swans are white", or you can quibble with the evidence in a large number of ways which I'm sure you know of too and keep on believing "all swans are white". People really do that; searching Google for "Rapture schedule" will pull up a prominent and current example.

Comment author: Amanojack 25 May 2011 10:46:53PM *  0 points [-]

Why not just phrase it in terms of utility? "Justification" can mean too many different things.

Seeing a black swan diminishes (and for certain applications, destroys) the usefulness of the belief that all swans are white. This seems a lot simpler.

Putting it in terms of beliefs paying rent in anticipated experiences, the belief "all swans are white" told me to anticipate that if I knew there was a black animal perched on my shoulder, it could not be a swan. Now that belief isn't as reliable a guidepost. If black swans are really rare I could probably get by with it for most applications and still use it to win at life most of the time, but in some cases it will steer me wrong - that is, cause me to lose.
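
With some purely illustrative numbers (mine, not part of the comment), the "less reliable guidepost" point can be put in terms of anticipated experience:

```python
# Purely illustrative numbers: how rare black swans weaken the anticipation
# "a black animal perched on my shoulder cannot be a swan".

p_swan = 0.01                # assumed prior that the animal is a swan at all
p_black_given_swan = 0.001   # assumed: black swans exist but are very rare
p_black_given_other = 0.3    # assumed fraction of non-swan animals that are black

p_swan_given_black = (p_black_given_swan * p_swan) / (
    p_black_given_swan * p_swan + p_black_given_other * (1 - p_swan))

print(p_swan_given_black)    # ~3e-5
# "It is not a swan" still wins almost every bet, so the belief stays useful
# for most purposes, but it is no longer a certainty: in rare cases it loses.
```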

So can't this all be better phrased in more established LW terms?
