Comment author: shminux 29 June 2015 03:07:20PM *  -2 points [-]

Your "Newcomb-like" problem isn't. In the original Newcomb problem there is no situation where both boxes contain a reward, yet the naive CDT makes you act as though there were. In your setup there is such a possibility, so 2-boxing is the strictly better strategy. Any decision theory better make you 2-box.

EDIT: Thanks to those who pointed out my brain fart. Of course both boxes contain a reward in the one-boxing case. It just doesn't help you any. I maintain that this is not a Newcomb-like problem, since here 2-boxing is a strictly better strategy. No one would one-box if they could help it.

Comment author: Unknowns 29 June 2015 05:01:38PM 1 point [-]

Re: the edit. Two-boxing is strictly better from a causal decision theorist's point of view, but that is equally true here and in the original Newcomb.

But from a sensible point of view, rather than the causal theorist's point of view, one-boxing is better, because you get the million, both here and in the original Newcomb, just as in the AI case I posted in another comment.

Comment author: Khoth 29 June 2015 04:54:38PM -1 points [-]

There is a difference - in the gene case, there is a causal pathway, via brain chemistry or whatnot, from the gene to the decision. In the original Newcomb problem, Omega's prediction does not cause the decision.

Comment author: Unknowns 29 June 2015 04:57:05PM 3 points [-]

Even in the original Newcomb's problem there is presumably some causal pathway from your brain to your decision. Otherwise Omega wouldn't have a way to predict what you are going to do. And there is no relevant difference between "your brain" in one version and the "gene" in the other.

In neither case does Omega cause your decision; your brain causes it in both cases.

Comment author: Unknowns 29 June 2015 04:52:08PM 3 points [-]

The general mistake that many people are making here is to think that determinism makes a difference. It does not.

Let's say I am Omega. The players are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their own source code.

I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.

Note that determinism is irrelevant. If a program couldn't use a decision theory or couldn't make a choice just because it is a deterministic program, then no AI would ever work in the real world, and there would be no reason to expect people to work in the real world either.

Also note that the only good decision in these cases is to one-box, even though the programs are 100% deterministic.
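
To make this concrete, here is a minimal sketch of the setup I have in mind. It is only a toy model of my own: the players are plain Python functions, and "examining the source code" is idealized as Omega simply simulating the deterministic player in advance.

```python
# Toy model of the deterministic Omega game described above (my own sketch,
# with the "source code examination" simplified to simulating the player).

MILLION = 1_000_000
THOUSAND = 1_000

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def omega_fills_box(player):
    # Omega inspects (here: simulates) the deterministic player and
    # puts the million in the opaque box only if it predicts one-boxing.
    return player() == "one-box"

def payoff(player):
    box_b = MILLION if omega_fills_box(player) else 0
    choice = player()
    if choice == "one-box":
        return box_b
    return box_b + THOUSAND

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

Even though both players are fully deterministic, the one-boxing program walks away with the million and the two-boxing program does not.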

Comment author: philh 29 June 2015 04:22:11PM *  1 point [-]

I think we need to remember here the difference between logical influence and causal influence?

My genes can cause me to be inclined towards smoking, and my genes can cause me to get lesions. If I choose to smoke, not knowing my genes, then that's evidence for what my genes say, and it's evidence about whether I'll get lesions; but it doesn't actually causally influence the matter.

My genes can incline me towards one-boxing, and can incline Omega towards putting $1M in the box. If I choose to two-box despite my inclinations, then that provides me with evidence about what Omega did, but it doesn't causally influence the matter.

If I don't know which of two worlds I'm in, I can't increase the probability of one by saying "in world A, I'm more likely to do X than in world B, so I'm going to do X". If nothing else, if I thought that worked, then I would do it whatever world I was in, and it would no longer be true.

In standard Newcomb, my inclination to one-box actually does make me one-box. In this version, my inclination to one-box is just a node that you've labelled "inclination to one-box", and you've said that Omega cares about the node rather than about whether or not I one-box. But you're still permitting me to two-box, so that node might just as well be "inclination to smoke".
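
To make the evidence-versus-cause distinction concrete, here is a minimal worked sketch with made-up numbers (the thread gives none): assume the one-box gene has a 50% prior and correlates with the actual choice 99% of the time, and that Omega fills the box exactly when the gene is present.

```python
# Toy comparison of evidential vs. causal expected value in the genetic
# variant. All numbers are assumptions for illustration, not from the post.

p_gene = 0.5                # prior probability of the one-box gene
p_choice_given_gene = 0.99  # assumed P(one-box | one-box gene)

MILLION, THOUSAND = 1_000_000, 1_000

# Evidential view: my choice is evidence about my gene, hence about the box.
p_gene_given_onebox = (p_gene * p_choice_given_gene) / (
    p_gene * p_choice_given_gene + (1 - p_gene) * (1 - p_choice_given_gene))
p_gene_given_twobox = (p_gene * (1 - p_choice_given_gene)) / (
    p_gene * (1 - p_choice_given_gene) + (1 - p_gene) * p_choice_given_gene)

edt_onebox = p_gene_given_onebox * MILLION
edt_twobox = p_gene_given_twobox * MILLION + THOUSAND

# Causal view: intervening on my choice doesn't change my gene, so the
# probability of the million stays at the prior whichever act I pick.
cdt_onebox = p_gene * MILLION
cdt_twobox = p_gene * MILLION + THOUSAND

print(edt_onebox, edt_twobox)  # 990000.0 vs 11000.0  -> evidential view favors one-boxing
print(cdt_onebox, cdt_twobox)  # 500000.0 vs 501000.0 -> causal view favors two-boxing
```

The choice shifts the probabilities as evidence, but the causal calculation treats the gene (and hence the box) as fixed, which is exactly the disagreement being argued over here.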

Comment author: Unknowns 29 June 2015 04:37:45PM 2 points [-]

In the original Newcomb's problem, am I allowed to say "in the world with the million, I am more likely to one-box than in the world without, so I'm going to one-box"? If I thought this worked, then I would do it no matter what world I was in, and it would no longer be true...

Except that it is still true. I can definitely reason this way, and if I do, then of course I had the disposition to one-box, and of course Omega put the million there; because the disposition to one-box was the reason I wanted to reason this way.

And likewise, in the genetic variant, I can reason this way, and it will still work, because the one-boxing gene is responsible for me reasoning this way rather than another way.

Comment author: Khoth 29 June 2015 04:20:58PM -1 points [-]

I would one-box if I had the one-boxing gene, and two-box if I had the two-boxing gene. I don't know what decision-making theory I'm using, because the problem statement didn't specify how the gene works.

I don't really see the point of asking people with neither gene what they'd do.

Comment author: Unknowns 29 June 2015 04:33:55PM 2 points [-]

This is no different from responding to the original Newcomb's by saying "I would one-box if Omega put the million, and two-box if he didn't."

Both in the original Newcomb's problem and in this one you can use any decision theory you like.

Comment author: ike 29 June 2015 04:22:56PM *  0 points [-]

The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices.

OP here said (emphasis added)

> A study shows that *most* people

Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted from genes alone with perfect accuracy; if you stipulate that it can, my answer would be different.

> In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.

Wrong; it's perfectly possible to have the gene to one-box but two-box.

(If the facts were as stated in the OP, I'd actually expect conditioning on certain aspects of my decision-making processes to remove the correlation; that is, people who think similarly to me would show less correlation between gene and choice. If that prediction were stipulated away, my choice *might* change; it depends on exactly how that was formulated.)

Comment author: Unknowns 29 June 2015 04:32:27PM 1 point [-]

This is confusing the issue. I would guess that the OP wrote "most" because Newcomb's problem is sometimes put in such a way that the predictor is only right most of the time.

And in such cases, it is perfectly possible to remove the correlation in the same way that you say. If I know how Omega is deciding who is likely to one-box and who is likely to two-box, I can purposely do the opposite of what he expects me to do.

But if you want to solve the real problem, you have to solve it in the case of 100% correlation, both in the original Newcomb's problem and in this case.

Comment author: ike 29 June 2015 03:51:47PM -1 points [-]

Wait, you think I have the two-boxing gene? If that's the case, one-boxing won't help me; there's no causal link between my choice and which gene I have, unlike standard Newcomb, in which there is a causal link between my choice and the contents of the box, given TDT's definition of "causal link".

Comment author: Unknowns 29 June 2015 04:01:08PM 1 point [-]

Sure there is a link. The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices.

In the standard Newcomb, if you one-box, then you had the disposition to one-box, and Omega put the million.

In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.

Comment author: Manfred 29 June 2015 03:24:04PM *  4 points [-]

Hm, this is a really interesting idea.

The trouble is that it's tricky to apply a single decision theory to this problem, because by hypothesis, this gene actually changes which decision theory you use! If I'm a TDT agent, then this is good evidence I have the "TDT-agent gene," but in this problem I don't actually know whether the TDT-gene is the one-box gene or the two-box gene. If TDT leads to one-boxing (so that the TDT-gene is the one-box gene and the million is there), then it recommends two-boxing - but if it provably two-boxes, it is the "two-box gene" and gets the bad outcome. This is to some extent an "evil decision problem." Currently I'd one-box, based on some notion of resolving these sorts of problems through more UDT-ish proof-based reasoning (though it has some problems). Or in TDT-language, I'd be 'controlling' whether the TDT-gene was the two-box gene by picking the output of TDT.

However, this problem becomes a lot easier if most people are not actually using any formal reasoning, but are just doing whatever seems like a good idea at the time. Like, the sort of reasoning that leads to people actually smoking. If I'm dropped into this genetic Newcomb's problem, or into the smoking lesion problem, and I learn that almost all people in the data set I've seen were either bad at decision theory or didn't know the results of the data, then those people no longer have quite the same evidential impact on my current situation, and I can just smoke / two-box. It's only when those people and I are in symmetrical situations (similar information, similar decision-making processes) that I have to "listen" to them.

Comment author: Unknowns 29 June 2015 03:42:21PM 4 points [-]

Yes, all of this is basically correct. However, it is also basically the same in the original Newcomb, although somewhat more intuitive. In the original problem Omega decides whether or not to put the million depending on its estimate of what you will do, which likely depends on "what kind of person" you are, in some sense. And being this sort of person is also going to determine what kind of decision theory you use, just as the gene does in the genetic version. The original Newcomb is more intuitive, though, because we can more easily accept that "being such and such a kind of person" could make us use a certain decision theory than that a gene could do the same thing.

Even the point about other people knowing the results or using certain reasoning is the same. If you find an Omega in real life, but find out that none of the people tested so far were using any decision theory, just choosing impulsively, and that Omega was simply judging how they would choose impulsively, then you should take both boxes. It is only if you know that Omega tends to be right no matter what decision theory people are using that you should take only the one box.

Comment author: OrphanWilde 29 June 2015 02:53:23PM -2 points [-]

Anybody who one-boxes in the genetic-determinant version of Omega's problem is reversing the causal flow.

Comment author: Unknowns 29 June 2015 03:24:06PM 1 point [-]

Why? They one-box because they have the gene. So no reversal. Just as in the original Newcomb problem they choose to one-box because they were the sort of person who would do that.

Comment author: Unknowns 29 June 2015 03:21:56PM 3 points [-]

What are you talking about? In the original Newcomb problem both boxes contain a reward whenever Omega predicts that you are going to choose only one box.
