I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question related to why EDT is said not to work.

Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: if you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take boxes A and B)? Here's a causal diagram for the problem:

[Causal diagram: "two-boxing gene" → Omega's prediction → contents of box B; "two-boxing gene" → your decision]

Since Omega does not do much other than translate your genes into money under a box, it does not seem to hurt to leave it out:

[Simplified causal diagram: "two-boxing gene" → contents of box B; "two-boxing gene" → your decision]

I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box, am I wrong?)

Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p. 67) chewing gum problem? Chewing gum (or smoking) seems to be like taking box A to get the additional $1K, the two-boxing gene is like the CGTA gene, and the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:

[Causal diagram: CGTA → chewing gum; CGTA → throat abscess]

As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: the intuition is that neither smoking nor chewing gum gives the agent additional information about which gene they have.
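To make the disagreement concrete, here is a minimal sketch (mine, not part of the standard problem statements) of how EDT and CDT evaluate the genetic Newcomb problem. The 99%/1% correlation and the 50% prior are illustrative assumptions; only the $1M/$1K payoffs come from the problem.

```python
# Genetic Newcomb problem: illustrative expected values for EDT and CDT.
# Assumed numbers: the study suggests P(two-boxing gene | you two-box) = 0.99
# and P(two-boxing gene | you one-box) = 0.01; CDT uses a 50% prior on the gene.

P_GENE_GIVEN_TWO_BOX = 0.99
P_GENE_GIVEN_ONE_BOX = 0.01
P_GENE_PRIOR = 0.50

BOX_A = 1_000        # always in box A
BOX_B = 1_000_000    # in box B only if you lack the two-boxing gene

def edt_value(action: str) -> float:
    # EDT conditions on the action: choosing is evidence about your genome.
    p_gene = P_GENE_GIVEN_TWO_BOX if action == "two-box" else P_GENE_GIVEN_ONE_BOX
    return (1 - p_gene) * BOX_B + (BOX_A if action == "two-box" else 0)

def cdt_value(action: str) -> float:
    # CDT treats the action as an intervention: the gene, and hence box B, is fixed.
    return (1 - P_GENE_PRIOR) * BOX_B + (BOX_A if action == "two-box" else 0)

for action in ("one-box", "two-box"):
    print(action, "EDT:", edt_value(action), "CDT:", cdt_value(action))
# EDT prefers one-boxing (~$990,000 vs ~$11,000); for CDT two-boxing dominates.
```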


I think UDT reasoning would go like this (if translated to human terms). There are two types of mathematical multiverse, only one of which is real (i.e., logically consistent). You as a UDT agent get to choose which one. In the first one, UDT agents one-box in this Genetic Newcomb Problem (GNP), so the only genes that statistically correlate with two-boxing are those that create certain kinds of compulsions overriding deliberate decision making, or for other decision procedures that are not logically correlated with UDT. In the second type of mathematical multiverse, UDT agents two-box in GNP, so the list of genes that correlate with two-boxing also includes genes for UDT.

Which type of multiverse is better? It depends on how Omega chooses which gene to look at, which is not specified in the OP. To match the Medical Newcomb Problem as closely as possible, let's assume that in each world (e.g., Everett branch) of each multiverse, Omega picks a random gene to look at (from a list of all human genes), and puts $1M in box B for you if you don't have that gene. You live in a world where Omega happened to pick a gene that correlates with two-boxing. Under this assumption, the second type o...
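One way to see the point of the random-gene assumption (my illustration, with made-up numbers): if Omega samples a gene uniformly and fills box B whenever you lack it, then which action your decision theory outputs has no effect on how often box B is filled, so the two multiverses differ only by the $1K from box A.

```python
# Sketch of the "Omega picks a random gene" assumption: the $1M term does not
# depend on the policy, so two-boxing comes out ahead by exactly $1K.
import random

P_CARRY_SAMPLED_GENE = 0.3   # made-up chance of carrying whichever gene Omega samples

def play(two_box: bool, rng: random.Random) -> int:
    you_have_it = rng.random() < P_CARRY_SAMPLED_GENE   # independent of your choice
    payoff = 0 if you_have_it else 1_000_000
    return payoff + (1_000 if two_box else 0)

trials = 100_000
for two_box in (False, True):
    avg = sum(play(two_box, random.Random(i)) for i in range(trials)) / trials
    print("two-box" if two_box else "one-box", round(avg))
```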

4Caspar Oesterheld9y
Thank you for this elaborate response!

Why would Omega look at other human genes and not the two-boxing (correlated) gene(s) in any world? Maybe I'm overlooking something or did not describe the problem very well, but in the second multiverse UDT agents two-box, therefore UDT agents (probably) have the two-boxing gene and don't get the $1M. In the first multiverse, UDT agents one-box, therefore UDT agents (probably) don't have the two-boxing gene and do get the $1M. So, the first multiverse seems to be better than the second.

Yes, this is more or less the scenario I was trying to describe. Specifically, I wrote: So, it's part of the GNP that Omega has looked at the "two-boxing gene" or (more realistically perhaps) the "most common gene correlated with two-boxing".
4Wei Dai9y
I was trying to create a version of the problem that corresponds more closely to MNP, where the fact that a single gene correlates with both chewing gum and abscess is a coincidence, not the result of some process looking for genes correlated with chewing gum, and giving people with those genes abscesses.

Do you see that, assuming Omega worked the way I described, the number and distribution of boxes containing $1M is exactly the same in the two multiverses, and therefore the second multiverse is better? I think this is what makes your version of GNP different from MNP, and why we have different intuitions about the two cases.

If there is someone or something who looked at the most common gene correlated with two-boxing (because it was the most common gene correlated with two-boxing, rather than due to a coincidence), then by changing whether you two-box, you can change whether other UDT agents two-box, and hence which gene is the most common gene correlated with two-boxing, and hence which gene Omega looked at, and hence who gets $1M in box B. In MNP, there is no corresponding process searching for genes correlated with gum chewing, so you can't try to influence that process by choosing to not chew gum.
2Caspar Oesterheld9y
Yes, I think I understand that now. But in your version the two-boxing gene has practically no influence on whether the $1M is in box B, because Omega mostly looks at other, random genes. Would that even be a Newcomblike problem?

In EY's chewing gum MNP, it seems like CGTA causes both the throat abscess and influences people to chew gum. (See p. 67 of the TDT paper.) (It gets much more complicated if evolution has only produced a correlation between CGTA and another chewing-gum gene.) The CGTA gene is always read, copied into RNA etc., ultimately leading to throat abscesses. (The rest of the DNA is used, too, but only determines the size of your nose etc.) In the GNP, the two-boxing gene is always read by Omega and translated into a number of dollars under box B. (Omega can look at the rest of the DNA, too, but does not care.) I don't get the difference yet, unfortunately.

I don't understand UDT yet, but it seems to me that in the chewing gum MNP, you could refrain from chewing gum, thereby changing whether other UDT agents chew gum, and hence whether UDT agents' genes contain CGTA. Unless you know that CGTA has no impact on how you ultimately resolve this problem, which is not stated in the problem description and which would make EDT also chew gum.

The general mistake that many people are making here is to think that determinism makes a difference. It does not.

Let's say I am Omega. The things that are playing are AIs. They are all 100% deterministic programs, and they take no input except an understanding of the game. They are not allowed to look at their source code.

I play my part as Omega in this way. I examine the source code of the program. If I see that it is a program that will one-box, I put the million. If I see that it is a program that will two-box, I do not put the million.

Note that determ...
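A minimal sketch of this setup (my code, not Unknowns'): with zero-input deterministic players, "examining the source code" can simply mean running the program, and the deterministic one-boxer still walks away with more.

```python
# Deterministic-AI Newcomb: players are zero-input functions returning a choice;
# Omega predicts by inspection, which here is equivalent to just running them.

def one_boxer() -> str:
    return "one-box"

def two_boxer() -> str:
    return "two-box"

def omega_fills_box_b(player) -> bool:
    return player() == "one-box"   # Omega's (perfect) prediction

def payoff(player) -> int:
    box_b = 1_000_000 if omega_fills_box_b(player) else 0
    return box_b if player() == "one-box" else box_b + 1_000

for p in (one_boxer, two_boxer):
    print(p.__name__, payoff(p))   # one_boxer: 1000000, two_boxer: 1000
# Determinism changes nothing about the comparison: the program that one-boxes
# is exactly the program for which Omega put the million.
```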

3ike9y
You're describing regular Newcomb, not this gene version. (Also note that Omega needs to have more processing power than the programs to do what you want it to do, just like in the human version.)

The analogue would be defining a short program that Omega will run over the AI's code, one that predicts what the AI will output correctly 99% of the time. Then it becomes a question of whether any given AI can outwit the program. If an AI thinks the program won't work on it, for whatever reason (by which I mean "conditioning on myself picking X doesn't cause my estimate of the prediction program outputting X to change, and vice-versa"), it's free to choose whatever it wants to.

Getting back to humans, I submit that a certain class of people who actually think about the problem will induce a far greater failure rate in Omega, and that this therefore severs the causal link between my decision and Omega's, in the same way as an AI might be able to predict that the prediction program won't work on it. As I said elsewhere, were this incorrect, my position would change, but then you probably aren't talking about "genes" anymore. You shouldn't be able to get 100% prediction rates from only genes.
0Unknowns9y
It should be obvious that there is no difference between regular Newcomb and genetic Newcomb here. I examine the source code to see whether the program will one-box or not; that is the same as looking at its genetic code to see if it has the one-boxing gene.
-1Jiro9y
Regular Newcomb requires that, for certain decision algorithms, Omega solve the halting problem. Genetic Newcomb requires that Omega look for the gene, something which he can always do. The "regular equivalent" of genetic Newcomb is that Omega looks at the decision maker's source code, but it so happens that most decision makers work in ways which are easy to analyze.
1g_pepper9y
How so? I have not been able to come up with a valid decision algorithm that would require Omega to solve the halting problem. Do you have an example?
0Jiro9y
"Predict what Omega thinks you'll do, then do the opposite". Which is really what the halting problem amounts to anyway, except that it's not going to be spelled out; it's going to be something that is equivalent to that but in a nonobvious way. Saying "Omega will determine what the agent outputs by reading the agent's source code", is going to implicate the halting problem.
0g_pepper9y
I don't know if that is possible given Unknowns' constraints. Upthread Unknowns defined this variant of Newcomb as: Since the player is not allowed to look at its own (or, presumably, Omega's) code, it is not clear to me that it can implement a decision algorithm that will predict what Omega will do and then do the opposite. However, if you remove Unknowns' restrictions on the players, then your idea will cause some serious issues for Omega! In fact, a player that can predict Omega as effectively as Omega can predict the player seems like a reductio ad absurdum of Newcomb's paradox.
0Jiro9y
If Omega is a program too, then an AI that is playing can have a subroutine that is equivalent to "predict Omega". The AI doesn't have to actually look at its own source code to do things that are equivalent to looking at its own source code--that's how the halting problem works! If Omega is not a program and can do things that a program can't do, then this isn't true, but I am skeptical that such an Omega is a meaningful concept. Of course, the qualifier "deterministic" means that Omega can pick randomly, which the program cannot do, but since Omega is predicting a deterministic program, picking randomly can't help Omega do any better.
0Jiro9y
Now that I think of it, it depends on exactly what it means for Omega to tell that you have a gene for two-boxing. If Omega has the equivalent of a textbook saying "gene AGTGCGTTACT leads to two-boxing" or if the gene produces a brain that is incapable of one-boxing at all in the same way that genes produce lungs that are incapable of breathing water, then what I said applies. If it's a gene for two-boxing because it causes the bearer to produce a specific chain of reasoning, and Omega knows it's a two-boxing gene because Omega has analyzed the chain and figured out that it leads to two-boxing, then there actually is no difference. (This is complicated by the fact that the problem states that having the gene is statistically associated with two-boxing, which is neither of those. If the gene is only statistically associated with two-boxing, it might be that the gene makes the bearer likely to two-box in ways that are not implicated if the bearer reasons the problem out in full logical detail.)
0Jiro9y
Actually, there's another difference. The original Newcomb problem implies that it is possible for you to figure out the correct answer. With genetic Newcomb, it may be impossible for you to figure out the correct answer. It is true that having your decision determined by your genes is similar to having your decision determined by the algorithm you are executing. However, we know that both sets of genes can happen, but if your decision is determined by the algorithm you are using, certain algorithms may be contradictory and cannot happen. (Consider an algorithm that predicts what Omega will do and acts based on that prediction.) Although now that I think of it, that's pretty much the halting problem objection anyway.
0Jiro9y
Omega can solve the halting problem?

I may as well repeat my thoughts on Newcomb's, decision theory, and so on. I come to this from a background in decision analysis, which is the practical version of decision theory.

You can see decision-making as a two-step, three-state problem: the problem statement is interpreted to make a problem model, which is optimized to make a decision.

If you look at the Wikipedia definitions of EDT and CDT, you'll see they primarily discuss the optimization process that turns a problem model into a decision. But the two accept different types of problem models; EDT ...
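A rough sketch of the distinction being drawn (my own framing and invented structures, not Vaniver's): the same problem statement can be interpreted into two different kinds of problem model before any optimization happens.

```python
# Two steps, three states: problem statement -> problem model -> decision.

# An EDT-style problem model: just the conditional statistics from the study.
edt_model = {
    "P(box B full | one-box)": 0.99,   # illustrative numbers
    "P(box B full | two-box)": 0.01,
}

# A CDT-style problem model: a causal graph; note there is no edge from
# "action" to "box B", so intervening on the action cannot change box B.
cdt_model = {
    "nodes": ["gene", "action", "box B"],
    "edges": [("gene", "action"), ("gene", "box B")],
}
```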

Hm, this is a really interesting idea.

The trouble is that it's tricky to apply a single decision theory to this problem, because by hypothesis, this gene actually changes which decision theory you use! If I'm a TDT agent, then this is good evidence I have the "TDT-agent gene," but in this problem I don't actually know whether the TDT-gene is the one-box gene or the two-box gene. If TDT leads to one-boxing, then it recommends two-boxing - but if it provably two-boxes it is the "two-box gene" and gets the bad outcome. This is to some exte...

4Caspar Oesterheld9y
I am not entirely sure I understand your TDT analysis; maybe that's because I don't understand TDT that well. I assumed that TDT would basically just do what CDT does, because there are no simulations of the agent involved. Or do you propose that checking for the gene is something like simulating the agent? It does not seem to be more evil than Newcomb's problem, but I am not sure what you mean by "evil". For every decision theory, it is possible, of course, to set up some decision problem where this decision theory loses. Would you say that I set up the "genetic Newcomb problem" specifically to punish CDT/TDT?
-1Manfred9y
The role that would normally be played by simulation is here played by a big evidential study of what people with different genes do. This is why it matters whether the people in the study are good decision-makers or not - only when the people in the study are in a position similar to my own do they fulfill this simulation-like role. Yeah, that sentence is phrased poorly, sorry. But I'll try to explain. The easy way to construct an evil decision problem (say, targeting TDT) is to figure out what action TDT agents take, and then set the hidden variables so that that action is suboptimal - in this way the problem can be tilted against TDT agents even if the hidden variables don't explicitly care that their settings came from this evil process. In this problem, the "gene" is like a flag on a certain decision theory that tells what action it will take, and the hidden variables are set such that people with that decision theory (the decision theory that people with the one-box gene use) act suboptimally (people with the one-box gene who two-box get more money). So this uses very similar machinery to an evil decision problem. The saving grace is that the other action also gets its own flag (the two-box gene), which has a different setting of the hidden variables.
0Caspar Oesterheld9y
Yes, the idea is that they are sufficiently similar to you so that the study can be applied (but also sufficiently different to make it counter-intuitive to say it's a simulation). The subjects of the study may be told that there already exists a study, so that their situation is equivalent to yours. It's meant to be similar to the medical Newcomb problems in that regard. I briefly considered the idea that TDT would see the study as a simulation, but discarded the possibility, because in that case the studies in classic medical Newcomb problems could also be seen as simulations of the agent to some degree. The "abstract computation that an agent implements" is a bit vague anyway, I assume, but if one were willing to go this far, is it possible that TDT collapses into EDT?
0Manfred9y
Under the formulation that leads to one-boxing here, TDT will be very similar to EDT whenever the evidence is about the unknown output of your agent's decision problem. They are both in some sense trying to "join the winning team" - EDT by expecting the winning-team action to make them have won, and TDT only in problems where what team you are on is identical to what action you take.
0Unknowns9y
This is not an "evil decision problem" for the same reason original Newcomb is not, namely that whoever chooses only one box gets the reward, not matter what process he uses.
4Unknowns9y
Yes, all of this is basically correct. However, it is also basically the same in the original Newcomb although somewhat more intuitive. In the original problem Omega decides to put the one million or not depending on its estimate of what you will do, which likely depends on "what kind of person" you are, in some sense. And being this sort of person is also going to determine what kind of decision theory you use, just as the gene does in the genetic version. The original Newcomb is more intuitive, though, because we can more easily accept that "being such and such a kind of person" could make us use a certain decision theory, than that a gene could do the same thing. Even the point about other people knowing the results or using certain reasoning is the same. If you find an Omega in real life, but find out that all the people being tested so far are not using any decision theory, but just choosing impulsively, and Omega is just judging how they would choose impulsively, then you should take both boxes. It is only if you know that Omega tends to be right no matter what decision theory people are using, that you should choose the one box.

Upvoting: This is a very good post which has caused everybody's cached decision-theory choices to fail horribly because they're far too focused on getting the "correct" answer and then proving that answer correct and not at all focused on actually thinking about the problem at hand. Enthusiastic applause.

The OP does not sufficiently determine the answer, unless we take its simplified causal graph as complete, in which case I would two-box. I hope that if in fact "most LWers would one-box," we would only do so because we think Omega would be smarter than that.

I assume that the one-boxing gene makes a person generically more likely to favor the one-boxing solution to Newcomb. But what about when people learn about the setup of this particular problem? Does the correlation between having the one-boxing gene and inclining toward one-boxing still hold? Are people who one-box only because of EDT (even though they would have two-boxed before considering decision theory) still more likely to have the one-boxing gene? If so, then I'd be more inclined to force myself to one-box. If not, then I'd say that the apparent co...

1Caspar Oesterheld9y
Yes, it should also hold in this case. Knowing about the study could be part of the problem, and the subjects of the initial study could be (falsely) told about such a study. The idea of the "genetic Newcomb problem" is that the two-boxing gene is less intuitive than CGTA and that its workings are mysterious. It could make you sure that you have or don't have the gene. It could make you comfortable with decision theories whose names start with 'C', make you interpret genetic Newcomb problem studies in a certain way, etc. The only thing that we know is that it causes us to two-box, in the end. For CGTA, on the other hand, we have a very strong intuition that it causes a "tickle" or the like that could easily be overridden by our knowing about the first study (which correlates chewing gum with throat abscesses). It could not possibly influence what we think about CDT vs. EDT etc.! But this intuition is not part of the original description of the problem.
0Brian_Tomasik9y
If there were a perfect correlation between choosing to one-box and having the one-box gene (i.e., everyone who one-boxes has the one-box gene, and everyone who two-boxes has the two-box gene, in all possible circumstances), then it's obvious that you should one-box, since that implies you must win more. This would be similar to the original Newcomb problem, where Omega also perfectly predicts your choice. Unfortunately, if you really will follow the dictates of your genes under all possible circumstances, then telling someone what she should do is useless, since she will do what her genes dictate. The more interesting and difficult case is when the correlation between gene and choice isn't perfect.
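Under the perfect-correlation assumption, the comparison being pointed at here is just this bit of arithmetic (mine, using the payoffs from the OP):

```python
# Perfect correlation: every one-boxer lacks the two-boxing gene (box B full),
# every two-boxer has it (box B empty).
one_box_payoff = 1_000_000        # full box B, box A left behind
two_box_payoff = 0 + 1_000        # empty box B plus box A
assert one_box_payoff > two_box_payoff
```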

I think we need to remember here the difference between logical influence and causal influence?

My genes can cause me to be inclined towards smoking, and my genes can cause me to get lesions. If I choose to smoke, not knowing my genes, then that's evidence for what my genes say, and it's evidence about whether I'll get lesions; but it doesn't actually causally influence the matter.

My genes can incline me towards one-boxing, and can incline Omega towards putting $1M in the box. If I choose to two-box despite my inclinations, then that provides me with eviden...

2Unknowns9y
In the original Newcomb's problem, am I allowed to say "in the world with the million, I am more likely to one-box than in the world without, so I'm going to one-box"? If I thought this worked, then I would do it no matter what world I was in, and it would no longer be true... Except that it is still true. I can definitely reason this way, and if I do, then of course I had the disposition to one-box, and of course Omega put the million there; because the disposition to one-box was the reason I wanted to reason this way. And likewise, in the genetic variant, I can reason this way, and it will still work, because the one-boxing gene is responsible for me reasoning this way rather than another way.
1philh9y
In the original, you would say: "in the world where I one-box, the million is more likely to be there, so I'll one-box".

If there's a gene that makes you think black is white, then you're going to get killed on the next zebra crossing. If there's a gene that makes you misunderstand decision theory, you're going to make some strange decisions. If Omega is fond of people with that gene, then lucky you. But if you don't have the gene, then acting like you do won't help you.

Another reframing: in this version, Omega checks to see if you have the photic sneeze reflex, then forces you to stare at a bright light and checks whether or not you sneeze. Ve gives you $1k if you don't sneeze, and independently, $1M if you have the PSR gene. If I can choose whether or not to sneeze, then I should not sneeze. Maybe the PSR gene makes it harder for me to not sneeze, in which case I can be really happy that I have to stifle the urge to sneeze, but I should still not sneeze. But if the PSR gene just makes me sneeze, then why are we even asking whether I should sneeze or not?
3Unknowns9y
I think this is addressed by my top level comment about determinism. But if you don't see how it applies, then imagine an AI reasoning like you have above. "My programming is responsible for me reasoning the way I do rather than another way. If Omega is fond of people with my programming, then I'm lucky. But if he's not, then acting like I have the kind of programming he likes isn't going to help me. So why should I one-box? That would be acting like I had one-box programming. I'll just take everything that is in both boxes, since it's not up to me." Of course, when I examined the thing's source code, I knew it would reason this way, and so I did not put the million.
0philh9y
So I think where we differ is that I don't believe in a gene that controls my decision in the same way that you do. I don't know how well I can articulate myself, but:

As an AI, I can choose whether my programming makes me one-box or not, by one-boxing or not. My programming isn't responsible for my reasoning, it is my reasoning. If Omega looks at my source code and works out what I'll do, then there are no worlds where Omega thinks I'll one-box, but I actually two-box.

But imagine that all AIs have a constant variable in their source code, unhelpfully named TMP3. AIs with TMP3=true tend to one-box in Newcomblike problems, and AIs with TMP3=false tend to two-box. Omega decides whether to put in $1M by looking at TMP3. (Does the problem still count as Newcomblike? I'm not sure that it does, so I don't know if TMP3 correlates with my actions at all. But we can say that TMP3 correlates with how AIs act in GNP, instead.)

If I have access to my source code, I can find out whether I have TMP3=true or false. And regardless of which it is, I can two-box. (If I can't choose to two-box, after learning that I have TMP3=true, then this isn't me.) Since I can two-box without changing Omega's decision, I should.

Whereas in the original Newcomb's problem, I can look at my source code, and... maybe I can prove whether I one- or two-box. But if I can, that doesn't constrain my decision so much as predict it, in the same way that Omega can; the prediction of "one-box" is going to take into account the fact that the arguments for one-boxing overwhelm the consideration of "I really want to two-box just to prove myself wrong". More likely, I can't prove anything. And I can one- or two-box, but Omega is going to predict me correctly, unlike in GNP, so I one-box.

The case where I don't look at my source code is more complicated (maybe AIs with TMP3=true will never choose to look?), but I hope this at least illustrates why I don't find the two comparable. (That said, I might actuall
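A sketch of the TMP3 scenario in code (the TMP3 name is from the comment; the particular decision logic and payoffs are my illustration): Omega reads the flag rather than predicting the program's actual output, which is why this agent reasons that two-boxing pays regardless of the flag.

```python
# philh's TMP3 thought experiment: Omega looks only at a constant flag in the
# AI's source, not at what the AI will actually choose.

class AI:
    def __init__(self, tmp3: bool):
        self.TMP3 = tmp3   # correlates with one-boxing across AIs in general

    def choose(self) -> str:
        # This AI reasons causally: box B was already filled (or not) based on
        # TMP3 alone, so taking both boxes is better either way.
        return "two-box"

def omega_fills_box_b(ai: AI) -> bool:
    return ai.TMP3         # the flag, not the choice, determines box B

def payoff(ai: AI) -> int:
    box_b = 1_000_000 if omega_fills_box_b(ai) else 0
    return box_b if ai.choose() == "one-box" else box_b + 1_000

print(payoff(AI(tmp3=True)), payoff(AI(tmp3=False)))   # 1001000 1000
```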
1Unknowns9y
"I don't believe in a gene that controls my decision" refers to reality, and of course I don't believe in the gene either. The disagreement is whether or not such a gene is possible in principle, not whether or not there is one in reality. We both agree there is no gene like this in real life. As you note, if an AI could read its source code and sees that it says "one-box", then it will still one-box, because it simply does what it is programmed to do. This first of all violates the conditions as proposed (I said the AIs cannot look at their sourcec code, and Caspar42 stated that you do not know whether or not you have the gene). But for the sake of argument we can allow looking at the source code, or at the gene. You believe that if you saw you had the gene that says "one-box", then you could still two-box, so it couldn't work the same way. You are wrong. Just as the AI would predictably end up one-boxing if it had that code, so you would predictably end up one-boxing if you had the gene. It is just a question of how this would happen. Perhaps you would go through your decision process, decide to two-box, and then suddenly become overwhelmed with a sudden desire to one-box. Perhaps it would be because you would think again and change your mind. But one way or another you would end up one-boxing. And this "doesn't' constrain my decision so much as predict it", i.e. obviously both in the case of the AI and in the case of the gene, in reality causality does indeed go from the source code to one-boxing, or from the gene to one-boxing. But it is entirely the same in both cases -- causality runs only from past to future, but for you, it feels just like a normal choice that you make in the normal way.
-1philh9y
I was referring to "in principle", not to reality. Yes. I think that if I couldn't do that, it wouldn't be me. If we don't permit people without the two-boxing gene to two-box (the question as originally written did, but we don't have to), then this isn't a game I can possibly be offered. You can't take me, and add a spooky influence which forces me to make a certain decision one way or the other, even when I know it's the wrong way, and say that I'm still making the decision. So again, we're at the point where I don't know why we're asking the question. If not-me has the gene, he'll do one thing; if not, he'll do the other; and it doesn't make a difference what he should do. We're not talking about agents with free action, here. Again, I'm not sure exactly how this extends to the case where an agent doesn't know whether they have the gene.
1Unknowns9y
What if we take the original Newcomb, then Omega puts the million in the box, and then tells you "I have predicted with 100% certainty that you are only going to take one box, so I put the million there?" Could you two-box in that situation, or would that take away your freedom? If you say you could two-box in that situation, then once again the original Newcomb and the genetic Newcomb are the same. If you say you could not, why would that be you when the genetic case would not be?
-1philh9y
Unless something happens out of the blue to force my decision - in which case it's not my decision - then this situation doesn't happen. There might be people for whom Omega can predict with 100% certainty that they're going to one-box even after Omega has told them his prediction, but I'm not one of them. (I'm assuming here that people get offered the game regardless of their decision algorithm. If Omega only makes the offer to people whom he can predict certainly, we're closer to a counterfactual mugging. At any rate, it changes the game significantly.)
1Unknowns9y
I agree that in reality it is often impossible to predict someone's actions if you are going to tell them your prediction. That is why it is perfectly possible that the situation where you know which gene you have is impossible. But in any case this is all hypothetical, because the situation posed assumes you cannot know which gene you have until you choose one or both boxes, at which point you immediately know.

EDIT: You're really not getting the point, which is that the genetic Newcomb is identical to the original Newcomb in decision theoretic terms. Here you're arguing not about the decision theory issue, but whether or not the situations involved are possible in reality. If Omega can't predict with certainty when he tells his prediction, then I can equivalently say that the gene only predicts with certainty when you don't know about it. Knowing about the gene may allow you to two-box, but that is no different from saying that knowing Omega's decision before you make your choice would allow you to two-box, which it would. Basically anything said about one case can be transformed into the other case by fairly simple transpositions. This should be obvious.
0philh9y
Sorry, tapping out now. EDIT: but brief reply to your edit: I'm well aware that you think they're the same, and telling me that I'm not getting the point is super unhelpful.
-1Creutzer9y
Then you're talking about an evil decision problem. But neither in the original nor in the genetic Newcomb's problem is your source code investigated.
1Unknowns9y
No, it is not an evil decision problem, because I did that not because of the particular reasoning, but because of the outcome (taking both boxes). The original does not specify how Omega makes his prediction, so it may well be by investigating source code.

I have never agreed that there is a difference between the smoking lesion and Newcomb's problem. I would one-box, and I would not smoke. Long discussion in the comments here.

2Caspar Oesterheld9y
Interesting, thanks! I thought that it was more or less consensus that the smoking lesion refutes EDT. So, where should I look to see EDT refuted? Absent-minded driver, Evidential Blackmail, counterfactual mugging or something else?
-1Unknowns9y
Yes, as you can see from the comments on this post, there seems to be some consensus that the smoking lesion refutes EDT. The problem is that the smoking lesion, in decision theoretic terms, is entirely the same as Newcomb's problem, and there is also a consensus that EDT gets the right answer in the case of Newcomb. Your post reveals that the smoking lesion is the same as Newcomb's problem and thus shows the contradiction in that consensus. Basically there is a consensus but it is mistaken. Personally I haven't seen any real refutation of EDT.
-1hairyfigment9y
That does seem like the tentative consensus, and I was unpleasantly surprised to see someone on LW who would not chew the gum. We should be asking what decision procedure gives us more money, e.g. if we're writing a computer program to make a decision for us.

You may be tempted to say that if Omega is physical - a premise not usually stated explicitly, but one I'm happy to grant - then it must be looking at some physical events linked to your action and not looking at the answer given by your abstract decision procedure. A procedure based on that assumption would lead you to two-box. This thinking seems likely to hurt you in analogous real-life situations, unless you have greater skill at lying or faking signals than (my model of) either a random human being or a random human of high intelligence. Discussing it, even 'anonymously', would constitute further evidence that you lack the skill to make this work.

Now TDT, as I understand it, assumes that we can include in our graph a node for the answer given by an abstract logical process. For example, to predict the effects of pushing some buttons on a calculator, we would look at both the result of a "timeless" logical process and also some physical nodes that determine whether or not the calculator follows that process. Let's say you have a similar model of yourself. Then, if and only if your model of the world says that the abstract answer given by your decision procedure does not sufficiently determine Omega's action, a counterfactual question about that answer will tell you to two-box. But if Omega when examining physical evidence just looks at the physical nodes which (sufficiently) determine whether or not you will use TDT (or whatever decision procedure you're using), then presumably Omega knows what answer that process gives, which will help determine the result. A counterfactual question about the logical output would then tell you to one-box. TDT I think asks that question and gets that answer. UDT I b

In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.

If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's prob...

1Unknowns9y
This is like saying "if my brain determines my decision, then I am not making the decision at all."
0Kindly9y
Not quite. I outlined the things that have to be going on for me to be making a decision.
0Unknowns9y
You cannot assume that any of those things are irrelevant or that they are overridden just because you have a gene. Presumably the gene is arranged in coordination with those things.

I think two-boxing in your modified Newcomb is the correct answer. In the smoking lesion, smoking is correct, so there's no contradiction.

One-boxing is correct in the classic Newcomb because your decision can "logically influence" the fact of "this person one-boxes". But your decision in the modified Newcomb can't logically influence the fact of "this person has the two-boxing gene".

3Unknowns9y
Under any normal understanding of logical influence, your decision can indeed "logically influence" whether you have the gene or not. Let's say there is a 100% correlation between having the gene and the act of choosing -- everyone who chooses the one box has the one boxing gene, and everyone who chooses both boxes has the two boxing gene. Then if you choose to one box, this logically implies that you have the one boxing gene. Or do you mean something else by "logically influence" besides logical implication?
0OrphanWilde9y
No, your decision merely reveals what genes you have, your decision cannot change what genes you have.
1Unknowns9y
Even in the original Newcomb you cannot change whether or not there is a million in the box. Your decision simply reveals whether or not it is already there.
-1OrphanWilde9y
In the original Newcomb, causality genuinely flowed in the reverse. Your decision -did- change whether or not there is a million dollars in the box. The original problem had information flowing backwards in time (either through a simulation which, for practical purposes, plays time forward, then goes back to the origin, or through an omniscient being seeing into the future, however one wishes to interpret it). In the medical Newcomb, causality -doesn't- flow in the reverse, so behaving as though causality -is- flowing in the reverse is incorrect.
3g_pepper9y
I don't know about that. Nozick's article from 1969 states: Nothing in that implies that causality flowed in the reverse; it sounds like the being just has a really good track record.
-1OrphanWilde9y
The "simulation" in this case could entirely be in the Predictor's head. But I concede the point, and shift to a weaker position: In the original Newcomb problem, the nature of the boxes is decided by a perfect or near-perfect prediction of your decision; it's predicting your decision, and is for all intents and purposes taking your decision into account. (Yes, it -could- be using genetics, but there's no reason to elevate that hypothesis.) In the medical Newcomb problem, the nature of the boxes is decided by your genetics, which have a very strong correlation with your decision; it is still predicting your decision, but by a known algorithm which doesn't take your decision into account. Your decision in the first case should account for the possibility that it accurately predicts your decision - unless you place .1% or greater odds on it mis-calculating your decision, you should one-box. [Edited: Fixed math error that reversed calculation versus mis-calculation.] Your decision in the second case should not - your genetics are already what your genetics are, and if your genetics predict two-boxing, you should two-box because $1,000 is better than nothing, and if your genetics predict one-boxing, you should two-box because $1,001,000 is better than $1,000,000.
0g_pepper9y
Actually, I am not sure about even that weaker position. The Nozick article stated: It seems to me that with this passage Nozick explicitly contradicts the assertion that the being is "taking your decision into account".
-1OrphanWilde9y
It is taking its -prediction- of your decision into account in the weaker version, and is good enough at prediction that the prediction is analogous to your decision (for all intents and purposes, taking your decision into account). The state is no longer part of the explanation of the decision, but rather the prediction of that decision, and the state derived therefrom. Introduce a .0001% chance of error and the difference is easier to see; the state is determined by the probability of your decision, given the information the being has available to it. (Although, reading the article, it appears reverse-causality vis a vis the being being God is an accepted, although not canonical, potential explanation of the being's predictive powers.) Imagine a Prisoner's Dilemma between two exactly precise clones of you, with one difference: One clone is created one minute after the first clone, and is informed the first clone has already made its decision. Both clones are informed of exactly the nature of the test (that is, the only difference in the test is that one clone makes a decision first). Does this additional information change your decision?
3Unknowns9y
In this case you are simply interpreting the original Newcomb to mean something absurd, because causality cannot "genuinely flow in reverse" in any circumstances whatsoever. Rather in the original Newcomb, Omega looks at your disposition, one that exists at the very beginning. If he sees that you are disposed to one-box, he puts the million. This is just the same as someone looking at the source code of an AI and seeing whether it will one-box, or someone looking for the one-boxing gene. Then, when you make the choice, in the original Newcomb you choose to one-box. Causality flows in only one direction, from your original disposition, which you cannot change since it is in the past, to your choice. This causality is entirely the same as in the genetic Newcomb. Causality never goes any direction except past to future.
-1OrphanWilde9y
Hypotheticals are not required to follow the laws of reality, and Newcomb is, in the original problem, definitionally prescient - he knows what is going to happen. You can invent whatever reason you would like for this, but causality flows, not from your current state of being, but from your current state of being to your future decision to Newcomb's decision right now. Because Newcomb's decision on what to put in the boxes is predicated, not on your current state of being, but on your future decision.

I think your last paragraph is more or less correct. The way I'd show it would be to place a node labelled 'decision' between the top node and the left node, representing a decision you make based on decision-theoretical or other reasoning. There are then two additional questions: 1) Do we remove the causal arrow from the top node to the bottom one and replace it with an arrow from 'decision' to the bottom? Or do we leave that arrow in place? 2) Do we add a 'free will' node representing some kind of outside causation on 'decision', or do we let 'decision' ...

2Unknowns9y
This is like saying a 100% deterministic chess-playing computer shouldn't look ahead, since it cannot affect its actions. That will result in a bad move. And likewise, just doing what you feel like here will result in smoking, since you (by stipulation) feel like doing that. So it is better to deliberate about it, like the chess computer, and choose both to one-box and not to smoke.
[anonymous]9y-10

I would one-box if I had the one-boxing gene, and two-box if I had the two-boxing gene. I don't know what decision-making theory I'm using, because the problem statement didn't specify how the gene works.

I don't really see the point of asking people with neither gene what they'd do.

3Caspar Oesterheld9y
Maybe I should have added that you don't know which genes you have, before you make the decision, i.e. two-box or one-box.
0[anonymous]9y
I wasn't assuming that I knew beforehand. It's just that, if I have the one-boxing gene, it will compel me (in some manner not stated in the problem) to use a decision algorithm which will cause me to one-box, and similarly for the two-box gene.
3Caspar Oesterheld9y
Ah, okay. Well, the idea of my scenario is that you have no idea how all of this works. So, for example, the two-boxing gene could make you 100% sure that you have or don't have the gene, so that two-boxing seems like the better decision. So, until you actually make a decision, you have no idea which gene you have. (Preliminary decisions, as in Eells' tickle defense paper, are also irrelevant.) So, you have to make some kind of decision. The moment you one-box you can be pretty sure that you don't have the two-boxing gene, since it did not manage to trick you into two-boxing, which it usually does. So, why not just one-box and take the money? :-)
0[anonymous]9y
My problem with all this is, if hypothetical-me's decisionmaking process is determined by genetics, why are you asking real-me what the decisionmaking process should be? Real-me can come up with whatever logic and arguments, but hypothetical-me will ignore all that and choose by some other method. (Traditional Newcomb is different, because in that case hypothetical-me can use the same decisionmaking process as real-me.)
4Caspar Oesterheld9y
So, what if one day you learned that hypothetical-you is the actual-you, that is, what if Omega actually came up to you right now and told you about the study etc. and put you into the "genetic Newcomb problem"?
0[anonymous]9y
Well, I can say that I'd two-box. Does that mean I have the two-boxing gene?
1Unknowns9y
Hypothetical-me can use the same decisionmaking process as real-me also in genetic Newcomb, just as in the original. This simply means that the real you will stand for a hypothetical you which has the gene which makes you choose the thing that real you chooses, using the same decision process that the real you uses. Since you say you would two-box, that means the hypothetical you has the two-boxing gene. I would one-box, and hypothetical me has the one-boxing gene.
2Unknowns9y
This is no different from responding to the original Newcomb's by saying "I would one-box if Omega put the million, and two-box if he didn't." Both in the original Newcomb's problem and in this one you can use any decision theory you like.
-1[anonymous]9y
There is a difference - with the gene case, there is a causal pathway via brain chemistry or whatnot from the gene to the decision. In the original Newcomb problem, Omega's prediction does not cause the decision.
4Unknowns9y
Even in the original Newcomb's problem there is presumably some causal pathway from your brain to your decision. Otherwise Omega wouldn't have a way to predict what you are going to do. And there is no difference here between "your brain" and the "gene" in the two versions. In neither case does Omega cause your decision, your brain causes it in both cases.

I would two-box in that situation. Don't see a problem.

3Caspar Oesterheld9y
Well, the problem seems to be that this will not give you the $1M, just like in Newcomb's original problem.
-1ike9y
Wait, you think I have the two-boxing gene? If that's the case, one-boxing won't help me; there's no causal link between my choice and which gene I have, unlike standard Newcomb, in which there is a causal link between my choice and the contents of the box, given TDT's definition of "causal link".
1Unknowns9y
Sure there is a link. The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices. In the standard Newcomb, if you one-box, then you had the disposition to one-box, and Omega put the million. In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.
1ike9y
OP here said (emphasis added) Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted by only genes with perfect accuracy; if you stipulate that they can, my answer would be different. Wrong; it's perfectly possible to have the one-boxing gene but two-box. (If the facts were as stated in the OP, I'd actually expect conditioning on certain aspects of my decision-making processes to remove the correlation; that is, people who think similarly to me would have less correlation with choice-gene. If that prediction was stipulated away, my choice *might* change; it depends on exactly how that was formulated.)
5Caspar Oesterheld9y
So, as soon as it's not 100% of people two-boxing having the two-boxing gene, but only 99.9%, you assume that you are in the 0.1%?
0ike9y
You didn't specify any numbers. If the actual number was 99.9%, I'd consider that strong evidence against some of my beliefs about the relationship between decisions and genes. I was implicitly assuming a slightly lower number (like 70ish area), which would be somewhat more compatible, and in which case I would expect to be part of that 30% (with greater than 30% probability). If the number was, in fact, 99.9%, I'd have to assume that genes in general are far more related to specifics of how we think than I currently think, and it might be enough to make this an actual Newcomb's problem. The mechanism for the equivalence to Newcomb would be that it creates a causal link from my reaching an opinion to my having a certain gene, in TDT terms. "Gene" would be another word for "brain state", as I've said elsewhere on this post.
1Unknowns9y
This is confusing the issue. I would guess that the OP wrote "most" because Newcomb's problem sometimes is put in such a way that the predictor is only right most of the time. And in such cases, it is perfectly possible to remove the correlation in the same way that you say. If I know how Omega is deciding who is likely to one-box and who is likely to two-box, I can purposely do the opposite of what he expects me to do. But if you want to solve the real problem, you have to solve it in the case of 100% correlation, both in the original Newcomb's problem and in this case.
0ike9y
Exactly; but since a vast majority of players won't do this, Omega can still be right most of the time. Can you formulate that scenario, then, or point me to somewhere it's been formulated? It would have to be a world with very different cognition than ours, if genes can determine choice 100% of the time; arguably, genes in that world would correspond to brain states in our world in a predictive sense, in which case this collapses to regular Newcomb, and I'd one-box. The problem presented by the gene-scenario, as stated by OP, is However, as soon as you add in a 100% correlation, it becomes very different, because you have no possibility of certain outcomes. If the smoking lesion problem was also 100%, then I'd agree that you shouldn't smoke, because whatever "gene" we're talking about can be completely identified (in a sense) with my brain state that leads to my decision.
2Unknowns9y
You are right that 100% correlation requires an unrealistic situation. This is true also in the original Newcomb, i.e. we don't actually expect anything in the real world to be able to predict our actions with 100% accuracy. Still, we can imagine a situation where Omega would predict our actions with a good deal of accuracy, especially if we had publicly announced that we would choose to one-box in such situations. The genetic Newcomb requires an even more unrealistic scenario, since in the real world genes do not predict actions with anything close to 100% certitude. I agree with you that this case is no different from the original Newcomb; I think most comments here were attempting to find a difference, but there isn't one.
0ike9y
We could, but I'm not going to think about those unless the problem is stated a bit more precisely, so we don't get caught up in arguing over the exact parameters again. The details on how exactly Omega determines what to do are very important. I've actually said elsewhere that if you didn't know how Omega did it, you should try to put probabilities on different possible methods, and do an EV calculation based on that; is there any way that can fail badly? (Also, if there was any chance of Omega existing and taking cues from our public announcements, the obvious rational thing to do would be to stop talking about it in public.) I think people may have been trying to solve the case mentioned in OP, which is less than 100%, and does have a difference.

Your "Newcomb-like" problem isn't. In the original Newcomb problem there is no situation where both boxes contain a reward, yet the naive CDT makes you act as though there were. In your setup there is such a possibility, so 2-boxing is the strictly better strategy. Any decision theory better make you 2-box.

EDIT: Thanks to those who pointed out my brain fart. Of course both boxes contain a reward in the one boxing case. It just doesn't help you any. I maintain that this is not a Newcomb-like problem, since here 2-boxing is a strictly better strategy. No one would one-box if they can help it.

5Caspar Oesterheld9y
I am sorry, but I am not sure about what you mean by that. If you are a one-boxing agent, then both boxes of Newcomb's original problem contain a reward, assuming that Omega is a perfect predictor.
4Unknowns9y
What are you talking about? In the original Newcomb problem both boxes contain a reward whenever Omega predicts that you are going to choose only one box.
0Unknowns9y
Re: the edit. Two boxing is strictly better from a causal decision theorist point of view, but that is the same here and in Newcomb. But from a sensible point of view, rather than the causal theorist point of view, one boxing is better, because you get the million, both here and in the original Newcomb, just as in the AI case I posted in another comment.

Anybody who one-boxes in the genetic variant, where Omega's decision is determined by your genes, is reversing causal flow.

1Unknowns9y
Why? They one-box because they have the gene. So no reversal. Just as in the original Newcomb problem they choose to one-box because they were the sort of person who would do that.
-1OrphanWilde9y
From the original post: If you one-box, you may or may not have the gene, but whether or not you have the gene is entirely irrelevant to what decision you should make. If, confronted with this problem, you say "I'll one-box", you're attempting to reverse causal flow - to determine your genetic makeup via the decisions you make, as opposed to the decision you make being determined by your genetic makeup. There is zero advantage conferred to declaring yourself a one-boxer in this arrangement.