I think UDT reasoning would go like this (if translated to human terms). There are two types of mathematical multiverse, only one of which is real (i.e., logically consistent). You as a UDT agent get to choose which one. In the first one, UDT agents one-box in this Genetic Newcomb Problem (GNP), so the only genes that statistically correlate with two-boxing are those that create certain kinds of compulsions overriding deliberate decision making, or genes for other decision procedures that are not logically correlated with UDT. In the second type of mathematical multiverse, UDT agents two-box in GNP, so the list of genes that correlate with two-boxing also includes the genes for UDT.
Which type of multiverse is better? It depends on how Omega chooses which gene to look at, which is not specified in the OP. To match the Medical Newcomb Problem as closely as possible, let's assume that in each world (e.g., Everett branch) of each multiverse, Omega picks a random gene to look at (from a list of all human genes), and puts $1M in box B for you if you don't have that gene. You live in a world where Omega happened to pick a gene that correlates with two-boxing. Under this assumption, the second type o...
The general mistake that many people are making here is to think that determinism makes a difference. It does not.
Let's say I am Omega. The players are AIs: 100% deterministic programs that take no input except an understanding of the game, and that are not allowed to look at their own source code.
I play my part as Omega this way: I examine the program's source code. If I see that it is a program that will one-box, I put the million; if I see that it is a program that will two-box, I do not.
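A minimal sketch of this setup (all names hypothetical, with Python programs standing in for the AIs): because the players are deterministic and take no input, examining the source and simply running the program pick out the same decision, and one-boxers walk away with the million.

```python
# Minimal sketch, assuming Python functions stand in for the AIs.
# Players are deterministic and input-free, so running one is
# equivalent to examining its source to see what it will do.

def one_boxer():
    return {"B"}           # takes only box B

def two_boxer():
    return {"A", "B"}      # takes both boxes

def omega_fills_box_b(player):
    # Omega's "source examination": just run the deterministic program.
    return player() == {"B"}

def payoff(player):
    box_b = 1_000_000 if omega_fills_box_b(player) else 0
    choice = player()
    return (1_000 if "A" in choice else 0) + (box_b if "B" in choice else 0)

for p in (one_boxer, two_boxer):
    print(p.__name__, payoff(p))
# one_boxer 1000000
# two_boxer 1000
```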
Note that determ...
I may as well repeat my thoughts on Newcomb's, decision theory, and so on. I come to this from a background in decision analysis, which is the practical version of decision theory.
You can see decision-making as a two-step, three-state process: the problem statement is interpreted to make a problem model, which is optimized to make a decision.
If you look at the Wikipedia definitions of EDT and CDT, you'll see they primarily discuss the optimization process that turns a problem model into a decision. But the two accept different types of problem models; EDT ...
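To make the contrast concrete, here is a toy sketch (entirely made-up numbers) of the two optimization steps run on the gene problem discussed in this thread: EDT's problem model is a joint distribution, so it conditions on the action, while CDT's problem model is causal, so it intervenes and leaves the gene's base rate fixed.

```python
# Toy contrast of EDT and CDT on the gene problem; all numbers hypothetical.

p_gene = 0.5                  # hypothetical base rate of the two-box gene
p_gene_if_two_box = 0.9       # hypothetical correlations from the study
p_gene_if_one_box = 0.1

def value(action, gene):
    box_b = 0 if gene else 1_000_000   # Omega fills B iff the gene is absent
    return (1_000 if action == "two-box" else 0) + box_b

def expected(action, p_gene_given_evidence):
    p = p_gene_given_evidence
    return p * value(action, True) + (1 - p) * value(action, False)

# EDT: condition on the action, so the action is evidence about the gene.
edt = {"one-box": expected("one-box", p_gene_if_one_box),
       "two-box": expected("two-box", p_gene_if_two_box)}

# CDT: intervene on the action, so the gene keeps its base rate.
cdt = {a: expected(a, p_gene) for a in ("one-box", "two-box")}

print(edt)  # {'one-box': 900000.0, 'two-box': 101000.0} -> EDT one-boxes
print(cdt)  # {'one-box': 500000.0, 'two-box': 501000.0} -> CDT two-boxes
```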
Hm, this is a really interesting idea.
The trouble is that it's tricky to apply a single decision theory to this problem, because by hypothesis, this gene actually changes which decision theory you use! If I'm a TDT agent, then this is good evidence that I have the "TDT-agent gene," but in this problem I don't actually know whether the TDT gene is the one-box gene or the two-box gene. If TDT leads to one-boxing, then it recommends two-boxing; but if it provably two-boxes, then it is the "two-box gene" and gets the bad outcome. This is to some exte...
Upvoting: This is a very good post which has caused everybody's cached decision-theory choices to fail horribly because they're far too focused on getting the "correct" answer and then proving that answer correct and not at all focused on actually thinking about the problem at hand. Enthusiastic applause.
The OP does not sufficiently determine the answer, unless we take its simplified causal graph as complete, in which case I would two-box. I hope that if in fact "most LWers would one-box," we would only do so because we think Omega would be smarter than that.
I assume that the one-boxing gene makes a person generically more likely to favor the one-boxing solution to Newcomb. But what about when people learn about the setup of this particular problem? Does the correlation between having the one-boxing gene and inclining toward one-boxing still hold? Are people who one-box only because of EDT (even though they would have two-boxed before considering decision theory) still more likely to have the one-boxing gene? If so, then I'd be more inclined to force myself to one-box. If not, then I'd say that the apparent co...
I think we need to remember here the difference between logical influence and causal influence.
My genes can cause me to be inclined towards smoking, and my genes can cause me to get lesions. If I choose to smoke, not knowing my genes, then that's evidence for what my genes say, and it's evidence about whether I'll get lesions; but it doesn't actually causally influence the matter.
My genes can incline me towards one-boxing, and can incline Omega towards putting $1M in the box. If I choose to two-box despite my inclinations, then that provides me with eviden...
In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.
If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's prob...
I think two-boxing in your modified Newcomb is the correct answer. In the smoking lesion, smoking is correct, so there's no contradiction.
One-boxing is correct in the classic Newcomb because your decision can "logically influence" the fact of "this person one-boxes". But your decision in the modified Newcomb can't logically influence the fact of "this person has the two-boxing gene".
I think your last paragraph is more or less correct. The way I'd show it would be to place a node labelled 'decision' between the top node and the left node, representing a decision you make based on decision-theoretic or other reasoning. There are then two additional questions: 1) Do we remove the causal arrow from the top node to the bottom one and replace it with an arrow from 'decision' to the bottom, or do we leave that arrow in place? 2) Do we add a 'free will' node representing some kind of outside causation on 'decision', or do we let 'decision' ...
I would one-box if I had the one-boxing gene, and two-box if I had the two-boxing gene. I don't know what decision-making theory I'm using, because the problem statement didn't specify how the gene works.
I don't really see the point of asking people with neither gene what they'd do.
Your "Newcomb-like" problem isn't. In the original Newcomb problem there is no situation where both boxes contain a reward, yet the naive CDT makes you act as though there were. In your setup there is such a possibility, so 2-boxing is the strictly better strategy. Any decision theory better make you 2-box.
EDIT: Thanks to those who pointed out my brain fart. Of course both boxes contain a reward in the one-boxing case; it just doesn't help you any. I maintain that this is not a Newcomb-like problem, since here 2-boxing is a strictly better strategy. No one would one-box if they could help it.
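Spelled out as a two-case enumeration (a sketch, with the usual payoffs assumed): whatever box B holds, two-boxing pays exactly $1K more.

```python
# Box B's contents are fixed (by the gene) before you choose;
# two-boxing adds box A's $1,000 in either case.
for box_b in (0, 1_000_000):
    print(f"box B = {box_b:>9}: one-box -> {box_b:>9}, "
          f"two-box -> {box_b + 1_000:>9}")
```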
I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question, related to why EDT is said not to work.
Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: if you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take box A and B)? Here's a causal diagram for the problem:
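[Causal diagram (as described): two-boxing gene → your decision; two-boxing gene → Omega's prediction → contents of box B.]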
Since Omega does not do much other than translate your genes into money under a box, it does not seem to hurt to leave it out:
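[Simplified diagram (as described): two-boxing gene → your decision; two-boxing gene → $1M in box B.]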
I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box; am I wrong?)
Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p. 67) chewing gum problem? Chewing gum (or smoking) seems to be like taking box A to get at least/an additional $1K; the two-boxing gene is like the CGTA gene; the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:
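[Diagram (as described): CGTA gene → chewing gum; CGTA gene → throat abscess.]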
As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: the intuition is that neither smoking nor chewing gum gives the agent additional information.
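A minimal numeric sketch of that intuition, with made-up probabilities: if the gene acts on the choice only through a felt inclination (the "tickle"), then conditioning on the tickle screens off the act itself.

```python
# Toy "tickle" model; all probabilities hypothetical. The gene influences
# the act only via an inclination, so once you condition on the
# inclination, the act carries no further evidence about the gene.

p_gene = 0.5
p_tickle_if_gene = 0.9
p_tickle_if_no_gene = 0.1

# P(gene | tickle) by Bayes' rule:
p_tickle = p_gene * p_tickle_if_gene + (1 - p_gene) * p_tickle_if_no_gene
p_gene_if_tickle = p_gene * p_tickle_if_gene / p_tickle
print(round(p_gene_if_tickle, 2))  # 0.9

# If the act (chewing, smoking) is a function of the tickle alone, then
# P(gene | tickle, act) = P(gene | tickle): the act adds no information,
# and an EDT agent who already knows her tickle has nothing to avoid.
```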