Comment author: Khoth 29 June 2015 07:10:59PM 0 points [-]

I wasn't assuming that I knew beforehand.

It's just that, if I have the one-boxing gene, it will compel me (in some manner not stated in the problem) to use a decision algorithm which will cause me to one-box, and similarly for the two-box gene.

Comment author: Caspar42 29 June 2015 07:44:03PM 2 points [-]

Ah, okay. Well, the idea of my scenario is that you have no idea how any of this works. So, for example, the two-boxing gene could make you 100% sure that you have or don't have the gene, so that two-boxing seems like the better decision. So, until you actually make a decision, you have no idea which gene you have. (Preliminary decisions, as in Eells' tickle defense paper, are also irrelevant.) So, you have to make some kind of decision. The moment you one-box, you can be pretty sure that you don't have the two-boxing gene, since it did not manage to trick you into two-boxing, which it usually does. So, why not just one-box and take the money? :-)
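To make the "why not just one-box and take the money" argument concrete, here is a toy expected-utility calculation in the EDT style. The 99%/1% correlation figures are hypothetical (the problem only says "most people"); the payoffs are the $1M and $1K from the original post, and percentages are kept as integers so the arithmetic is exact.

```python
# Toy EDT-style calculation for the genetic Newcomb problem.
# Hypothetical correlation: 99% of two-boxers carry the two-boxing
# gene, and only 1% of one-boxers do. Carrying the gene means Omega
# left box B empty.
gene_if_twobox = 99   # % of two-boxers carrying the gene
gene_if_onebox = 1    # % of one-boxers carrying the gene

M, K = 1_000_000, 1_000  # box B and box A payoffs

# Conditional expected utility of each action, given the correlation
# between the action and the gene.
eu_onebox = (100 - gene_if_onebox) * M // 100
eu_twobox = (100 - gene_if_twobox) * M // 100 + K

print(eu_onebox)  # 990000
print(eu_twobox)  # 11000
```

On these (made-up) numbers the evidential value of one-boxing dwarfs the guaranteed $1K, which is the intuition behind "why not just one-box".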

Comment author: Manfred 29 June 2015 03:24:04PM *  4 points [-]

Hm, this is a really interesting idea.

The trouble is that it's tricky to apply a single decision theory to this problem, because by hypothesis, this gene actually changes which decision theory you use! If I'm a TDT agent, then this is good evidence I have the "TDT-agent gene," but in this problem I don't actually know whether the TDT-gene is the one-box gene or the two-box gene. If TDT leads to one-boxing, then it recommends two-boxing - but if it provably two-boxes it is the "two-box gene" and gets the bad outcome. This is to some extent an "evil decision problem." Currently I'd one-box, based on some notion of resolving these sorts of problems through more UDT-ish proof-based reasoning (though it has some problems). Or in TDT-language, I'd be 'controlling' whether the TDT-gene was the two-box gene by picking the output of TDT.

However, this problem becomes a lot easier if most people are not actually using any formal reasoning, but are just doing whatever seems like a good idea at the time. Like, the sort of reasoning that leads to people actually smoking. If I'm dropped into this genetic Newcomb's problem, or into the smoking lesion problem, and I learn that almost all people in the data set I've seen were either bad at decision theory or didn't know the results of the data, then those people no longer have quite the same evidential impact about my current situation, and I can just smoke / two-box. It's only when those people and myself are in symmetrical situations (similar information, use similar decision-making processes) that I have to "listen" to them.

Comment author: Caspar42 29 June 2015 06:53:04PM 2 points [-]

I am not entirely sure I understand your TDT analysis; maybe that's because I don't understand TDT that well. I assumed that TDT would basically just do what CDT does, because there are no simulations of the agent involved. Or do you propose that checking for the gene is something like simulating the agent?

This is to some extent an "evil decision problem."

It does not seem to be more evil than Newcomb's problem, but I am not sure what you mean by "evil". For every decision theory it is possible, of course, to set up some decision problem where this decision theory loses. Would you say that I set up the "genetic Newcomb problem" specifically to punish CDT/TDT?

Comment author: ike 29 June 2015 04:22:56PM *  0 points [-]

The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices.

OP here said (emphasis added)

A study shows that most people

Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted by only genes with perfect accuracy; if you stipulate that they can, my answer would be different.

In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.

Wrong; it's perfectly possible to have the gene to one-box but two-box.

(If the facts were as stated in the OP, I'd actually expect conditioning on certain aspects of my decision-making processes to remove the correlation; that is, people who think similarly to me would have less correlation with choice-gene. If that prediction was stipulated away, my choice *might* change; it depends on exactly how that was formulated.)

Comment author: Caspar42 29 June 2015 06:39:27PM *  3 points [-]

OP here said (emphasis added)

A study shows that most people

Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted by only genes with perfect accuracy; if you stipulate that they can, my answer would be different.

So, as soon as it's not 100% but only 99.9% of two-boxers who have the two-boxing gene, you assume that you are in the 0.1%?
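Caspar42's point can be put in Bayesian terms. With a hypothetical 50% base rate for the gene (no base rate is given in the thread) and the 99.9% figure from this exchange, "I am in the 0.1%" is a very unlikely hypothesis once you have actually decided to two-box:

```python
# Posterior probability of carrying the two-boxing gene after deciding
# to two-box, via Bayes' theorem. Hypothetical numbers: 50% base rate;
# carriers two-box 99.9% of the time, non-carriers 0.1% of the time.
p_gene = 0.5
p_twobox_given_gene = 0.999
p_twobox_given_no_gene = 0.001

posterior = p_twobox_given_gene * p_gene / (
    p_twobox_given_gene * p_gene + p_twobox_given_no_gene * (1 - p_gene)
)
print(round(posterior, 3))  # 0.999
```

A different base rate shifts the posterior somewhat, but for any non-extreme prior the decision to two-box remains strong evidence of the gene.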

Comment author: Khoth 29 June 2015 04:20:58PM -1 points [-]

I would one-box if I had the one-boxing gene, and two-box if I had the two-boxing gene. I don't know what decision-making theory I'm using, because the problem statement didn't specify how the gene works.

I don't really see the point of asking people with neither gene what they'd do.

Comment author: Caspar42 29 June 2015 06:13:09PM 2 points [-]

Maybe I should have added that you don't know which genes you have, before you make the decision, i.e. two-box or one-box.

Comment author: ike 29 June 2015 02:55:05PM -1 points [-]

I would two-box in that situation. Don't see a problem.

Comment author: Caspar42 29 June 2015 03:47:35PM 2 points [-]

Well, the problem seems to be that this will not give you the $1M, just like in Newcomb's original problem.

Comment author: shminux 29 June 2015 03:07:20PM *  -2 points [-]

Your "Newcomb-like" problem isn't. In the original Newcomb problem there is no situation where both boxes contain a reward, yet the naive CDT makes you act as though there were. In your setup there is such a possibility, so 2-boxing is the strictly better strategy. Any decision theory better make you 2-box.

EDIT: Thanks to those who pointed out my brain fart. Of course both boxes contain a reward in the one-boxing case. It just doesn't help you any. I maintain that this is not a Newcomb-like problem, since here 2-boxing is a strictly better strategy. No one would one-box if they could help it.

Comment author: Caspar42 29 June 2015 03:26:17PM 3 points [-]

I am sorry, but I am not sure what you mean by that. If you are a one-boxing agent, then both boxes of Newcomb's original problem contain a reward, assuming that Omega is a perfect predictor.

Comment author: Unknowns 29 June 2015 10:55:52AM 2 points [-]

I have never agreed that there is a difference between the smoking lesion and Newcomb's problem. I would one-box, and I would not smoke. Long discussion in the comments here.

Comment author: Caspar42 29 June 2015 11:16:58AM 1 point [-]

Interesting, thanks! I thought that it was more or less the consensus that the smoking lesion refutes EDT. So, where should I look to see EDT refuted? The absent-minded driver, evidential blackmail, counterfactual mugging, or something else?

Two-boxing, smoking and chewing gum in Medical Newcomb problems

14 Caspar42 29 June 2015 10:35AM

I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question, related to why EDT is said not to work.

Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems such as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: If you don't have the "two-boxing gene", Omega puts $1M into box B; otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take boxes A and B)? Here's a causal diagram for the problem:

[causal diagram]
Since Omega does not do much other than translate your genes into money under a box, it does not seem to hurt to leave it out:

[simplified causal diagram]
I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box, am I wrong?)

Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p. 67) chewing gum problem? Chewing gum (or smoking) seems to be like taking box A to get the additional $1K, the two-boxing gene is like the CGTA gene, and the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem:

[causal diagram for the chewing gum problem]
As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: the intuition is that neither smoking nor chewing gum gives the agent additional information.
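The claimed symmetry can be checked mechanically: an EDT agent that conditions its credences on its own contemplated action produces structurally identical calculations for the smoking lesion and the genetic Newcomb problem. The numbers below are made up for illustration; only the shared structure matters.

```python
# EDT treats both problems the same way: condition the probability of
# the bad outcome on the contemplated action. Illustrative numbers only.

def edt_value(p_bad_given_act, act_bonus, bad_utility=-1_000_000):
    """Conditional expected utility of an action under EDT."""
    return act_bonus + p_bad_given_act * bad_utility

# Smoking lesion: smoking pays +1000, but (hypothetically) 99% of
# smokers vs. 1% of abstainers carry the lesion and get cancer.
smoke = edt_value(0.99, 1_000)
abstain = edt_value(0.01, 0)

# Genetic Newcomb: two-boxing pays +1000, but 99% of two-boxers vs.
# 1% of one-boxers carry the gene and find box B empty.
twobox = edt_value(0.99, 1_000)
onebox = edt_value(0.01, 0)

# Identical inputs, identical verdicts: EDT says abstain and one-box.
assert (smoke < abstain) == (twobox < onebox)
```

So whatever extra ingredient makes smoking/chewing gum acceptable while one-boxing remains correct, it cannot show up in this bare conditional-probability calculation; it has to come from somewhere like the "tickle" assumption above.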

Comment author: Vulture 04 October 2014 12:38:26AM 3 points [-]

As a (relatively) non-technical LW "regular" I'm somewhat curious for vaguely sociological reasons why this post is receiving such an anomalous lack of replies.

Comment author: Caspar42 04 October 2014 10:34:13AM 1 point [-]

Me too. Note, however, that I received several PMs from volunteers.

Comment author: aberglas 29 September 2014 11:45:25PM 1 point [-]

Humans are definitely a result of natural selection, but it does not seem to be difficult at all to find goals of ours that do not serve the goal of survival or reproduction at all.

I challenge you to find one.

We put a lot of effort into our children. We work in tribes and therefore like to work with people that support us, and ostracize those that are seen to be unhelpful. So we ourselves need to be helpful and to be seen to be helpful.

We help our children, family, tribe, and general community in that genetic order.

We like to dance. It is the traditional way to attract a mate.

We have a strong sense of moral value because people that have that strong sense obey the rules and so are more likely to fit in and be able to have grandchildren.

Comment author: Caspar42 30 September 2014 01:06:38PM 1 point [-]

I challenge you to find one.

One particular example of those "evolutionary accidents / coincidences", is homosexuality in males. Here are two studies claiming that homosexuality in males correlates with fecundity in female maternal relatives:

Ciani, Iemmola, Blecher: Genetic factors increase fecundity in female maternal relatives of bisexual men as in homosexuals.

Iemmola, Ciani: New evidence of genetic factors influencing sexual orientation in men: female fecundity increase in the maternal line.

So, there appear to be some genetic factors that prevail because they make women more fecund. Coincidentally, they also make men homosexual, which is an obstacle to both reproduction and survival (not only due to the homophobia of others but also STDs). I presume that our (human) genetic material especially is full of such coincidences, because the lack of them (i.e. the thesis that all genetic factors that prevail in evolutionary processes only lead to higher reproduction and survival rates and nothing else) seems very unlikely.
