Comment author: Manfred 29 June 2015 03:24:04PM *  4 points [-]

Hm, this is a really interesting idea.

The trouble is that it's tricky to apply a single decision theory to this problem, because by hypothesis, this gene actually changes which decision theory you use! If I'm a TDT agent, then this is good evidence I have the "TDT-agent gene," but in this problem I don't actually know whether the TDT-gene is the one-box gene or the two-box gene. If TDT leads to one-boxing, then it recommends two-boxing - but if it provably two-boxes it is the "two-box gene" and gets the bad outcome. This is to some extent an "evil decision problem." Currently I'd one-box, based on some notion of resolving these sorts of problems through more UDT-ish proof-based reasoning (though it has some problems). Or in TDT-language, I'd be 'controlling' whether the TDT-gene was the two-box gene by picking the output of TDT.

However, this problem becomes a lot easier if most people are not actually using any formal reasoning, but are just doing whatever seems like a good idea at the time. Like, the sort of reasoning that leads to people actually smoking. If I'm dropped into this genetic Newcomb's problem, or into the smoking lesion problem, and I learn that almost all people in the data set I've seen were either bad at decision theory or didn't know the results of the data, then those people no longer have quite the same evidential impact about my current situation, and I can just smoke / two-box. It's only when those people and myself are in symmetrical situations (similar information, use similar decision-making processes) that I have to "listen" to them.

Comment author: Caspar42 29 June 2015 06:53:04PM 2 points [-]

I am not entirely sure I understand your TDT analysis; maybe that's because I don't understand TDT that well. I assumed that TDT would basically just do what CDT does, because there are no simulations of the agent involved. Or do you propose that checking for the gene is something like simulating the agent?

This is to some extent an "evil decision problem."

It does not seem to be any more evil than Newcomb's problem, but I am not sure what you mean by "evil". For every decision theory, it is of course possible to set up some decision problem where that decision theory loses. Would you say that I set up the "genetic Newcomb problem" specifically to punish CDT/TDT?

Comment author: ike 29 June 2015 04:22:56PM *  0 points [-]

The gene causes you to make the choice, just like in the standard Newcomb your disposition causes your choices.

OP here said (emphasis added)

A study shows that *most* people

Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted by only genes with perfect accuracy; if you stipulate that they can, my answer would be different.

In the genetic Newcomb, if you one-box, then you had the gene to one-box, and Omega put the million.

Wrong; it's perfectly possible to have the one-box gene but to two-box anyway.

(If the facts were as stated in the OP, I'd actually expect conditioning on certain aspects of my decision-making processes to remove the correlation; that is, people who think similarly to me would have less correlation with choice-gene. If that prediction was stipulated away, my choice *might* change; it depends on exactly how that was formulated.)

Comment author: Caspar42 29 June 2015 06:39:27PM *  3 points [-]

OP here said (emphasis added)

A study shows that *most* people

Which makes your claim incorrect. My beliefs about the world are that no such choice can be predicted by only genes with perfect accuracy; if you stipulate that they can, my answer would be different.

So, as soon as it's not 100% of two-boxers who have the two-boxing gene, but only 99.9%, you assume that you are in the 0.1%?
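The evidential arithmetic behind this question can be made explicit. Here is a minimal sketch; the payoff amounts and the use of 99.9% as the gene-action correlation are illustrative assumptions, not figures from the original post:

```python
# Evidential expected value of each action in the genetic Newcomb problem,
# assuming the gene matches the action in 99.9% of cases and the standard
# Newcomb payoffs ($1M in the opaque box, $1K in the transparent one).

def evidential_ev(p_match, million=1_000_000, thousand=1_000):
    # One-boxing is evidence (with probability p_match) that you carry
    # the one-box gene, in which case the opaque box holds the million.
    ev_one_box = p_match * million
    # Two-boxing is evidence that the opaque box is (probably) empty;
    # the visible thousand is collected either way.
    ev_two_box = (1 - p_match) * million + thousand
    return ev_one_box, ev_two_box

one, two = evidential_ev(0.999)
print(f"one-box: {one:.0f}, two-box: {two:.0f}")
```

On these assumed numbers, the correlation would have to fall below roughly 50.05% before the evidential expected value of two-boxing overtakes one-boxing, so a 99.9% correlation leaves the evidentialist's recommendation unchanged.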

Comment author: Khoth 29 June 2015 04:20:58PM -1 points [-]

I would one-box if I had the one-boxing gene, and two-box if I had the two-boxing gene. I don't know what decision-making theory I'm using, because the problem statement didn't specify how the gene works.

I don't really see the point of asking people with neither gene what they'd do.

Comment author: Caspar42 29 June 2015 06:13:09PM 2 points [-]

Maybe I should have added that you don't know which gene you have before you make the decision, i.e. before you two-box or one-box.

Comment author: ike 29 June 2015 02:55:05PM -1 points [-]

I would two-box in that situation. Don't see a problem.

Comment author: Caspar42 29 June 2015 03:47:35PM 2 points [-]

Well, the problem seems to be that this will not give you the $1M, just like in Newcomb's original problem.

Comment author: shminux 29 June 2015 03:07:20PM *  -2 points [-]

Your "Newcomb-like" problem isn't. In the original Newcomb problem there is no situation where both boxes contain a reward, yet the naive CDT makes you act as though there were. In your setup there is such a possibility, so 2-boxing is the strictly better strategy. Any decision theory better make you 2-box.

EDIT: Thanks to those who pointed out my brain fart. Of course both boxes contain a reward in the one boxing case. It just doesn't help you any. I maintain that this is not a Newcomb-like problem, since here 2-boxing is a strictly better strategy. No one would one-box if they can help it.
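The dominance claim being made here can be checked state by state. A minimal sketch, assuming the standard Newcomb payoff amounts (the original post's amounts may differ):

```python
# State-by-state dominance: once the gene (and hence the opaque box's
# contents) is fixed, two-boxing yields strictly more in every state.
# This is the causal reasoning behind the claim above; the evidential
# argument for one-boxing instead conditions on what the choice reveals.

MILLION, THOUSAND = 1_000_000, 1_000

# The state is whether the opaque box was filled, which the gene
# (not the choice) determines.
for opaque_filled in (True, False):
    one_box = MILLION if opaque_filled else 0
    two_box = one_box + THOUSAND
    assert two_box > one_box  # two-boxing dominates in this state
print("two-boxing dominates state-by-state")
```

The disagreement in this thread is precisely over whether this dominance reasoning is the right one when the choice is evidence about the state.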

Comment author: Caspar42 29 June 2015 03:26:17PM 3 points [-]

I am sorry, but I am not sure what you mean by that. If you are a one-boxing agent, then both boxes of Newcomb's original problem contain a reward, assuming that Omega is a perfect predictor.

Comment author: Unknowns 29 June 2015 10:55:52AM 2 points [-]

I have never agreed that there is a difference between the smoking lesion and Newcomb's problem. I would one-box, and I would not smoke. Long discussion in the comments here.

Comment author: Caspar42 29 June 2015 11:16:58AM 1 point [-]

Interesting, thanks! I thought that it was more or less consensus that the smoking lesion refutes EDT. So, where should I look to see EDT refuted? Absent-minded driver, Evidential Blackmail, counterfactual mugging or something else?

Comment author: Vulture 04 October 2014 12:38:26AM 3 points [-]

As a (relatively) non-technical LW "regular" I'm somewhat curious for vaguely sociological reasons why this post is receiving such an anomalous lack of replies.

Comment author: Caspar42 04 October 2014 10:34:13AM 1 point [-]

Me too. Note, however, that I received several PMs from volunteers.

Comment author: aberglas 29 September 2014 11:45:25PM 1 point [-]

Humans are definitely a result of natural selection, but it does not seem to be difficult at all to find goals of ours that do not serve the goal of survival or reproduction at all.

I challenge you to find one.

We put a lot of effort into our children. We work in tribes and therefore like to work with people that support us and ostracize those that are seen to be unhelpful. So we ourselves need to be helpful and to be seen to be helpful.

We help our children, family, tribe, and general community in that genetic order.

We like to dance. It is the traditional way to attract a mate.

We have a strong sense of moral value because people that have that strong sense obey the rules and so are more likely to fit in and be able to have grandchildren.

Comment author: Caspar42 30 September 2014 01:06:38PM 1 point [-]

I challenge you to find one.

One particular example of those "evolutionary accidents/coincidences" is homosexuality in males. Here are two studies claiming that homosexuality in males correlates with fecundity in female maternal relatives:

Ciani, Iemmola, Blecher: Genetic factors increase fecundity in female maternal relatives of bisexual men as in homosexuals.

Iemmola, Ciani: New evidence of genetic factors influencing sexual orientation in men: female fecundity increase in the maternal line.

So, there appear to be some genetic factors that prevail because they make women more fecund. Coincidentally, they also make men homosexual, which is an obstacle to both reproduction and survival (not only due to the homophobia of others but also STDs). I presume that our (human) genetic material in particular is full of such coincidences, because the lack of them (i.e. the thesis that all genetic factors that prevail in evolutionary processes only lead to higher reproduction and survival rates and nothing else) seems very unlikely.

Comment author: Caspar42 29 September 2014 01:16:09PM 4 points [-]

This post argues that there is one and only one super goal for any agent, and that goal is simply to exist in a competitive world. Our human sense of other purposes is just an illusion created by our evolutionary origins. It is not the goal of an apple tree to make apples. Rather it is the goal of the apple tree's genes to exist. The apple tree has developed a clever strategy to achieve that, namely it causes people to look after it by producing juicy apples.

Humans are definitely a result of natural selection, but it does not seem to be difficult at all to find goals of ours that do not serve the goal of survival or reproduction at all. Evolution seems to produce these other preferences accidentally. One way this can happen is exemplified by the following: our ability to contemplate our thinking from an almost external perspective (sometimes referred to as self-consciousness) is definitely helpful for learning and improving our thinking and could therefore prevail in evolution. However, it may also be the cause of altruism, because it makes every single one of us realize that they are not very special. (This is by no means an attempt to explain altruism scientifically or anything of the sort...) More generally, it would be a really strange coincidence if all cognitive features of an organism in our physical world that serve the goal to survive and reproduce served no other goal. In conclusion, even evolution can (probably) produce (by coincidence) organisms with goals that are not subgoals of the goal to survive and reproduce.

Likewise the paper clip making AI only makes paper clips because if it did not make paper clips then the people that created it would turn it off and it would cease to exist. (That may not be a conscious choice of the AI any more than making juicy apples was a conscious choice of the apple tree, but the effect is the same.)

Now, imagine the paper clip maximizer to be more than a robot arm; imagine it to be a well-programmed Seed AI (or the like). As pointed out in ViliamBur's and cousinit's comments, its goal will probably not be easily changed (by coincidence or by evolution of several such AIs); for example, it could save its source code on several hard drives that are synchronized by a hard-wired mechanism or something similar. Now this paper clip maximizer would start turning all matter into paper clips. To achieve its goal, it would certainly remain in existence (and thereby give you the illusion of having the supergoal to exist in the first place) and protect its values (which is not extremely difficult). Assuming it is successful (and we can expect this from a seed AI/superintelligence), the only matter (in reach) left would at some point be the hardware of the paper clip maximizer itself. What would the paper clip maximizer do then? In conclusion, self-preservation and maybe propagation of value may be important subgoals, but mere existence is certainly not the supergoal.

Comment author: ChristianKl 26 June 2014 03:48:12PM 3 points [-]

My experience is that philosophers often carelessly use words to avoid conveying a clear statement that could be refutable.

If they do it with the purpose of not making a statement that's open to certain refutations, I don't see how that's careless.

Comment author: Caspar42 26 June 2014 06:34:03PM 1 point [-]

Oops... ;-)
