
Comment author: Unknowns 17 July 2015 02:40:04AM 7 points [-]

The main problem with this is that it says that human beings are extremely unlike all nearby alien races. But if you are willing to admit that humanity is that unique, you might as well say that intelligence evolved only on Earth, which is a much simpler and more likely hypothesis.

Comment author: Unknowns 16 July 2015 04:17:48AM 1 point [-]

If "being rational" means choosing the best option, you never have to choose between "being reasonable" and "being rational," because you should always choose the best option. And sometimes the best option is influenced by what other people think of what you are doing; sometimes it's not.

Comment author: [deleted] 13 July 2015 07:37:12AM *  5 points [-]

I was just wondering about the following: testosterone as a hormone is actually closely linkable to pretty much everything that is culturally considered masculine (muscles, risk-taking i.e. courage, sex drive, etc.), and thus it is not wrong to "essentialize" it as The He Hormone.

However, it seems estrogen does not work like that for women: surprisingly, it is NOT linked with many culturally feminine characteristics, and probably should NOT be essentialized as The She Hormone. For example, it crashes during childbirth, i.e. it has nothing to do with nurturing or motherhood stuff (if it did, it should peak at birth and gradually drop as children become more self-sufficient, yet it actually peaks in early pregnancy and drops at birth). Given that birth control pills are estrogen, it reduces fertility (at least in those doses), and there is a common report that it reduces libido as well (again, at least in those doses). The primary behavioral effects seem to be a strong desire to be accepted by one's group (see puberty, "teenage girl syndrome", and once I learned this I saw the word "marginalization" in a different light as well) and mood swings (see: early pregnancy). (I should also add that I see more and more health-conscious women warning each other about xenoestrogens in food increasing the risk of ovarian cancer. They are probably not very good for men either (manboobz?), so I think this should be paid attention to in general; I just want to point out that xenoestrogens seem to have no beneficial effects for women, which is a bit weird as well.)

So I just want to say it is sort of odd that estrogen does not represent cultural femininity nearly as well as testosterone represents cultural masculinity.

Any good articles or books or personal opinions that shed some light on this?

I should not be surprised that complex human behaviors cannot be reduced to a hormone. But having once been surprised that many popular, symbolic, role-model men in fact often can be (that everything a Mike Tyson type symbolizes is T), I expected the same here...

In response to comment by [deleted] on Open Thread, Jul. 13 - Jul. 19, 2015
Comment author: Unknowns 13 July 2015 07:57:18AM 22 points [-]

It actually is not very odd for there to be a difference like this. Given that there are only two sexes, there only needs to be one hormone which is sex-determining in that way. Having two could in fact have strange effects of its own.

Comment author: Unknowns 12 July 2015 05:50:17AM 6 points [-]

I think what you need to realize is that it is not a question of proving that all of those things are false, but rather that it makes no difference whether they are or not. For example, when you go to sleep and wake up, it feels just the same whether it is still you or a different person, so it doesn't matter at all.

Comment author: Unknowns 06 July 2015 01:19:02PM 5 points [-]

Excellent post. Basically simpler hypotheses are on average more probable than more complex ones, no matter how complexity is defined, as long as there is a minimum complexity and no maximum complexity. But some measures of simplicity are more useful than others, and this is determined by the world we live in; thus we learn by experience that mathematical simplicity is a better measure than "number of words it takes to describe the hypothesis," even though both would work to some extent.
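
To make that first claim a bit more explicit, here is a rough sketch of the counting argument (my own gloss, not the post's wording; it assumes only countably many hypotheses and a complexity measure bounded below but not above):

```latex
% Sketch only: a gloss on why any complexity measure with a minimum but no
% maximum makes simpler hypotheses more probable on average.
\begin{itemize}
  \item There are countably many hypotheses $h_1, h_2, \ldots$ with prior
        probabilities $p(h_i)$ and complexities $c(h_i)$, where $c$ is bounded
        below but unbounded above.
  \item Since $\sum_i p(h_i) = 1$, for any $\varepsilon > 0$ at most
        $1/\varepsilon$ hypotheses can have $p(h_i) \ge \varepsilon$.
  \item That finite set has some maximum complexity $C_\varepsilon$, so every
        hypothesis with $c(h_i) > C_\varepsilon$ must have $p(h_i) < \varepsilon$.
  \item Hence, however complexity is measured, probability is forced toward
        zero as complexity grows: sufficiently complex hypotheses end up less
        probable than the finitely many probable simple ones.
\end{itemize}
```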

Comment author: philh 01 July 2015 01:41:21PM -1 points [-]

Unless something happens out of the blue to force my decision (in which case it's not my decision), this situation doesn't happen. There might be people for whom Omega can predict with 100% certainty that they're going to one-box even after Omega has told them his prediction, but I'm not one of them.

(I'm assuming here that people get offered the game regardless of their decision algorithm. If Omega only makes the offer to people whom he can predict certainly, we're closer to a counterfactual mugging. At any rate, it changes the game significantly.)

Comment author: Unknowns 01 July 2015 02:03:05PM *  1 point [-]

I agree that in reality it is often impossible to predict someone's actions if you are going to tell them your prediction. That is why it is perfectly possible that the situation where you know which gene you have is itself impossible. But in any case this is all hypothetical, because the situation posed assumes you cannot know which gene you have until you choose one or both boxes, at which point you immediately know.

EDIT: You're really not getting the point, which is that the genetic Newcomb is identical to the original Newcomb in decision-theoretic terms. Here you're arguing not about the decision theory issue, but about whether or not the situations involved are possible in reality. If Omega can't predict with certainty when he tells you his prediction, then I can equivalently say that the gene only predicts with certainty when you don't know about it. Knowing about the gene may allow you to two-box, but that is no different from saying that knowing Omega's decision before you make your choice would allow you to two-box, which it would.

Basically anything said about one case can be transformed into the other case by fairly simple transpositions. This should be obvious.
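
To spell out a few of the transpositions I have in mind, here is an illustrative (and certainly not exhaustive) mapping; the wording on both sides is just my shorthand for the two setups:

```python
# Illustrative only: how claims about the original Newcomb translate into
# claims about the genetic Newcomb, and back.
TRANSPOSITIONS = {
    "Omega's prediction":                "the gene you carry",
    "Omega inspecting your disposition": "the gene producing your disposition",
    "being told Omega's prediction":     "being told which gene you have",
    "Omega predicting with certainty":   "the gene correlating perfectly with the choice",
}

for original, genetic in TRANSPOSITIONS.items():
    print(f"{original}  <->  {genetic}")
```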

Comment author: philh 01 July 2015 09:25:50AM -1 points [-]

I was referring to "in principle", not to reality.

> You believe that if you saw you had the gene that says "one-box", then you could still two-box

Yes. I think that if I couldn't do that, it wouldn't be me. If we don't permit people without the two-boxing gene to two-box (the question as originally written did, but we don't have to), then this isn't a game I can possibly be offered. You can't take me, and add a spooky influence which forces me to make a certain decision one way or the other, even when I know it's the wrong way, and say that I'm still making the decision. So again, we're at the point where I don't know why we're asking the question. If not-me has the gene, he'll do one thing; if not, he'll do the other; and it doesn't make a difference what he should do. We're not talking about agents with free action, here.

Again, I'm not sure exactly how this extends to the case where an agent doesn't know whether they have the gene.

Comment author: Unknowns 01 July 2015 11:51:54AM *  1 point [-]

What if we take the original Newcomb, then Omega puts the million in the box and tells you, "I have predicted with 100% certainty that you are only going to take one box, so I put the million there"?

Could you two-box in that situation, or would that take away your freedom?

If you say you could two-box in that situation, then once again the original Newcomb and the genetic Newcomb are the same.

If you say you could not, why would that still be you, when in the genetic case it would not be?

Comment author: philh 30 June 2015 02:46:10PM 0 points [-]

So I think where we differ is that I don't believe in a gene that controls my decision in the same way that you do. I don't know how well I can articulate myself, but:

As an AI, I can choose whether my programming makes me one-box or not, by one-boxing or not. My programming isn't responsible for my reasoning, it is my reasoning. If Omega looks at my source code and works out what I'll do, then there are no worlds where Omega thinks I'll one-box, but I actually two-box.

But imagine that all AIs have a constant variable in their source code, unhelpfully named TMP3. AIs with TMP3=true tend to one-box in Newcomblike problems, and AIs with TMP3=false tend to two-box. Omega decides whether to put in $1M by looking at TMP3.

(Does the problem still count as Newcomblike? I'm not sure that it does, so I don't know if TMP3 correlates with my actions at all. But we can say that TMP3 correlates with how AIs act in GNP, instead.)

If I have access to my source code, I can find out whether I have TMP3=true or false. And regardless of which it is, I can two-box. (If I can't choose to two-box, after learning that I have TMP3=true, then this isn't me.) Since I can two-box without changing Omega's decision, I should.

Whereas in the original Newcomb's problem, I can look at my source code, and... maybe I can prove whether I one- or two-box. But if I can, that doesn't constrain my decision so much as predict it, in the same way that Omega can; the prediction of "one-box" is going to take into account the fact that the arguments for one-boxing overwhelm the consideration of "I really want to two-box just to prove myself wrong". More likely, I can't prove anything. And I can one- or two-box, but Omega is going to predict me correctly, unlike in GNP, so I one-box.

The case where I don't look at my source code is more complicated (maybe AIs with TMP3=true will never choose to look?), but I hope this at least illustrates why I don't find the two comparable.
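
To make the contrast concrete, here is a minimal sketch of the two payoff structures as I see them (the $1M/$1K amounts are the standard ones; TMP3, the function names, and the perfect-predictor assumption in the original problem are just the illustrative setup above):

```python
# Minimal sketch, not a claim about what either problem "really" is:
# the genetic/TMP3 version fixes Omega's decision by a flag, while the
# original version ties it to the choice itself via perfect prediction.

def payoff(one_box: bool, omega_filled_box: bool) -> int:
    """Standard Newcomb payoffs: $1M in the opaque box if Omega filled it,
    plus $1K from the transparent box if you take both."""
    return (1_000_000 if omega_filled_box else 0) + (0 if one_box else 1_000)

def genetic_newcomb(tmp3: bool, one_box: bool) -> int:
    # Omega fills the box by looking only at the TMP3 flag (the "gene"),
    # so holding TMP3 fixed, two-boxing always gains an extra $1,000.
    return payoff(one_box, omega_filled_box=tmp3)

def original_newcomb(one_box: bool) -> int:
    # Omega perfectly predicts the choice, so the prediction always matches it.
    return payoff(one_box, omega_filled_box=one_box)

if __name__ == "__main__":
    for tmp3 in (True, False):
        for choice in (True, False):
            print(f"genetic:  TMP3={tmp3}, one_box={choice} -> "
                  f"${genetic_newcomb(tmp3, choice):,}")
    for choice in (True, False):
        print(f"original: one_box={choice} -> ${original_newcomb(choice):,}")
```

If the mismatched rows (TMP3 and the actual choice disagreeing) can never actually occur, the two tables collapse into the same thing; whether they can occur seems to be the crux.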

(That said, I might actually one-box, because I'm not sufficiently convinced of my reasoning.)

Comment author: Unknowns 01 July 2015 04:54:19AM 1 point [-]

"I don't believe in a gene that controls my decision" refers to reality, and of course I don't believe in the gene either. The disagreement is whether or not such a gene is possible in principle, not whether or not there is one in reality. We both agree there is no gene like this in real life.

As you note, if an AI reads its source code and sees that it says "one-box", then it will still one-box, because it simply does what it is programmed to do. First of all, this violates the conditions as proposed (I said the AIs cannot look at their source code, and Caspar42 stated that you do not know whether or not you have the gene).

But for the sake of argument we can allow looking at the source code, or at the gene. You believe that if you saw you had the gene that says "one-box", then you could still two-box, so it couldn't work the same way. You are wrong. Just as the AI would predictably end up one-boxing if it had that code, so you would predictably end up one-boxing if you had the gene. It is just a question of how this would happen. Perhaps you would go through your decision process, decide to two-box, and then suddenly become overwhelmed with a desire to one-box. Perhaps it would be because you would think again and change your mind. But one way or another you would end up one-boxing. And this "doesn't constrain my decision so much as predict it": obviously, both in the case of the AI and in the case of the gene, causality does in reality go from the source code to one-boxing, or from the gene to one-boxing. But it is entirely the same in both cases -- causality runs only from past to future, yet for you it feels just like a normal choice that you make in the normal way.

Comment author: OrphanWilde 30 June 2015 02:49:54PM -1 points [-]

In the original Newcomb, causality genuinely flowed in reverse. Your decision -did- change whether or not there was a million dollars in the box. The original problem had information flowing backwards in time (either through a simulation which, for practical purposes, plays time forward, then goes back to the origin, or through an omniscient being seeing into the future, however one wishes to interpret it).

In the medical Newcomb, causality -doesn't- flow in the reverse, so behaving as though causality -is- flowing in the reverse is incorrect.

Comment author: Unknowns 01 July 2015 04:46:24AM 3 points [-]

In this case you are simply interpreting the original Newcomb to mean something absurd, because causality cannot "genuinely flow in reverse" in any circumstances whatsoever. Rather, in the original Newcomb, Omega looks at your disposition, which exists from the very beginning. If he sees that you are disposed to one-box, he puts in the million. This is just the same as someone looking at the source code of an AI and seeing whether it will one-box, or someone looking for the one-boxing gene.

Then, when you make the choice, in the original Newcomb you choose to one-box. Causality flows in only one direction, from your original disposition, which you cannot change since it is in the past, to your choice. This causality is entirely the same as in the genetic Newcomb. Causality never goes any direction except past to future.

Comment author: OrphanWilde 30 June 2015 02:00:00PM 0 points [-]

No, your decision merely reveals what genes you have, your decision cannot change what genes you have.

Comment author: Unknowns 30 June 2015 02:11:04PM 2 points [-]

Even in the original Newcomb you cannot change whether or not there is a million in the box. Your decision simply reveals whether or not it is already there.
