Comment author: Lumifer 16 March 2017 05:50:35PM *  1 point [-]

But in fact all the probabilities are equally real, depending on your selection process.

This is not so. You are conflating two kinds of uncertainty (and hence two kinds of probability): the uncertainty of the actual outcome in the real, physical world, and the uncertainty of an agent who does not know the outcome.

For a random person, the total probability of getting cancer will be 45.5%.

Let's unroll this. The actual probability for a random person to get cancer is either 90% or 1%. You just don't know which one of these two numbers applies, so you produce an estimate by combining them. Your estimate doesn't change anything in the real world and someone else -- e.g. someone who has access to the lesion-scanning results for this random person -- would have a different estimate.

Note, by the way, the difference between speaking about a "random person" and about the whole population. For the population as a whole, the 45.5% value is correct: out of 1000 people, about 455 will get cancer. But for a single person it is not correct: a single person has either a 90% actual probability or a 1% actual probability.

For simplicity consider an urn containing an equal number of white and black balls. You would say that a "random ball" has a 50% chance of being black -- but each ball is either black or white, it's not 50% of anything. 50% of the entire set of balls is black, true, but each ball's state is not uncertain and is not subject to ("actual") probability.

Comment author: entirelyuseless 17 March 2017 01:16:56AM 0 points [-]

The problem does not say that any physical randomness is involved. The 90% rate among those with the lesion may be determined by entirely physical and determinate causes. And in that case, the 90% is just as "actual" or "not actual", whichever you prefer, as the 45.5% of the population who get cancer, or as the 85.55% of smokers who get cancer.

Second, physical randomness is irrelevant anyway, because the only way it would make a difference to your choice would be by making you subjectively uncertain of things. As I said in an earlier comment, if we knew for certain that determinism was true, we would make our choices in the same way we do now. So the only uncertainty that is relevant to decision making is subjective uncertainty.

Comment author: moridinamael 16 March 2017 12:03:54PM 7 points [-]

This perspective suggests "don't yak shave" is a classic deepity. The superficial, true meaning is "don't waste time on unimportant subtasks", and the clearly false but immediately actionable meaning is "don't do subtasks". If you've already clearly identified which tasks are on the critical path and which are not, the yak-shaving heuristic is useless, and if you haven't, it's harmfully misleading.

Comment author: entirelyuseless 16 March 2017 02:47:56PM 4 points [-]

I think the post is suggesting rather that the distinction between an "unimportant subtask" and an "important subtask" is mostly fallacious. Omitting lots and lots of somewhat unimportant subtasks, even though none of them was very important taken individually, can lead to very bad effects overall. Mark Forster notes that if you keep "prioritizing" in such a way that you are always doing the "important" things, and consequently never doing the less important ones, sooner or later they will show you just how unimportant they are.

Comment author: g_pepper 15 March 2017 05:16:51PM *  0 points [-]

You stated that you think the idea of increasing or decreasing your confidence that a thing will (or did) happen and the idea of increasing or decreasing the probability that it actually will (or did) happen are “the same thing as long as your confidence is reasonable”. I disagree that the probability that a thing actually will (or did) happen is the same as your confidence that it will (or did) happen, as the following examples illustrate:

  • John’s wife died under suspicious circumstances. You are a detective investigating the death, and you suspect John killed his wife. Clearly, John either did or did not kill his wife, and presumably John knows which of these is the case. As you uncover each new piece of evidence, however, you will adjust your confidence that John killed his wife up or down, depending on whether the evidence supports or undermines that idea. The evidence does not change the fact of what actually happened – it just changes your confidence in your assessment that John killed his wife. This is like the Newcomb example: Omega either did or did not put $1M in the second box, and any evidence you obtain from your choice to one-box or two-box may change your assessment of the likelihood, but it does not affect the reality of the matter.

  • Suppose I put 900 black marbles and 100 white marbles in an opaque jar and mix them more or less uniformly. I now ask you to estimate the probability that a marble selected blindly from the jar will be white, and then to actually remove a marble, examine it and replace it. This is repeated a number of times, and each time the marble is replaced, the contents of the jar are mixed. Suppose that, by luck, your first four picks yield two white marbles and two black marbles. You will probably assess the likelihood of the next marble being white at or around .5. After an increasing number of trials, however, your estimate will begin to converge on .1. Yet the actual probability has been .1 all along – what has changed is your assessment of the probability (see the short simulation after this list). This is like the smoking lesion hypothetical, where your decision to smoke may increase your assessment of the probability that you will get cancer, but does not affect the actual probability that you will get cancer.
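To make the convergence concrete, here is a minimal simulation of the marble example above (my own sketch, not part of the original comment; the draw counts are arbitrary). The observer's running frequency estimate drifts toward the jar's fixed composition of .1 white.

```python
import random

# The jar from the example above: 900 black marbles and 100 white marbles.
jar = ["black"] * 900 + ["white"] * 100

white_draws = 0
for trial in range(1, 10_001):
    # Draw blindly with replacement; the jar is remixed after each draw.
    if random.choice(jar) == "white":
        white_draws += 1
    if trial in (4, 100, 1_000, 10_000):
        print(f"after {trial:>6} draws, estimated P(white) = {white_draws / trial:.3f}")
```

Early estimates can sit near .5 by luck, but the long-run estimate settles near .1, which is the distinction being drawn here between the assessment and the actual probability.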

In other words, from my point of view, "the probability a thing will happen" just is your reasonable assessment, not an objective feature of the world.

In both the examples listed above, there is an objective reality (John either did or did not kill his wife, and the probability of selecting a white marble is .1), and there is your confidence that John killed his wife, and your estimation of the probability of selecting a white marble. These things all exist, and they are not the same.

Again, every time I brought up the idea of a perfect correlation, you simply fought the hypothetical instead of addressing it.

You brought up the idea of omniscience when you said:

Note that if your estimates are different, you may be certain you will get the million if you take one box, and certain that you will not, if you take both.

and I addressed it by pointing out that omniscience is not a part of Newcomb. Perfect correlation likewise is not a part of the smoking lesion. Perfect correlation of past trials is an aspect of the Newcomb problem, but it is not really qualitatively different from a merely high correlation, and it does not imply that “you may be certain you will get the million if you take one box, and certain that you will not, if you take both”, just as flipping a coin six times in a row and getting heads each time does not imply that every future flip of that coin will come up heads. I did consider perfect correlation of past trials in the Newcomb problem, because it is built into Nozick’s statement of the problem. And perfect correlation of past trials in the smoking lesion, while not part of the smoking lesion as originally stated, does not change my decision to smoke.

I was not fighting the hypothetical when I stated that omniscience is not part of Newcomb – I merely pointed out that you changed the hypothetical; a Newcomb with an omniscient Omega is a different problem than the one proposed by Nozick. I am sticking with Nozick’s and Egan’s hypotheticals.

It is true that I did not address Yvain’s predestination example. I did not find it to be relevant because Calvinist predestination involves actual predeterminism and omniscience, neither of which is anywhere suggested by Nozick. In short, Yvain has invented a new, different hypothetical; if we can’t agree on Newcomb, I don’t see how adding another hypothetical into the mix helps.

I have stated my position with the most succinct example that I can think of, and you have not addressed that example. The example was:

Suppose that 90% of people with the lesion get cancer, and 99% of the people without the lesion do not get cancer.

Suppose that you have the lesion. In this case the probability that you will get cancer is .9, independent of whether or not you smoke.

Now, suppose that you do not have the lesion. In this case the probability that you will get cancer is .01, independent of whether or not you smoke.

You clearly either have the lesion or do not have the lesion. That was determined long before you made a choice about smoking, and your choice to or not to smoke does not change whether or not you have the lesion.

So, since the probability that a person with the lesion gets cancer is unaffected by his/her choice to smoke (it is .9), and the probability that a person without the lesion gets cancer is likewise unaffected by his/her choice to smoke (it is .01), if you want to smoke you ought to go ahead and smoke; it isn’t going to affect the likelihood of your getting cancer.
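A minimal sketch of the causal structure this example stipulates (my own illustration, with hypothetical function and variable names): cancer probability is a function of lesion status alone, so holding lesion status fixed, the choice to smoke leaves P(cancer) untouched.

```python
# Stipulated in the example above: cancer depends only on the lesion, not on smoking.
P_CANCER_GIVEN_LESION = {True: 0.90, False: 0.01}

def p_cancer(has_lesion: bool, smokes: bool) -> float:
    # `smokes` is deliberately ignored: in this model the choice has no
    # causal pathway to cancer.
    return P_CANCER_GIVEN_LESION[has_lesion]

for has_lesion in (True, False):
    for smokes in (True, False):
        print(f"lesion={has_lesion}, smokes={smokes}: P(cancer) = {p_cancer(has_lesion, smokes):.2f}")
```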

A similar example can be made for two-boxing:

  • You are either a person whom Omega thinks will two-box or you are not. Based on Omega’s assessment it either will or will not place $1M in box two.

  • Only after Omega has done this will it make its offer to you.

  • Your choice to one-box or two-box may change your assessment as to whether Omega has placed $1M in box two, but it does not change whether Omega actually has placed $1M in box two.

  • If Omega placed $1M in box two, your expected utility (measured in $) is: $1M if you one-box, $1.001M if you two-box

  • If Omega did not place $1M in box two, your expected utility is: $0 if you one-box, $1K if you two-box

  • Your choice to one-box vs. two-box does not change whether Omega did or did not put $1M in box two; Omega had already done that before you ever made your choice.

  • Therefore, since your expected utility is higher when you two-box regardless of what Omega did, you should two-box.
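The bullet list above is a dominance argument over a 2x2 payoff table; a minimal sketch (my own rendering, not part of Nozick's text) makes the row-by-row comparison explicit.

```python
# Payoff table from the bullets above, in dollars.
payoff = {
    ("million present", "one-box"): 1_000_000,
    ("million present", "two-box"): 1_001_000,
    ("million absent", "one-box"): 0,
    ("million absent", "two-box"): 1_000,
}

# Dominance check: whichever state Omega has already fixed, two-boxing pays $1,000 more.
for state in ("million present", "million absent"):
    one, two = payoff[(state, "one-box")], payoff[(state, "two-box")]
    print(f"{state}: one-box ${one:,} vs two-box ${two:,} (difference ${two - one:,})")
```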

I don’t know that I can explain my position any more clearly than that; I suspect that if we are still in disagreement, we should simply agree to disagree (regardless of what Aumann might say about that :) ). After all, Nozick stated in his paper that it is quite difficult to obtain consensus on this problem:

To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.

Also, I do agree with your original point – Newcomb and the smoking lesion are equivalent, in that the reasoning that would lead one to one-box would likewise lead one to not smoke, and the reasoning that would lead one to two-box would lead one to smoke.

Comment author: entirelyuseless 16 March 2017 02:29:21PM *  0 points [-]

I did not disagree that you can talk about "actual probabilities" in the way that you did. I said they are irrelevant to decision making, and I explained that using the example of determinism. This is also why I did not comment on your detailed scenario: it uses the "actual probabilities" in a way that is not relevant to decision making.

Let me look at that in detail. In your scenario, 90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer.

Let's suppose that 50% of the people have the lesion and 50% do not, just to make the situation specific.

The probability that a random person has the lesion (and it doesn't matter whether you call this an actual probability or a subjective assessment -- it is just the proportion of people) will be 50%, and the probability of not having the lesion will be 50%.

Your argument that you should smoke if you want does not consider the correlation between having the lesion and smoking, of course because you consider this correlation irrelevant. But it is not irrelevant, and we can see that by looking at what happens when we take it into account.

Suppose 95% of people with the lesion choose to smoke, and 5% of the people with the lesion choose not to smoke. Similarly, suppose 95% of the people without the lesion choose not to smoke, and 5% of the people without the lesion choose to smoke.

Given these stipulations it follows that 50% of the people smoke, and 50% do not.

For a random person, the total probability of getting cancer will be 45.5%. This is an "actual" probability: 45.5% of the total people will actually get cancer. This is just as actual as the probability of 90% that a person with the lesion gets it. If you pick a random person with the lesion, 90% of such random choices will get cancer; and if you pick a random person from the whole group, 45.5% of such random choices will get cancer.

Before you choose, therefore, your estimated probability of getting cancer will be 45.5%. You seem to admit that you could have this estimated probability, but want to say that the "real" probability is either 90% or 1%, depending on whether you have the lesion. But in fact all the probabilities are equally real, depending on your selection process.

What you ignored is the probability that you will get cancer given that you smoke. From the above stipulations, it follows of necessity that 85.55% of people choosing to smoke will get cancer, and 5.45% of people choosing not to smoke will get it. You say that this changes your estimate but not the "real probability." But this probability is quite real: it is just as true that 85.55% of smokers will get cancer, as that 90% of people with the lesion will.
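The figures in the last few paragraphs follow directly from the stipulations; here is a minimal check of the arithmetic (a sketch of my own, with illustrative variable names).

```python
# Stipulations: 50% have the lesion; 90%/1% cancer rates; 95%/5% smoking rates.
p_lesion = 0.5
p_cancer_lesion, p_cancer_no_lesion = 0.90, 0.01
p_smoke_lesion, p_smoke_no_lesion = 0.95, 0.05

# Overall cancer rate for a randomly selected person.
p_cancer = p_lesion * p_cancer_lesion + (1 - p_lesion) * p_cancer_no_lesion
print(f"P(cancer)          = {p_cancer:.3%}")  # 45.500%

# Cancer rate conditional on the person's choice, using the smoking/lesion correlation.
p_smoke = p_lesion * p_smoke_lesion + (1 - p_lesion) * p_smoke_no_lesion
p_cancer_and_smoke = (p_lesion * p_smoke_lesion * p_cancer_lesion
                      + (1 - p_lesion) * p_smoke_no_lesion * p_cancer_no_lesion)
p_cancer_and_no_smoke = (p_lesion * (1 - p_smoke_lesion) * p_cancer_lesion
                         + (1 - p_lesion) * (1 - p_smoke_no_lesion) * p_cancer_no_lesion)
print(f"P(cancer | smoke)  = {p_cancer_and_smoke / p_smoke:.3%}")           # 85.550%
print(f"P(cancer | ~smoke) = {p_cancer_and_no_smoke / (1 - p_smoke):.3%}")  # 5.450%
```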

This is the situation I originally described, except not as extreme. If you smoke, you will be fairly sure (and with a calibrated judgement of probability) that you will get cancer, and if you do not, you will be fairly sure that you will not get cancer.

Let's look at this in terms of calculating an expected utility. You suggest such a calculation in the Newcomb case, where you get more expected utility by taking two boxes, whether or not Omega put the million there. In the same way, in the smoking case, you think you will get more utility by smoking, whether or not you have the lesion. But notice that you are calculating two different values: one for the case where you have the lesion (or the million is there), and one for the case where you don't. In real life you have to act without knowing whether the lesion or the million is there. So you have to calculate an overall expected utility.

What would that be? It is easy to see that it is impossible to calculate an unbiased estimate of your expected utility which says overall that you will get more by taking two boxes or by smoking. This is absolutely necessary, because on average the people who smoke get less utility, and the people who take two boxes also get less utility, if there is a significant correlation between Omega's guess and people's actions.

Let's try it anyway. Let's say the overall odds of the million being there are 50/50, just like we had 50/50 odds of the lesion being there. According to you, your expected utility from taking two boxes will be $501,000, calculating your expectation from the "real" probabilities. And your expected utility from taking one box will be $500,000.

But it is easy to see that it is mathematically impossible for those to be unbiased estimates if there is some correlation between the person's choice and Omega's guess. E.g. if 90% of the people who are guessed to be one-boxers also take just one box, and 90% of the people who are guessed to be two-boxers also take two boxes, then the average utility from taking one box will be $900,000, and the average utility from taking both will be $101,000. These are "actual" utilities, that is, they are the averages that those people really get. This proves definitively that any estimate which says that you will get more by taking two is necessarily a biased estimate.
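A minimal sketch of the two expected-utility calculations in the last two paragraphs (my own rendering; the 50/50 prediction split and the 90% correlation are the figures stipulated above).

```python
p_million = 0.5  # overall frequency with which Omega put the million in box two

# Expectations that hold the 50% figure fixed, ignoring any correlation with the choice.
eu_two_fixed = p_million * 1_001_000 + (1 - p_million) * 1_000  # 501,000
eu_one_fixed = p_million * 1_000_000                            # 500,000

# Correlation-adjusted expectations: 90% of predicted one-boxers take one box,
# 90% of predicted two-boxers take both, with a 50/50 split of predictions.
p_million_given_one = (0.5 * 0.9) / (0.5 * 0.9 + 0.5 * 0.1)  # = 0.9
p_million_given_two = (0.5 * 0.1) / (0.5 * 0.1 + 0.5 * 0.9)  # = 0.1
eu_one = p_million_given_one * 1_000_000
eu_two = p_million_given_two * 1_001_000 + (1 - p_million_given_two) * 1_000

print(f"fixed estimate:       one-box ${eu_one_fixed:,.0f}, two-box ${eu_two_fixed:,.0f}")
print(f"correlation-adjusted: one-box ${eu_one:,.0f}, two-box ${eu_two:,.0f}")
```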

But, you will say, your argument shows that you absolutely must get more by taking both. So what went wrong? What went wrong is that your argument implicitly assumed that there is no correlation between your choice and what is in the box, or, in the smoking case, whether you have the lesion. But this is false by the statement of the problem.

It is simply a mathematical necessity from the statement of the problem that your expected utility will be higher by one boxing and by not smoking (given a high enough discrepancy in utilities and high enough correlations). This is why I said that correlation matters, not causation.

Comment author: g_pepper 14 March 2017 11:30:22PM 0 points [-]

You agreed that you are saying that the three estimated chances are the same. That is not consistent with admitting that your choices are evidence (at least for you) one way or another -- if they are evidence, then your estimate should change depending on which choice you make.

Mea culpa, I was inconsistent. When I was thinking of Newcomb, my rationale was that I already know myself well enough to know that I am a “two-boxing” kind of person, so actually deciding to two-box does not really provide (me) any additional evidence. I could have applied the same logic in the smoking lesion – surely the fact that I want to smoke is already strong evidence that I have the lesion, and actually choosing to smoke does not provide additional evidence.

In fact, in both cases, actually choosing to “one-box” or “two-box”, or to smoke or not to smoke, does provide evidence to an outside observer (hence my earlier quip that choosing to smoke will cause your insurance rates to increase), and may provide new evidence to the one making the choice, depending on his/her introspective awareness (if he/she is already very much in touch with his/her thoughts and preferences, then actually making the choice may not provide him/her much more in the way of evidence).

However, whether or not my choice provides me evidence is a red herring. It seems to me that you are confusing the idea of increasing or decreasing your confidence that a thing will (or did) happen with the idea of increasing or decreasing the probability that it actually will (or did) happen. These two things are not the same, and in the case of the smoking lesion hypothetical, you should not smoke only if smoking increases the probability of actually getting cancer – merely increasing your assessment of the likelihood that you will get cancer is not a good reason to not smoke.

Similarly, even if choosing to open both boxes increases your expectation that Omega put nothing in the second box, the choice did not change whether or not Omega actually did put nothing in the second box.

We don't have to make Omega omniscient for there to be some correlation. Suppose that 85% of the people who chose one box found the million, but because many people took both, the total percentage was 40%. Are you arguing in favor of ignoring the correlation, or not?

Yes, I am arguing in favor of ignoring the correlation. Correlation is not causation. Omega’s choice has already been made – nothing that I do now will change what’s in the second box.

Comment author: entirelyuseless 15 March 2017 04:44:58AM 0 points [-]

It seems to me that you are confusing the idea of increasing or decreasing your confidence that a thing will (or did) happen with the idea of increasing or decreasing the probability that it actually will (or did) happen.

While I do think those are the same thing as long as your confidence is reasonable, I am not confusing anything with anything else, and I understand what you are trying to say. It just is not relevant to decision making, where what is relevant is your assessment of things.

In other words, from my point of view, "the probability a thing will happen" just is your reasonable assessment, not an objective feature of the world.

Suppose we found out that determinism was true: given the initial conditions of the universe, one particular result necessarily follows with 100% probability. If we consider "the probability a thing will happen" as an objective feature of the world, then in this situation, everything has a probability of 100% or 0%, as an objective feature. Consequently, by your method of decision making, it does not matter what you do, ever; because you never change the probability that a thing will actually happen, but only your assessment of the probability.

Obviously, though, if we found out that determinism was true, we would not suddenly stop caring about our decisions; we would keep making them in the same way as before. And what information would we be using? We would obviously be using our assessment of the probability that a result would follow, given a certain choice. We could not be using the objective probabilities since we could not change them by any decision.

So if we would use that method if we found out that determinism was true, we should use that method now.

Again, every time I brought up the idea of a perfect correlation, you simply fought the hypothetical instead of addressing it. And this is because in the situation of a perfect correlation, it is obvious that what matters is the correlation and not causation: in Scott Alexander's case, if you know that living a sinful life has 100% correlation with going to hell, that is absolutely a good reason to avoid living a sinful life, even though it does not change the objective probability that you will go to hell (which would be either 100% or 0%).

When you choose an action, it tells you a fact about the world: "I was a person who would make choice A" or "I was a person who would make choice B." And those are different facts, so you have different information in those cases. Consider the Newcomb case. You take two boxes, and you find out that you are a person who would take two boxes (or, if you already think you would, you become more sure of this). If you took only one box, you would instead find out that you were a person who would take one box. In the case of perfect correlation, it would be far better to find out you were a person who would take one than a person who would take two; and likewise, even if the correlation is merely very high, it would be better to find out that you are a person who would take one.

You answer, in effect, that you cannot make yourself into a person who would take one or two, but this is already a fixed fact about the world. I agree. But you already know for certain that if you take one, you will learn that you are a person who would take one, and if you take both, you will learn that you are a person who would take both. You will not make yourself into that kind of person, but you will learn it nonetheless. And you already know which is better to learn, and therefore which you should choose.

The same is true about the lesion: it is better to learn that you do not have the lesion, than that you do, or even that you most likely do not have it, rather than learning that you probably have it.

Comment author: g_pepper 14 March 2017 04:17:59AM 0 points [-]

I am saying that correlation is what matters, and causation is not.

I do not understand why you think that (I suspect the point of this thread is to explain why, but in spite of that, I do not understand).

You seem to me to be suggesting that all three estimated chances should be the same.

Yes, that is what I am saying.

Note that if your estimates are different, you may be certain you will get the million if you take one box, and certain that you will not, if you take both.

No. Nowhere in Nozick’s original statement of Newcomb’s problem is there any indication that Omega is omniscient. All Nozick states regarding Omega’s prescience is that you have “enormous confidence” in the being’s power to predict, and that the being has a very good track record of making predictions in the past. Over the years, the problem has morphed in the heads of at least some LWers such that Omega has something resembling divine foreknowledge; I suspect that this is the reason at least some LWers opt to “one-box”.

But -- if the lesion is likely to cause you to engage in that kind of thinking and go through with it, then choosing to smoke should make your estimate of the chance that you have the lesion higher, because it is likely that the reason you are being convinced to smoke is that you have the lesion.

Yes, I agree with that – choosing to smoke provides evidence that you have the lesion.

And in that case, if the chance increases enough, you should not smoke.

No. The fact that you have chosen to smoke may provide evidence that you have the lesion, but it does not increase the chances that you will get cancer. Think of this example:

Suppose that 90% of people with the lesion get cancer, and 99% of the people without the lesion do not get cancer.

Suppose that you have the lesion. In this case the probability that you will get cancer is .9, independent of whether or not you smoke.

Now, suppose that you do not have the lesion. In this case the probability that you will get cancer is .01, independent of whether or not you smoke.

You clearly either have the lesion or do not have the lesion. That was determined long before you made a choice about smoking, and your choice to or not to smoke does not change whether or not you have the lesion.

So, since the probability that a person with the lesion gets cancer is unaffected by his/her choice to smoke (it is .9), and the probability that a person without the lesion gets cancer is likewise unaffected by his/her choice to smoke (it is .01), if you want to smoke you ought to go ahead and smoke; it isn’t going to affect the likelihood of your getting cancer (although your health insurance rates will likely go up, since smoking provides evidence that you have the lesion and will likely get cancer).

Comment author: entirelyuseless 14 March 2017 05:02:04AM *  0 points [-]

You agreed that you are saying that the three estimated chances are the same. That is not consistent with admitting that your choices are evidence (at least for you) one way or another -- if they are evidence, then your estimate should change depending on which choice you make.

Look at Newcomb in the way you wanted to look at the lesion. Either the million is in the box or it is not.

Let's suppose that you look at the past cases and it was there some percentage of the time. We can assume 40% for concreteness. Suppose you therefore estimate that there is a 40% chance that the million is there.

Suppose you decide to take both. What is your estimate, before you check, that the million is there?

Again, suppose you decide to take one. What is your estimate, before you check, that the million is there?

You seem to me to be saying that the estimate should remain fixed at 40%. I agree that if it does, you should take both. But this is not consistent with saying that your choice (in the smoking case) provides evidence you have the lesion; this would be equivalent to your choice to take one box being evidence that the million is there.

We don't have to make Omega omniscient for there to be some correlation. Suppose that 85% of the people who chose one box found the million, but because many people took both, the total percentage was 40%. Are you arguing in favor of ignoring the correlation, or not? After you decide to take the one box, and before you open it, do you think the chance the million is there is 40%, or 85% or something similar?

I am saying that a reasonable person would change his estimate to reflect more or less the previous correlation. And if you do, when I said "you may be certain," I was simply taking things to an extreme. We do not need that extreme. If you think the million is more likely to be there, after the choice to take the one, than after the choice to take both, and if this thinking is reasonable, then you should take one and not both.

Comment author: g_pepper 12 March 2017 07:30:40AM *  0 points [-]

However, if we speak of good and bad in terms of good and bad results, all three positions (two-boxing, smoking, and convincing yourself of something apart from evidence) are bad in that they have bad results (no million, cancer, and potentially believing something false).

Not really - one of the main points of the smoking lesion is that smoking doesn't cause cancer. It seems to me that to choose not to smoke is to confuse correlation with causation - smoking and cancer are, in the hypothetical world of the smoking lesion, highly correlated but neither causes the other. To think that opting not to smoke has a health benefit in the world of the smoking lesion is to engage in magical thinking.

Similarly, Newcomb may be an interesting way of thinking about precommitments and decision theories for AGIs, but the fact remains that Omega has made its choice already - your choice now doesn't affect what's in the box. Nozick's statement of Newcomb is not asking if you want to make some sort of precommitment - it is asking you what you want to do after Omega has done whatever Omega has done and has left the scene. Nothing you do at that point can affect the contents of the boxes.

And, willfully choosing to convince yourself that you have free will and then succeeding in doing so cannot possibly lead one astray for the obvious reason that if you don't have free will, you can't willfully choose to do anything. If you willfully choose to convince yourself that you have free will then you have free will.

In other words, once the correlation is strong enough, the fact that you know for sure that something bad will happen if you make that choice, is enough reason not to make the choice, despite your reasoning about causality.

In the original statement of the smoking lesion, we don't know for sure that something bad will happen if we smoke. It states that smoking is "strongly correlated" with lung cancer, not that the correlation is 100%. And even if the past correlation between A and B was 100%, there is no reason to assume that the future correlation will be 100%, particularly if A does not cause B.

And once you realize that this is true, you will realize that it can be true even when the correlation is less than 100%, although the effect size will be smaller.

The only reason a high correlation is meaningful input into a decision is because it suggests a possible causal relationship. Once you understand the causal factors, correlation no longer provides any additional relevant information.

Comment author: entirelyuseless 12 March 2017 03:37:35PM *  0 points [-]

I am not confusing correlation and causation. I am saying that correlation is what matters, and causation is not.

To think that opting not to smoke has a health benefit in the world of the smoking lesion is to engage in magical thinking.

It would be, if you thought that not smoking caused you not to get cancer. But that is not what I think. I think that you will be less likely to get cancer, via correlation. And I think being less likely to get cancer is better than being more likely to get it.

Nozick's statement of Newcomb is not asking if you want to make some sort of precommitment - it is asking you what you want to do after Omega has done whatever Omega has done and has left the scene. Nothing you do at that point can affect the contents of the boxes.

I agree, and the people here arguing that you have a reason to make a precommitment now to one-box in Newcomb are basically distracting everyone from the real issue. Take the situation where you do not have a precommitment. You never even thought about the problem before, and it comes on you by surprise.

You stand there in front of the boxes. What is your estimate of the chance that the million is there?

Now think to yourself: suppose I choose to take both. Before I open them, what will be my estimate of the chance the million is there?

And again think to yourself: suppose I choose to take only one. Before I open them, what will be my estimate of the chance the million is there?

You seem to me to be suggesting that all three estimated chances should be the same. And I am not telling you what to think about this. If your estimates are the same, fine. And in that case, I entirely agree that it is better to take both boxes.

I say it is better to take one box if and only if your estimated chances are different for those cases, and your expected utility based on the estimates will be greater using the estimate that comes after choosing to take one box.

Do you disagree with that? That is, if we assume for the sake of argument that your estimates are different, do you still think you should always take both? Note that if your estimates are different, you may be certain you will get the million if you take one box, and certain that you will not, if you take both.

This is why I am saying that correlation matters, not causation.

The only reason a high correlation is meaningful input into a decision is because it suggests a possible causal relationship. Once you understand the causal factors, correlation no longer provides any additional relevant information.

This is partly true, but what you don't seem to realize is that the direction of the causal relationship does not matter. That is, the reason you are saying this is that, e.g., if you think that a smoking lesion causes cancer but has no influence on your choice, then choosing to smoke will not make your estimate of the chances you will get cancer any higher than if you choose not to smoke. And in that case, your estimates do not differ. So I agree you should smoke in such a case. But -- if the lesion is likely to cause you to engage in that kind of thinking and go through with it, then choosing to smoke should make your estimate of the chance that you have the lesion higher, because it is likely that the reason you are being convinced to smoke is that you have the lesion. And in that case, if the chance increases enough, you should not smoke.

Comment author: entirelyuseless 10 March 2017 03:36:02PM 1 point [-]

"Why not just call yourself a rationalist?"

As you say, tribalism and rationality do not mix. So saying that they are a rationalist might be helpful for some people, but not for anyone who associates with people who use that name as a tribal label.

Comment author: g_pepper 09 March 2017 05:24:19AM *  0 points [-]

Got it.

I am, per your criteria, consistent. Per Newcomb, I've always been a two-boxer. One of my thoughts about Newcomb was nicely expressed in a recent posting by Lumifer.

Per the smoking lesion - as a non-smoker with no desire to smoke and a belief that smoking causes cancer, I've never gotten past fighting the hypothetical. However, I just now made the effort and realized that within the hypothetical world of the smoking lesion, I would choose to smoke.

And, I think the argument in favor of trying to convince yourself that you have free will has merit. I do have a slight concern about the word "libertarian" in your formulation of the argument, which is why I omitted it or included it parenthetically. My concern is that under a compatibilist conception of free will, it would be possible to willfully convince yourself of something even if determinism is true. But, if you remove the word "libertarian", it seems reasonable that a person interested in arriving at truth should attempt to convince himself/herself that he/she has free will.

ETA:

In the parent post, you said:

That depends. If you think that you should take both boxes in Newcomb, and that you should smoke in the Smoking Lesion, then you are consistent in also thinking that you should try to convince yourself that you have free will. But if you disagree with one of them but not all, your position is inconsistent.

But here you called the free will argument bad.

Which is it? Is the argument bad or is it only inconsistent with one-boxing and not smoking?

It seems to me that even if we accept your argument that, to be consistent, one must either two-box, smoke, and try to convince oneself that one has free will, or one-box, not smoke, and not try to convince oneself that one has free will, you still have not made the case that the arguments in favor of two-boxing, smoking, or trying to convince oneself that one has free will are bad arguments.

Comment author: entirelyuseless 09 March 2017 02:28:28PM 0 points [-]

Intellectually I have more respect for someone who holds a consistent position on these things than someone who holds an inconsistent position. The original point was a bit ad hominem, as most people on LW were maintaining Eliezer's inconsistent position (one-boxing and smoking).

However, if we speak of good and bad in terms of good and bad results, all three positions (two-boxing, smoking, and convincing yourself of something apart from evidence) are bad in that they have bad results (no million, cancer, and potentially believing something false). In that particular sense you would be better off with an inconsistent position, since you would get good results in one or more of the cases.

I thought I did, sort of, make the case for that in the post on the Alien Implant and the comments on that. It's true that it's not much of a case, since it is basically just saying, "obviously this is good and that's bad," but that's how it is. Here is Scott Alexander with a comment making the case:

How can I make this clearer...okay. Let's say there have been a trillion Calvinists throughout history. They've all been rationalists and they've all engaged in this same argument. Some of them have been pro-sin for the same reasons you are, others have been pro-virtue for the same reasons I am. Some on each side have changed their minds after having listened to the arguments. And of all of these trillion Calvinists, every single one who after all the arguments decides to live a life of virtue - has gone to Heaven. And every single one who, after all the arguments, decides to live a life of sin - has gone to Hell.

To say that you have no reason to change your mind here seems to be suggesting that there's a pretty good chance you will be the exception to a rule that has held 100% of the time in previous cases: the sinful Calvinist who goes to Heaven, or the virtuous Calvinist who goes to Hell. If this never worked for a trillion people in your exact position, why do you think it will work for you now?

In other words, once the correlation is strong enough, the fact that you know for sure that something bad will happen if you make that choice, is enough reason not to make the choice, despite your reasoning about causality.

And once you realize that this is true, you will realize that it can be true even when the correlation is less than 100%, although the effect size will be smaller.

Comment author: g_pepper 08 March 2017 04:18:01PM 0 points [-]

OK. Sorry to have misunderstood.

So, I don't see the flaw in the argument. Clearly the argument doesn't really demonstrate that we have free will, but I don't think that it is intended to do that. It does seem to make the case that if you want to be right about free will, you should try to convince yourself that you have free will.

What am I missing?

Comment author: entirelyuseless 09 March 2017 02:27:59AM 0 points [-]

That depends. If you think that you should take both boxes in Newcomb, and that you should smoke in the Smoking Lesion, then you are consistent in also thinking that you should try to convince yourself that you have free will. But if you disagree with one of them but not all, your position is inconsistent.

I disagree with all three, and the argument is implied in my other post about Newcomb and the lesion. In particular, in the case of convincing yourself, the fact that it would be bad to believe something false is a reason not to convince yourself (unless the evidence supports it), even if the false belief would merely be something that happens, just as cancer is a reason not to smoke even though it too would merely be something that happens.

Comment author: g_pepper 08 March 2017 02:34:17PM 0 points [-]

Sure, but the question is why you should try to convince yourself of libertarian free will, instead of trying to convince yourself of the opposite.

It seems like you answered the question yourself when you said:

If you succeed in the first case, it shows you are right, but if you succeed in the second, it shows you are wrong.

Surely it is better to be right than wrong, right?

Comment author: entirelyuseless 08 March 2017 03:39:10PM 0 points [-]

Yes. I was trying to explain how the argument is supposed to work.
