buybuydandavis comments on Open thread, 25-31 August 2014 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The Smoking Lesion problem is:
"Susan is debating whether or not to smoke. She knows that smoking is strongly correlated with lung cancer, but only because there is a common cause – a condition that tends to cause both smoking and cancer. Once we fix the presence or absence of this condition, there is no additional correlation between smoking and cancer. Susan prefers smoking without cancer to not smoking without cancer, and prefers smoking with cancer to not smoking with cancer. Should Susan smoke? It seems clear that she should."
But now assume that Susan suffers from painful anxiety proportional to her Bayesian estimate of the probability of her getting lung cancer. This anxiety plays a bigger role in her utility function than any enjoyment she might get from smoking. Should she still smoke?
Susan will have less anxiety if she doesn't smoke, so doesn't this mean she shouldn't smoke? But when Susan is making the decision about smoking, couldn't she say to herself, "Whether I smoke will have no effect on the probability of my getting lung cancer, and since my brain makes a rational estimate of the probability of my getting lung cancer when deciding how much anxiety to dump on me, whether I smoke shouldn't impact my level of anxiety, so I should smoke since I enjoy it"? Clearly, if Susan flipped a coin to decide whether to smoke, her anxiety would be the same regardless of how the coin landed. Also, is this functionally the same as Newcomb's problem?
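The tension above can be made concrete with a small sketch. All the numbers below are assumptions invented for illustration (the thought experiment specifies none): a prior on the lesion, how strongly the lesion predicts smoking and cancer, and an anxiety weight that dominates the enjoyment of smoking. The two utility functions differ only in what Susan's anxiety tracks: her evidential estimate P(cancer | my smoking status), versus the lesion-based estimate that her smoking cannot causally change.

```python
# Hypothetical numbers, assumed purely for illustration:
p_lesion = 0.5               # prior probability of the common-cause condition
p_smoke_given_lesion = 0.9   # lesion makes smoking likely
p_smoke_given_no_lesion = 0.1
p_cancer_given_lesion = 0.8
p_cancer_given_no_lesion = 0.05
u_smoke = 1.0                # enjoyment from smoking
anxiety_weight = 10.0        # anxiety dominates the utility function

def p_cancer_given_smoking_status(smokes: bool) -> float:
    """Evidential estimate: treat one's own smoking as evidence about the lesion."""
    p_s_l = p_smoke_given_lesion if smokes else 1 - p_smoke_given_lesion
    p_s_nl = p_smoke_given_no_lesion if smokes else 1 - p_smoke_given_no_lesion
    # Bayes' rule for P(lesion | smoking status)
    p_lesion_given_s = (p_s_l * p_lesion) / (p_s_l * p_lesion + p_s_nl * (1 - p_lesion))
    return (p_lesion_given_s * p_cancer_given_lesion
            + (1 - p_lesion_given_s) * p_cancer_given_no_lesion)

def utility_evidential(smokes: bool) -> float:
    # Anxiety tracks the evidential estimate, so choosing to smoke raises it.
    anxiety = anxiety_weight * p_cancer_given_smoking_status(smokes)
    return (u_smoke if smokes else 0.0) - anxiety

def utility_causal(smokes: bool) -> float:
    # Anxiety tracks only the lesion-based estimate, which smoking can't change.
    baseline = (p_lesion * p_cancer_given_lesion
                + (1 - p_lesion) * p_cancer_given_no_lesion)
    anxiety = anxiety_weight * baseline
    return (u_smoke if smokes else 0.0) - anxiety
```

With these made-up numbers the evidential calculation says not to smoke (smoking is evidence of the lesion, which raises her estimate and hence her anxiety), while the causal calculation says to smoke (her anxiety is the same either way, so the enjoyment tips the balance). Which utility function is the right model is exactly the question the comment poses.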
Generally, this is false.
If I took the time to write a comment laying out a decision-theoretic problem and received a response like this (and saw it so upvoted), I would be pretty annoyed. I would suspect that maybe (though not definitely) the respondent was fighting the hypothetical, and that their flippant remark might change the tone of the conversation enough to discourage others from engaging with my query.
I've been frustrated enough times by people nitpicking or derailing my attempts to introduce a hypothetical (even if only with not-supposed-to-be-derailing throwaway jokes) that by this point I'd guess that in most cases it's actually rude to respond like this, unless you're really, really sure that your nitpick of a premise significantly affects the hypothetical, or that you've got a really good joke. In Should World, people would evaluate the seriousness of a thought experiment on its merits and not by the immediate non-serious responses to it, but experience tells me that's not a property of the world we actually live in.
If I'm interpreting your comment correctly, you're either stating that people's brains don't actually make rational probability estimates (which everybody on friggin' LessWrong already knows!), or denying a very specific, intentionally artificial statement about the relation between credences and anxiety that was constructed for a decision-theory thought experiment. In either case, I'm not sure what your comment contributes.
Am I missing something that you and the upvoters saw in your comment?
Edit: Okay, it occurs to me that maybe you were making an extremely tongue-in-cheek, understated rejection of the premise for comic effect: 'Haha, the thought experiments we use are far divorced from the actual vagaries of human thought.' The fact that I found this so hard to get suggests to me that others probably didn't get the intended interpretation of your comment either, which still leaves potential for it to have the negative effects I mentioned above. (E.g. maybe someone got your joke immediately, had a hearty laugh, and upvoted, but then the other upvoters thought they were upvoting the literal interpretation of your post.)