It's the file-drawer problem in comic form.
I like this post, but I think it suffers from two problems that weaken the writing:
Many times (starting with the title) the phrasing chosen suggests that you are attacking the basic decision-theoretic principle that one should take the action with the highest expected utility (or give to the charity with the highest expected marginal value resulting from the donation). But you're not attacking this; you're attacking a way to incorrectly calculate expected utility by using only information that can be easily quantified and leaving out information that's harder to quantify. This is certainly a correct point, and a good one to make, but it's not the point that the title suggests, and many commenters have already been confused by this.
Pascal's mugging should be left out entirely. For one thing, it's a deliberately counterintuitive situation, so your point that we should trust our intuitions (as manifestations of our unquantifiable prior) doesn't obviously apply here. Furthermore, it's clear that the outcome of not giving the mugger money is not normally (or log-normally) distributed, with a decent chance of producing any value between 0 and 2X. In fact, it's a bimodal distribution with almost everything weighted at 0 and the rest weighted at X, with (even relative to the small amount at X) nothing at 2X or 1/2 X. This is also very unlike the outcome of donating to a charity, which I can believe is approximately log-normal. So all of the references to Pascal's mugging just confuse the main point.
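To make the shape of that claim concrete, here is a minimal sketch. All the numbers (the payoff X and the probability that the mugger is honest) are hypothetical stand-ins, not figures from the post; the point is only that the resulting distribution is a two-spike mixture, not a smooth normal or log-normal curve:

```python
import numpy as np

# Hypothetical parameters, chosen only for illustration.
X = 1000.0            # assumed payoff if the mugger is telling the truth
p_honest = 1e-6       # assumed tiny probability that the mugger is honest

rng = np.random.default_rng(0)
# Each draw is either 0 (mugger lying) or X (mugger honest):
# a bimodal spike with no mass at 2X or X/2.
outcomes = np.where(rng.random(1_000_000) < p_honest, X, 0.0)

# Every observed outcome is one of exactly two values.
unique_values = set(np.unique(outcomes))
```

Contrast this with a log-normal draw, which spreads mass smoothly over a wide range of positive values rather than piling it onto two points.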
Nevertheless, the main point is a good one, and I have voted this post up for it.
This is also very unlike the outcome of donating to a charity, which I can believe is approximately log-normal.
This can't be right, because log-normal variables are never negative, and charitable interventions do backfire (e.g. Scared Straight, or any health-care program that promotes quackery over real treatment) a non-negligible percentage of the time.
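The positivity constraint is easy to check directly: a log-normal variable is the exponential of a normal variable, so every draw is strictly positive. A quick sketch (parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# Log-normal samples are exp(normal), hence always > 0 --
# such a model cannot represent an intervention that backfires
# (i.e., produces negative value).
samples = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
all_positive = bool(samples.min() > 0)
```

So a strictly log-normal model assigns literally zero probability to a charity doing net harm, which is the objection above.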
There are real life examples where reality has turned out to be the "least convenient of possible worlds". I have spent many hours arguing with people who insist that there are no significant gender differences (beyond the obvious), and are convinced that to assert otherwise is morally reprehensible.
They have spent so long arguing that such differences do not exist, and that this is the reason sexism is wrong, that their morality just can't cope with a world in which this turns out not to be true. There are many similar politically charged issues - Pinker discusses quite a few in The Blank Slate - where people aren't willing to listen to arguments about factual issues because they believe they have moral consequences.
The problem, of course - and I realise this is the main point of this post - is that if your morality is contingent on empirical issues where you might turn out to be wrong, you have to accept the consequences. If you believe that sexism is wrong because there are no heritable gender differences, you have to be willing to accept that if those differences do turn out to exist, you'll have to say sexism is OK.
This is probably a test you should apply to all of your moral beliefs - if it turns out that the factual claim on which I'm basing this belief is wrong, will I really be willing to change my mind?
That raises an interesting question: is it possible to base a moral code only on what's true in all possible worlds that contain me?
I think emotional nihilism is more like a utility function that's locally constant at zero. You have emotional investments, but they're options that are too far out of the money. (Worse is when your short puts and calls are at the money and your longs are out of it.)