MixedNuts comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (154)
I was pretty happy before LW, until I learnt about utility maximization. It tells me that I ought to do things I don't want to do on anything other than a highly abstract intellectual level. I don't even get the smallest bit of satisfaction out of it, just depression.
Saving galactic civilizations from superhuman monsters burning the cosmic commons, walking into death camps so as to reduce the likelihood of being blackmailed, discounting people by the length of their address in the multiverse... taking all that seriously and keeping one's sanity, that's difficult for some people.
What LW means by 'rationality' is winning in a hard-to-grasp sense that is often completely detached from the happiness and desires of the individual.
We do in fact want to save worlds we can't begin to fathom from dangers we can't begin to fathom even if it makes us depressed or dead... but if you don't get any satisfaction from saving the world, you might have a problem with selfishness.
That's not what I meant. What I meant is the general problem you run into when you take this stuff to its extreme. You end up saving hypothetical beings with a very low probability. That means that you might very well save no being at all, if your model was bogus. I am aware that the number of beings saved often outweighs the low probability... but I am not particularly confident in this line of reasoning, i.e. in the meta-level of thinking about how to maximize good deeds. That leads to all kinds of crazy-seeming stuff.
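The arithmetic behind that worry can be made explicit. A minimal sketch, with entirely made-up numbers: if you take the model at face value, a tiny success probability times an astronomical payoff still yields a huge expected value; but once you discount by the chance that the model itself is bogus, the expected value shrinks accordingly, and the whole calculation hinges on numbers nobody can estimate well.

```python
def expected_beings_saved(p_model_correct, p_success_given_model, beings_at_stake):
    """Expected number of beings saved, discounted by the chance the model is bogus.

    All three parameters are hypothetical illustration values, not estimates
    anyone has actually defended.
    """
    return p_model_correct * p_success_given_model * beings_at_stake

# Taking the model at face value: tiny probability, astronomical payoff.
naive = expected_beings_saved(1.0, 1e-9, 1e18)    # 1e9 beings in expectation

# Heavy model uncertainty: only a 0.1% chance the model is right at all.
hedged = expected_beings_saved(1e-3, 1e-9, 1e18)  # 1e6 beings in expectation
```

Either way the expected value stays large, which is exactly the Pascalian feature being complained about: the conclusion is driven almost entirely by unverifiable inputs.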
If it does, something almost definitely went wrong. Biases crept in somewhere between the risk assessment, the outside view correction process, the policy-proposing process, the policy-analyzing process, the policy outside view correction process, the ethical injunction check, and the "(anonymously) ask a few smart people whether some part of this is crazy" step. I'm not just adding unnatural steps; each of those should be separate, and each of those is a place where error can throw everything off. Overconfidence plus conjunction fallacy equals crazy-seeming stuff. And this coming from the guy who is all about taking ideas seriously.