MixedNuts comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

23 points | Post author: multifoliaterose | 14 June 2011 03:19AM




Comment author: XiXiDu 15 June 2011 10:46:53AM 4 points

I don't see how you can get people to stop talking about human utility functions unless you close LW off from newcomers.

I was pretty happy before LW, until I learnt about utility maximization. It tells me that I ought to do things I don't want to do on any level other than a highly abstract intellectual one. I don't even get the smallest bit of satisfaction out of it, just depression.

Saving galactic civilizations from superhuman monsters burning the cosmic commons, walking into death camps so as to reduce the likelihood of being blackmailed, discounting people by the length of their address in the multiverse... taking all that seriously while keeping one's sanity is difficult for some people.

What LW means by 'rationality' is to win in a hard-to-grasp sense that is often completely detached from the happiness and desires of the individual.

Comment author: MixedNuts 15 June 2011 03:39:38PM -2 points

We do in fact want to save worlds we can't begin to fathom from dangers we can't begin to fathom even if it makes us depressed or dead... but if you don't get any satisfaction from saving the world, you might have a problem with selfishness.

Comment author: XiXiDu 15 June 2011 04:12:08PM 0 points

...but if you don't get any satisfaction from saving the world, you might have a problem with selfishness.

That's not what I meant. What I meant is the general problem you run into when you take this stuff to its extreme. You end up saving hypothetical beings with a very low probability. That means you might very well save no being at all, if your model was bogus. I am aware that the number of beings saved often outweighs the low probability... but I am not particularly confident in this line of reasoning, i.e. in the meta-level of thinking about how to maximize good deeds. That leads to all kinds of crazy-seeming stuff.

Comment author: Will_Newsome 16 June 2011 08:14:13AM 0 points

That leads to all kinds of crazy-seeming stuff.

If it does, something almost definitely went wrong. Biases crept in somewhere between the risk assessment, the outside view correction process, the policy-proposing process, the policy-analyzing process, the policy outside view correction process, the ethical injunction check, and the "(anonymously) ask a few smart people whether some part of this is crazy" step. I'm not just adding unnatural steps; each of those should be separate, and each of them is a place where error can throw everything off. Overconfidence plus conjunction fallacy equals crazy-seeming stuff. And this coming from the guy who is all about taking ideas seriously.