Lukas_Gloor comments on A (small) critique of total utilitarianism - Less Wrong

36 points · Post author: Stuart_Armstrong 26 June 2012 12:36PM




Comment author: Lukas_Gloor 26 June 2012 01:33:34PM 0 points

Yes. The error is that humans aren't good at utilitarianism.

Why would that be an error? It's not a requirement of an ethical theory that Homo sapiens be good at following it. If we notice that humans are bad at it, perhaps we should build AIs or posthumans that are better at it, if we truly regard it as the best ethical theory. Besides, if the outcome of people following utilitarianism is really that bad, then utilitarianism itself would demand (it gets meta here) that people follow some other theory with better overall outcomes (see also Parfit's Reasons and Persons). Another solution is Hare's proposed "two-level utilitarianism". From Wikipedia:

Hare proposed that on a day to day basis, one should think and act like a rule utilitarian and follow a set of intuitive prima facie rules, in order to avoid human error and bias influencing one's decision-making, and thus avoiding the problems that affected act utilitarianism.

Comment author: David_Gerard 26 June 2012 01:38:05PM 1 point

The error is that it's humans who are attempting to implement the utilitarianism. I'm not talking about hypothetical non-human intelligences, and I don't think they were implied in the context.

Comment author: private_messaging 27 June 2012 08:21:59AM 2 points

I don't think hypothetical superhumans would be dramatically better at employing predictive models under uncertainty. If you increase computing power so that it stands to mankind as mankind stands to a single amoeba, you only double anything that is fundamentally logarithmic. In many important cases faster approximations exist, but it's magical thinking to expect them everywhere; and there are problems where the errors inherently grow exponentially with time even if the model is magically perfect (the butterfly effect). Plus, of course, models of other intelligences rapidly become unethical as you improve their fidelity (if the model is emulating people and putting them through the torture and dust-speck experiences in order to compare values).
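The butterfly-effect point can be illustrated with a toy chaotic system (a sketch added here, not part of the original thread; the logistic map at r = 4 is chosen only as a standard example of chaos): even with a *perfect* model, a tiny error in the measured initial state grows roughly exponentially, so prediction fails after a few dozen steps no matter how much computing power is applied.

```python
# Toy demonstration of the butterfly effect: the model below is exact,
# yet a 1e-12 error in the initial condition destroys the forecast.
def logistic(x, r=4.0):
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

true_path = trajectory(0.4, 60)            # the "real world"
forecast = trajectory(0.4 + 1e-12, 60)     # same perfect model, tiny measurement error

divergence = [abs(a - b) for a, b in zip(true_path, forecast)]
# At r = 4 the error roughly doubles per step (Lyapunov exponent ln 2),
# so it climbs from 1e-12 to order 1 within about 40 steps.
```

Throwing more hardware at this only buys logarithmically more forecast horizon: each extra step of prediction requires roughly halving the initial measurement error.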

Comment author: fubarobfusco 26 June 2012 07:25:38PM 0 points

See also Ends Don't Justify Means (Among Humans): having non-consequentialist rules (e.g. "Thou shalt not murder, even if it seems like a good idea") can be consequentially desirable since we're not capable of being ideal consequentialists.

Comment author: David_Gerard 26 June 2012 09:37:54PM 7 points

Oh, indeed. But when you've repeatedly emphasised "shut up and multiply", tacking "by the way, don't do anything weird" on the end strikes me as liable to go unheeded by your readers, particularly when they really need to heed it.