Lukas_Gloor comments on A (small) critique of total utilitarianism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (237)
Yes. The error is that humans aren't good at utilitarianism.
private_messaging has given an example elsewhere: the trouble with utilitarians is that they think they are utilitarians. They then use numbers to convince themselves to do something they would otherwise consider evil.
The Soviet Union was an attempt to build a Friendly government based on utilitarianism. They quickly reached "shoot someone versus dust specks" and went for shooting people.
They weren't that good at lesser utilitarian decisions either, tending to ignore how humans actually behaved in favour of taking their theories and shutting-up-and-multiplying. Then when that didn't work, they did it harder.
I'm sure that someone objecting that the Soviet Union example is not meaningful evidence can, of course, come up with examples that worked out much better.
Why would that be an error? It's not a requirement of an ethical theory that Homo sapiens be good at following it. If we notice that humans are bad at it, maybe we should make AIs or posthumans that are better at it, if we truly view this as the best ethical theory. Besides, if the outcome of people following utilitarianism is really that bad, then utilitarianism itself would demand (it gets meta now) that people follow some other theory that has better outcomes overall (see also Parfit's Reasons and Persons). Another solution is Hare's proposed "Two-Level Utilitarianism". From Wikipedia:
The error is that it's humans who are attempting to implement the utilitarianism. I'm not talking about hypothetical non-human intelligences, and I don't think they were implied in the context.
I don't think hypothetical superhumans would be dramatically different in their ability to employ predictive models under uncertainty. If you increase power so that it is to mankind as mankind is to one amoeba, you only double anything that is fundamentally logarithmic. While in many important cases there are faster approximations, it's magical thinking to expect them everywhere; and there are problems where the errors inherently grow exponentially with time even if the model is magically perfect (the butterfly effect). Plus, of course, models of other intelligences rapidly become unethical as you improve their fidelity (if a model emulates people and puts them through the torture and dust-speck experiences in order to compare values).
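The exponential-error point can be made concrete with a standard toy example (my illustration, not anything from the thread): the chaotic logistic map. Even with a perfect model, a tiny error in the initial state roughly doubles each step, so doubling raw compute or measurement precision buys only a constant amount of extra prediction horizon.

```python
# Sketch of the butterfly-effect claim using the logistic map
# x -> 4x(1-x), a textbook chaotic system. Two trajectories start
# one part in 10^12 apart; the gap grows roughly exponentially
# (Lyapunov exponent ln 2 per step) until they are unrelated.

def logistic(x: float) -> float:
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-12   # initial states differing by 1e-12
divergence = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    divergence.append(abs(x - y))

print(divergence[0])      # still tiny after one step
print(max(divergence))    # order-1 gap within a few dozen steps
```

Pushing the initial error down to one part in 10^24 (squaring your precision) only delays the blow-up by another ~40 steps, which is the "fundamentally logarithmic" return on extra power.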
See also Ends Don't Justify Means (Among Humans): having non-consequentialist rules (e.g. "Thou shalt not murder, even if it seems like a good idea") can be consequentially desirable since we're not capable of being ideal consequentialists.
Oh, indeed. But when you've repeatedly emphasised "shut up and multiply", tacking "by the way, don't do anything weird" on the end strikes me as liable to go unheeded by your readers, particularly when they most need to heed it.