TheOtherDave comments on Alan Carter on the Complexity of Value - Less Wrong
What? That's... not AAT at all.
What possible justification could he have for this? "No one is better at happiness than others, but some people are worse at happiness" is obviously impossible. And if the claim is instead that there's a plateau of "normal" people, all roughly equivalent at converting resources into happiness, with some people falling below that plateau, that sounds more like wishful thinking than a justified empirical claim.
They really don't look that similar to me; they're looking at very different problems and have very different approaches.
The basic problem is that utilitarianism simply doesn't work.
Carter takes the common critique of total utilitarianism and the common critique of average utilitarianism, and says "well, both critiques go away if we try to maximize a combination of total and average." But those are just the common critiques, not the most potent ones. The basic problem with utilitarianism is that utility is difficult to measure and impossible to compare between people, and so neither total nor average utility is something that can actually be calculated.
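To make the move concrete, here is a minimal sketch in Python of the two common critiques and how a blended objective might dodge both. The weighted-sum form, the weight `alpha`, and all the toy numbers are my own illustrative assumptions, not Carter's actual proposal:

```python
# Toy comparison of total, average, and a hypothetical combined objective.
# The weighted-sum form and alpha are illustrative assumptions, not
# Carter's actual formula.

def total_utility(utilities):
    return sum(utilities)

def average_utility(utilities):
    return sum(utilities) / len(utilities)

def combined_utility(utilities, alpha=0.5):
    # Hypothetical blend: alpha weights the total, (1 - alpha) the average.
    return alpha * total_utility(utilities) + (1 - alpha) * average_utility(utilities)

small_happy = [10, 10, 10]   # a few very happy people
vast_meh = [0.1] * 350       # many barely-happy people

# Common critique of total utilitarianism (the "repugnant conclusion"):
# the vast barely-happy population wins on total utility.
print(total_utility(vast_meh) > total_utility(small_happy))            # True: 35 > 30

# Common critique of average utilitarianism: one ecstatic person
# outranks a large, quite happy population.
print(average_utility([10]) > average_utility([9] * 1000))             # True: 10 > 9

# The blend avoids both degenerate cases here, at the cost of an
# arbitrary weight alpha.
print(combined_utility(small_happy) > combined_utility(vast_meh))      # True: 20 > 17.55
print(combined_utility([9] * 1000) > combined_utility([10]))           # True: 4504.5 > 10
```

Note that the sketch still presupposes the very thing the objection denies: numeric, interpersonally comparable utilities to sum and average in the first place.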
Eliezer is trying to tackle the problem of what utilities actually cash out as, so that you can build a machine that can perform preference calculations and not get them horribly wrong. Will Alice be happier with an unlimited supply of cookies, or if she has to strive for them? The options satisfy different desires in different amounts, and so fun theory and the complexity of value deal with the tradeoffs between those desires. If you just built a machine that knew about our desire to feel happy and didn't know about our desire to impact the real world, you would get a population of wireheads: an outcome that many of us now think would be bad, but one we cannot justify calling bad in terms of average or total 'happiness.'
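A minimal sketch of that failure mode, with outcomes, attribute scores, and desire weights that are entirely my own invention: an agent scoring outcomes on felt happiness alone ranks wireheading on top, while one that also scores real-world impact does not:

```python
# Hypothetical two-desire utility model; the outcomes, scores, and
# weights below are illustrative assumptions, not anyone's actual theory.

outcomes = {
    # name: (felt_happiness, real_world_impact), each on a 0-10 scale
    "wirehead": (10, 0),
    "strive for cookies": (6, 7),
    "unlimited cookies": (8, 2),
}

def happiness_only(name):
    happiness, _ = outcomes[name]
    return happiness

def multi_desire(name, w_happy=0.5, w_impact=0.5):
    happiness, impact = outcomes[name]
    return w_happy * happiness + w_impact * impact

# A machine optimizing felt happiness alone picks wireheading...
print(max(outcomes, key=happiness_only))   # wirehead
# ...while one that also values impact on the real world does not.
print(max(outcomes, key=multi_desire))     # strive for cookies
```

The point of the sketch is only that leaving a desire out of the objective silently reorders the top choice; which desires belong in there, and at what weights, is the hard part fun theory is aimed at.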
Do you really mean this, as opposed to "our desire to impact the real world"?
I've edited it to the version you suggested, since it's cleaner for this discussion that way. In general, though, I would separate the desire to impact the world from the desire for the map to match the territory.
(nods) That's fair.