TheOtherDave comments on Alan Carter on the Complexity of Value - Less Wrong

Post author: Ghatanathoah 10 May 2012 07:23AM · 30 points




Comment author: Vaniver 10 May 2012 03:08:52PM * 8 points

According to Aumann’s Agreement theorem, such a concurrence provides a tiny amount of Bayesian evidence that you’re onto something.

What? That's... not AAT at all.

such pleasure-wizards, to put it bluntly, do not exist... But their opposites do.

What possible justification could he have for this? "No one is better at happiness than others, but some people are worse at happiness" is obviously impossible, and if the claim is that there's a plateau of "normal" people who are all roughly equivalent at converting resources into happiness and then people who are worse than that plateau, that sounds more like wishful thinking than a justified empirical claim.

On closer inspection it was not hard to see why Carter had developed theories so close to those of Eliezer and other members of the Less Wrong and SIAI communities.

They really don't look that similar to me; they're looking at very different problems and have very different approaches.

The basic problem is that utilitarianism simply doesn't work.

Carter takes the common critique of total utilitarianism and the common critique of average utilitarianism, and says "well, both critiques go away if we try to maximize a combination of total and average." But those are just the common critiques, not the most potent ones. The basic problem with utilitarianism is that utility is difficult to measure and impossible to compare, and so both total and average utilitarianism are not things that can actually be calculated.

Eliezer is trying to tackle the problem of what utilities actually cash out as, so that you can build a machine that can perform preference calculations and not get them horribly wrong. Will Alice be happier with an unlimited supply of cookies, or if she has to strive for them? The options satisfy different desires in different amounts, and so fun theory and complexity of value deal with the tradeoffs between different desires. If you just built a machine that knew about our desire to feel happy and didn't know about our desire to impact the real world, you would get a population of wireheads, something that many of us think would be a bad outcome now, but cannot justify that judgment in terms of average or total 'happiness.'

Comment author: TheOtherDave 10 May 2012 03:35:38PM 5 points

...knew about our desire to feel happy and didn't know about our desire to believe we're impacting the real world

Do you really mean this, as opposed to "our desire to impact the real world"?

Comment author: Vaniver 10 May 2012 03:46:32PM 1 point

I've edited it to the version you said, as it's cleaner for this discussion that way. In general I think I would separate the desire to impact and the desire for the map to match the territory.

Comment author: TheOtherDave 10 May 2012 03:59:07PM 0 points

(nods) That's fair.