Not for the Sake of Pleasure Alone

36 lukeprog 11 June 2011 11:21PM

Related: Not for the Sake of Happiness (Alone), Value is Fragile, Fake Fake Utility Functions, You cannot be mistaken about (not) wanting to wirehead, Utilons vs. Hedons, Are wireheads happy?

When someone tells me that all human action is motivated by the desire for pleasure, or that we can solve the Friendly AI problem by programming a machine superintelligence to maximize pleasure, I use a two-step argument to persuade them that things are more complicated than that.

First, I present them with a variation on Nozick's experience machine, something like this:

Suppose that an advanced team of neuroscientists and computer scientists could hook your brain up to a machine that gave you maximal, beyond-orgasmic pleasure for the rest of an abnormally long life. They would then blast you and the pleasure machine into deep space at near light-speed, so that you could never be interfered with. Would you let them do this for you?

Most people say they wouldn't choose the pleasure machine. They begin to realize that even though they usually experience pleasure when they get what they desire, they want more than just pleasure. They also want to visit Costa Rica and have good sex and help their loved ones succeed.

But we can be mistaken when inferring our desires from such intuitions, so I follow this up with some neuroscience.

continue reading »

Nonparametric Ethics

27 Eliezer_Yudkowsky 20 June 2009 11:31AM

(Inspired by a recent conversation with Robin Hanson.)

Robin Hanson, in his essay on "Minimal Morality", suggests that the unreliability of our moral reasoning should lead us to seek simple moral principles:

"In the ordinary practice of fitting a curve to a set of data points, the more noise one expects in the data, the simpler a curve one fits to that data.  Similarly, when fitting moral principles to the data of our moral intuitions, the more noise we expect in those intuitions, the simpler a set of principles we should use to fit those intuitions.  (This paper elaborates.)"

In "the limit of expecting very large errors of our moral intuitions", says Robin, we should follow an extremely simple principle - the simplest principle we can find that seems to compress as much morality as possible.  And that principle, says Robin, is that it is usually good for people to get what they want, if no one else objects.

Now I myself carry on something of a crusade against trying to compress morality down to One Great Moral Principle.  I have developed at some length the thesis that human values are, in actual fact, complex, but that numerous biases lead us to underestimate and overlook this complexity.  From a Friendly AI perspective, the word "want" in the English sentence above is a magical category.

But Robin was making an argument not about Friendly AI but about human ethics: he was proposing that, in the presence of probable errors in moral reasoning, we should look for principles that seem simple to us to carry out at the end of the day.  The more we distrust ourselves, the simpler the principles.

This argument from fitting noisy data is a kind of logic that can apply even when you have prior reason to believe the underlying generator is in fact complicated.  You'll still get better predictions from the simpler model, because it's less sensitive to noise.
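To see the curve-fitting logic concretely, here is a minimal sketch (my own illustration, not anything from Robin's paper or this post) using NumPy: a deliberately complicated underlying generator, a handful of very noisy samples, and a comparison of held-out error for a straight line versus a flexible degree-9 polynomial.  With noise this heavy, the simple fit usually predicts better even though the true curve is complicated.

```python
# Illustration of fitting noisy data: with heavy noise, a simpler model
# often predicts better even when the true generator is complicated.
import numpy as np

rng = np.random.default_rng(0)

def true_generator(x):
    # A deliberately complicated underlying function.
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

# A few very noisy training observations.
x_train = rng.uniform(-1, 1, 15)
y_train = true_generator(x_train) + rng.normal(0.0, 1.0, x_train.size)

# Many noise-free held-out points to measure prediction error.
x_test = rng.uniform(-1, 1, 1000)
y_test = true_generator(x_test)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out MSE = {mse:.3f}")
```

The analogy to Robin's proposal: the noisier you believe your moral intuitions to be, the lower the "degree" of the moral principle you should fit to them.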

Even so, my belief that human values are in fact complicated leads me to two objections and an alternative proposal:

continue reading »

Value is Fragile

38 Eliezer_Yudkowsky 29 January 2009 08:46AM

Followup to: The Fun Theory Sequence, Fake Fake Utility Functions, Joy in the Merely Good, The Hidden Complexity of Wishes, The Gift We Give To Tomorrow, No Universally Compelling Arguments, Anthropomorphic Optimism, Magical Categories, ...

If I had to pick a single statement that relies on more Overcoming Bias content I've written than any other, that statement would be:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

"Well," says the one, "maybe according to your provincial human values, you wouldn't like it.  But I can easily imagine a galactic civilization full of agents who are nothing like you, yet find great value and interest in their own goals.  And that's fine by me.  I'm not so bigoted as you are.  Let the Future go its own way, without trying to bind it forever to the laughably primitive prejudices of a pack of four-limbed Squishy Things -"

My friend, I have no problem with the thought of a galactic civilization vastly unlike our own... full of strange beings who look nothing like me even in their own imaginations... pursuing pleasures and experiences I can't begin to empathize with... trading in a marketplace of unimaginable goods... allying to pursue incomprehensible objectives... people whose life-stories I could never understand.

That's what the Future looks like if things go right.

If the chain of inheritance from human (meta)morals is broken, the Future does not look like this.  It does not end up magically, delightfully incomprehensible.

With very high probability, it ends up looking dull.  Pointless.  Something whose loss you wouldn't mourn.

Seeing this as obvious is what requires that immense amount of background explanation.

continue reading »

Thou Art Godshatter

60 Eliezer_Yudkowsky 13 November 2007 07:38PM

Followup to: An Alien God, Adaptation-Executers not Fitness-Maximizers, Evolutionary Psychology

Before the 20th century, not a single human being had an explicit concept of "inclusive genetic fitness", the sole and absolute obsession of the blind idiot god.  We have no instinctive revulsion toward condoms or oral sex.  Our brains, those supreme reproductive organs, don't perform a check for reproductive efficacy before granting us sexual pleasure.

Why not?  Why aren't we consciously obsessed with inclusive genetic fitness?  Why did the Evolution-of-Humans Fairy create brains that would invent condoms?  "It would have been so easy," thinks the human, who can design new complex systems in an afternoon.

continue reading »