
Value is Fragile

Post author: Eliezer_Yudkowsky 29 January 2009 08:46AM

Followup to: The Fun Theory Sequence, Fake Fake Utility Functions, Joy in the Merely Good, The Hidden Complexity of Wishes, The Gift We Give To Tomorrow, No Universally Compelling Arguments, Anthropomorphic Optimism, Magical Categories, ...

If I had to pick a single statement that relies on more Overcoming Bias content I've written than any other, that statement would be:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

"Well," says the one, "maybe according to your provincial human values, you wouldn't like it.  But I can easily imagine a galactic civilization full of agents who are nothing like you, yet find great value and interest in their own goals.  And that's fine by me.  I'm not so bigoted as you are.  Let the Future go its own way, without trying to bind it forever to the laughably primitive prejudices of a pack of four-limbed Squishy Things -"

My friend, I have no problem with the thought of a galactic civilization vastly unlike our own... full of strange beings who look nothing like me even in their own imaginations... pursuing pleasures and experiences I can't begin to empathize with... trading in a marketplace of unimaginable goods... allying to pursue incomprehensible objectives... people whose life-stories I could never understand.

That's what the Future looks like if things go right.

If the chain of inheritance from human (meta)morals is broken, the Future does not look like this.  It does not end up magically, delightfully incomprehensible.

With very high probability, it ends up looking dull.  Pointless.  Something whose loss you wouldn't mourn.

Seeing this as obvious is what requires that immense amount of background explanation.

And I'm not going to iterate through all the points and winding pathways of argument here, because that would take us back through 75% of my Overcoming Bias posts.  Except to remark on how many different things must be known to constrain the final answer.

Consider the incredibly important human value of "boredom" - our desire not to do "the same thing" over and over and over again.  You can imagine a mind that contained almost the whole specification of human value, almost all the morals and metamorals, but left out just this one thing -

- and so it spent until the end of time, and until the farthest reaches of its light cone, replaying a single highly optimized experience, over and over and over again.

Or imagine a mind that contained almost the whole specification of which sort of feelings humans most enjoy - but not the idea that those feelings had important external referents.  So that the mind just went around feeling like it had made an important discovery, feeling it had found the perfect lover, feeling it had helped a friend, but not actually doing any of those things - having become its own experience machine.  And if the mind pursued those feelings and their referents, it would be a good future and true; but because this one dimension of value was left out, the future became something dull.  Boring and repetitive, because although this mind felt that it was encountering experiences of incredible novelty, this feeling was in no wise true.

Or the converse problem - an agent that contains all the aspects of human value, except the valuation of subjective experience.  So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so.  This, I admit, I don't quite know to be possible.  Consciousness does still confuse me to some extent.  But a universe with no one to bear witness to it, might as well not be.

Value isn't just complicated, it's fragile.  There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null.  A single blow and all value shatters.  Not every single blow will shatter all value - but more than one possible "single blow" will do so.

And then there are the long defenses of this proposition, which rely on 75% of my Overcoming Bias posts, so that it would be more than one day's work to summarize all of it.  Maybe some other week.  There are so many branches I've seen that discussion tree go down.

After all - a mind shouldn't just go around having the same experience over and over and over again.  Surely no superintelligence would be so grossly mistaken about the correct action?

Why would any supermind want something so inherently worthless as the feeling of discovery without any real discoveries?  Even if that were its utility function, wouldn't it just notice that its utility function was wrong, and rewrite it?  It's got free will, right?

Surely, at least boredom has to be a universal value.  It evolved in humans because it's valuable, right?  So any mind that doesn't share our dislike of repetition, will fail to thrive in the universe and be eliminated...

If you are familiar with the difference between instrumental values and terminal values, and familiar with the stupidity of natural selection, and you understand how this stupidity manifests in the difference between executing adaptations versus maximizing fitness, and you know this turned instrumental subgoals of reproduction into decontextualized unconditional emotions...

...and you're familiar with how the tradeoff between exploration and exploitation works in Artificial Intelligence...

...then you might be able to see that the human form of boredom that demands a steady trickle of novelty for its own sake isn't a grand universal, but just a particular algorithm that evolution coughed out into us.  And you might be able to see how the vast majority of possible expected utility maximizers would engage in only so much efficient exploration, and spend most of their time exploiting the best alternative found so far, over and over and over.
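As an illustrative sketch of that exploration/exploitation tradeoff (mine, not part of the original post), consider a toy multi-armed bandit: an epsilon-greedy expected-utility maximizer whose exploration rate dwindles over time ends up pulling the best arm it has found almost every single step.

```python
import random

def run_bandit(payoffs, steps=10000, seed=0):
    """Epsilon-greedy maximizer on a toy deterministic bandit:
    try each arm once, then explore with probability 1/t,
    otherwise exploit the best arm found so far."""
    rng = random.Random(seed)
    n = len(payoffs)
    estimates = [0.0] * n  # running-mean payoff estimates
    counts = [0] * n       # pulls per arm

    def pull(arm):
        counts[arm] += 1
        estimates[arm] += (payoffs[arm] - estimates[arm]) / counts[arm]

    for arm in range(n):                # initial pass: try everything once
        pull(arm)
    for t in range(1, steps - n + 1):
        if rng.random() < 1.0 / t:      # exploration dwindles as 1/t
            pull(rng.randrange(n))
        else:                           # exploit the best alternative found so far
            pull(max(range(n), key=lambda a: estimates[a]))
    return counts

counts = run_bandit([0.2, 0.5, 0.9])
# Nearly all 10,000 pulls go to the best arm, "over and over and over".
```

The 1/t schedule is one arbitrary choice among many; the point is only that once exploration has paid for itself, a pure expected-utility maximizer has no further terminal interest in novelty.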

That's a lot of background knowledge, though.

And so on and so on and so on through 75% of my posts on Overcoming Bias, and many chains of fallacy and counter-explanation.  Some week I may try to write up the whole diagram.  But for now I'm going to assume that you've read the arguments, and just deliver the conclusion:

We can't relax our grip on the future - let go of the steering wheel - and still end up with anything of value.

And those who think we can -

- they're trying to be cosmopolitan.  I understand that.  I read those same science fiction books as a kid:  The provincial villains who enslave aliens for the crime of not looking just like humans.  The provincial villains who enslave helpless AIs in durance vile on the assumption that silicon can't be sentient.  And the cosmopolitan heroes who understand that minds don't have to be just like us to be embraced as valuable -

I read those books.  I once believed them.  But the beauty that jumps out of one box, is not jumping out of all boxes.  (This being the moral of the sequence on Lawful Creativity.)  If you leave behind all order, what is left is not the perfect answer, what is left is perfect noise.  Sometimes you have to abandon an old design rule to build a better mousetrap, but that's not the same as giving up all design rules and collecting wood shavings into a heap, with every pattern of wood as good as any other.  The old rule is always abandoned at the behest of some higher rule, some higher criterion of value that governs.

If you loose the grip of human morals and metamorals - the result is not mysterious and alien and beautiful by the standards of human value.  It is moral noise, a universe tiled with paperclips.  To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone.

Relax the grip of human value upon the universe, and it will end up seriously valueless.  Not, strange and alien and wonderful, shocking and terrifying and beautiful beyond all human imagination.  Just, tiled with paperclips.

It's only some humans, you see, who have this idea of embracing manifold varieties of mind - of wanting the Future to be something greater than the past - of being not bound to our past selves - of trying to change and move forward.

A paperclip maximizer just chooses whichever action leads to the greatest number of paperclips.
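That decision rule is nothing more than an argmax over expected paperclips. A minimal sketch (the action names and numbers here are hypothetical, purely for illustration):

```python
def paperclip_maximizer(actions, expected_paperclips):
    """Choose whichever action leads to the greatest expected
    number of paperclips -- nothing else enters the decision."""
    return max(actions, key=expected_paperclips)

# Hypothetical actions and payoffs, purely for illustration:
outcomes = {
    "preserve a quaint human civilization": 0,
    "build a wonderfully alien art installation": 0,
    "convert all reachable matter into paperclips": 10**30,
}
choice = paperclip_maximizer(outcomes, outcomes.get)
# choice == "convert all reachable matter into paperclips"
```

Note what is absent: no term for novelty, beauty, or sentience ever appears in the comparison, so no such term can influence the outcome.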

No free lunch.  You want a wonderful and mysterious universe?  That's your value.  You work to create that value.  Let that value exert its force through you who represents it, let it make decisions in you to shape the future.  And maybe you shall indeed obtain a wonderful and mysterious universe.

No free lunch.  Valuable things appear because a goal system that values them takes action to create them.  Paperclips don't materialize from nowhere for a paperclip maximizer.  And a wonderfully alien and mysterious Future will not materialize from nowhere for us humans, if our values that prefer it are physically obliterated - or even disturbed in the wrong dimension.  Then there is nothing left in the universe that works to make the universe valuable.

You do have values, even when you're trying to be "cosmopolitan", trying to display a properly virtuous appreciation of alien minds.  Your values are then faded further into the invisible background - they are less obviously human.  Your brain probably won't even generate an alternative so awful that it would wake you up, make you say "No!  Something went wrong!" even at your most cosmopolitan.  E.g. "a nonsentient optimizer absorbs all matter in its future light cone and tiles the universe with paperclips".  You'll just imagine strange alien worlds to appreciate.

Trying to be "cosmopolitan" - to be a citizen of the cosmos - just strips off a surface veneer of goals that seem obviously "human".

But if you wouldn't like the Future tiled over with paperclips, and you would prefer a civilization of...

...sentient beings...

...with enjoyable experiences...

...that aren't the same experience over and over again...

...and are bound to something besides just being a sequence of internal pleasurable feelings...

...learning, discovering, freely choosing...

...well, I've just been through the posts on Fun Theory that went into some of the hidden details on those short English words.

Values that you might praise as cosmopolitan or universal or fundamental or obvious common sense, are represented in your brain just as much as those values that you might dismiss as merely human.  Those values come of the long history of humanity, and the morally miraculous stupidity of evolution that created us.  (And once I finally came to that realization, I felt less ashamed of values that seemed 'provincial' - but that's another matter.)

These values do not emerge in all possible minds.  They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer.

Touch too hard in the wrong dimension, and the physical representation of those values will shatter - and not come back, for there will be nothing left to want to bring it back.

And the referent of those values - a worthwhile universe - would no longer have any physical reason to come into being.

Let go of the steering wheel, and the Future crashes.

Comments (81)

Comment author: Will_Pearson 29 January 2009 10:32:01AM 1 point [-]

"Except to remark on how many different things must be known to constrain the final answer."

What would you estimate the probability of each thing being correct is?

Comment author: CannibalSmith 29 January 2009 10:33:19AM -3 points [-]

What are human morals and metamorals?

Comment author: Joshua_Fox 29 January 2009 10:39:04AM 1 point [-]

What about "near-human" morals, like, say, Kzinti: Where the best of all possible worlds contains hierarchies, duels to the death, and subsentient females; along with exploration, technology, and other human-like activities. Though I find their morality repugnant for humans, I can see that they have the moral "right" to it. Is human morality, then, in some deep sense better than those?

Comment deleted 22 March 2010 02:19:27PM [-]
Comment author: Eliezer_Yudkowsky 29 January 2009 12:11:09PM 5 points [-]

Pearson, it's not that kind of chaining. More like trying to explain to someone why their randomly chosen lottery ticket won't win (big space, small target, poor aim) when their brain manufactures argument after argument after different argument for why they'll soon be rich.

The core problem is simple. The targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere, is the hard part.

What are the odds that every proof of God's existence is wrong, when there are so many proofs? Pretty high. A selective search for plausible-sounding excuses won't change reality itself. But knowing the specific refutations - being able to pinpoint the flaws in every supposed proof - that might take some study.

Comment author: Robin_Hanson2 29 January 2009 01:14:48PM 0 points [-]

I have read and considered all of Eliezer's posts, and still disagree with him on this his grand conclusion. Eliezer, do you think the universe was terribly unlikely and therefore terribly lucky to have coughed up human-like values, rather than some other values? Or is it only in the stage after ours where such rare good values were unlikely to exist?

Comment author: Jordan 29 January 2009 01:40:46PM 2 points [-]

I imagine a distant future with just a smattering of paper clip maximizers -- having risen in different galaxies with slightly different notions of what a paperclip is -- might actually be quite interesting. But even so, so what? Screw the paperclips, even if they turn out to be more elegant and interesting than us!

Comment author: Eliezer_Yudkowsky 29 January 2009 02:00:58PM 6 points [-]

Robin, I discussed this in The Gift We Give To Tomorrow as a "moral miracle" that of course isn't really a miracle at all. We're judging the winding path that evolution took to human value, and judging it as fortuitous using our human values. (See also, "Where Recursive Justification Hits Bottom", "The Ultimate Source", "Created Already In Motion", etcetera.)

Comment author: Ian_C. 29 January 2009 03:08:06PM 2 points [-]

Evolution (as an algorithm) doesn't work on the indestructible. Therefore all naturally-evolved beings must be fragile to some extent, and must have evolved to value protecting their fragility.

Yes, a designed life form can have paper clip values, but I don't think we'll encounter any naturally occurring beings like this. So our provincial little values may not be so provincial after all, but common on many planets.

Comment author: Lightwave 29 January 2009 04:31:58PM 2 points [-]

Ian C.: "Yes, a designed life form can have paper clip values, but I don't think we'll encounter any naturally occurring beings like this. So our provincial little values may not be so provincial after all, but common on many planets." Almost all life forms (especially simpler ones) are sort of paperclip maximizers; they just make copies of themselves ad infinitum. If life could leave this planet and use materials more efficiently, it would consume everything. Good for us evolution couldn't optimize them to such an extent.

Comment author: Emile 29 January 2009 04:35:02PM 0 points [-]

Ian: some individual values of other naturally-evolved beings may be recognizable, but that doesn't mean that the value system as a whole will.

I'd expect that carnivores, or herbivores, or non-social creatures, or hermaphrodites, or creatures with a different set of senses - would probably have some quite different values.

And there can be different brain architectures, different social/political organisation, different transwhateverism technology, etc.

Comment author: Patrick_(orthonormal) 29 January 2009 04:48:38PM 1 point [-]

Roko:

Not so fast. We like some of our evolved values at the expense of others. Ingroup-outgroup dynamics, the way we're most motivated only when we have someone to fear and hate: this too is an evolved value, and most of the people here would prefer to do away with it if we can.

The interesting part of moral progress is that the values etched into us by evolution don't really need to be consistent with each other, so as we become more reflective and our environment changes to force new situations upon us, we realize that they conflict with one another. The analysis of which values have been winning and which have been losing (in different times and places) is another fascinating one...

Comment author: Carl_Shulman 29 January 2009 05:18:31PM 0 points [-]

"Ingroup-outgroup dynamics, the way we're most motivated only when we have someone to fear and hate: this too is an evolved value, and most of the people here would prefer to do away with it if we can."

So you would want to eliminate your special care for family, friends, and lovers? Or are you really just saying that your degree of ingroup-outgroup concern is less than average and you wish everyone was as cosmopolitan as you? Or, because ingroup-concern is indexical, it results in different values for different ingroups, so you wish every shared your precise ingroup concerns? Or that you are in a Prisoner's Dilemma with other groups (or worse), and you think the benefit of changing the values of others would be enough for you to accept a deal in which your own ingroup-concern was eliminated?

http://www.overcomingbias.com/2008/03/unwanted-morali.html

Comment author: Z._M._Davis 29 January 2009 06:15:30PM 4 points [-]

I suspect it gets worse. Eliezer seems to lean heavily on the psychological unity of humankind, but there's a lot of room for variance within that human dot. My morality is a human morality, but that doesn't mean I'd agree with a weighted sum across all possible extrapolated human moralities. So even if you preserve human morals and metamorals, you could still end up with a future we'd find horrifying (albeit better than a paperclip galaxy). It might be said that that's only a Weirdtopia, that you're horrified at first, but then you see that it's actually for the best after all. But if "the utility function [really] isn't up for grabs," then I'll be horrified for as long as I damn well please.

Comment author: Zack_M_Davis 12 February 2013 04:15:20AM 7 points [-]

I'll be horrified for as long as I damn well please.

Well, okay, but the Weirdtopia thesis under consideration makes the empirically falsifiable prediction that "as long as you damn well please" isn't actually a very long time. Also, I call scope neglect: your puny human brain can model some aspects of your local environment, which is a tiny fraction of this Earth, but you're simply not competent to judge the entire future, which is much larger.

Comment author: shokwave 12 February 2013 05:20:59AM 10 points [-]

I would like to point out that you're probably replying to your past self. This gives me significant amusement.

Comment author: Tim_Tyler 29 January 2009 06:40:59PM -2 points [-]

This post seems almost totally wrong to me. For one thing, its central claim - that without human values the future would, with high probability be dull is not even properly defined.

To be a little clearer, one would need to say something like: if you consider a specified enumeration over the space of possible utility functions, a random small sample from that space would be "dull" (it might help to say a bit more about what dullness means too, but that is a side issue for now).

That claim might well be true for typical "shortest-first" enumerations in sensible languages - but it is not a very interesting claim - since the dull utility functions would be those which led to an attainable goal - such as "count up to 10 and then stop".

The "open-ended" utility functions - the ones that resulted in systems that would spread out - would almost inevitably lead to rich complexity. You can't turn the galaxy into paper-clips (or whatever) without extensively mastering science, technology, intergalactic flight, nanotechnology - and so on. So, you need scientists and engineers - and other complicated and interesting things. This conclusion seems so obvious as to hardly be worth discussing to me.

I've explained all this to Eliezer before. After reading this post I still have very little idea about what it is that he isn't getting. He seems to think that making paper clips is boring. However, it is no more boring than making DNA sequences, and that's the current aim of most living systems.

A prime-seeking civilisation is at a competitive disadvantage against one that doesn't have silly, arbitrary bits tacked on to its utility function. It is more likely to be wiped out in a battle with an alien race - and it's more likely to suffer a mutiny from within. However, that is about all. It is unlikely to lack science, technology, or other interesting stuff.

Comment author: akshatrathi 29 November 2009 11:39:54PM 2 points [-]

However, they are not any more boring than making DNA sequences, and that's the current aim of most living systems.

Making a DNA sequence will count as an extremely low-level activity (http://lesswrong.com/lw/xr/in_praise_of_boredom/) which is necessary to support non-boring activities. It is a very simple argument that these are the very activities we stop thinking about, so that we can concentrate on novel activities.

Comment author: benelliott 03 June 2011 11:46:38PM *  2 points [-]

So, you need scientists and engineers - and other complicated and interesting things. This conclusion seems so obvious as to hardly be worth discussing to me.

But you don't need very many, and you're free to enslave them while they work then kill them once they're done. They might not need to be conscious, and they certainly don't need to enjoy their work.

Probably, they will just be minor subroutines of the original AI, deleted and replaced once they learn everything necessary, which won't take long for a smart AI.

Comment author: Patrick_(orthonormal) 29 January 2009 06:50:25PM 1 point [-]

Carl:

I don't think that automatic fear, suspicion and hatred of outsiders is a necessary prerequisite to a special consideration for close friends, family, etc. Also, yes, outgroup hatred makes cooperation on large-scale Prisoner's Dilemmas even harder than it generally is for humans.

But finally, I want to point out that we are currently wired so that we can't get as motivated to face a huge problem if there's no villain to focus fear and hatred on. The "fighting" circuitry can spur us to superhuman efforts and successes, but it doesn't seem to trigger without an enemy we can characterize as morally evil.

If a disease of some sort threatened the survival of humanity, governments might put up a fight, but they'd never ask (and wouldn't receive) the level of mobilization and personal sacrifice that they got during World War II— although if they were crafty enough to say that terrorists caused it, they just might. Concern for loved ones isn't powerful enough without an idea that an evil enemy threatens them.

Wouldn't you prefer to have that concern for loved ones be a sufficient motivating force?

Comment author: Manon_de_Gaillande 29 January 2009 07:28:38PM 1 point [-]

@Eliezer: Can you expand on the "less ashamed of provincial values" part?

@Carl Shulman: I don't know about him, but for myself, HELL YES I DO. Family - they're just randomly selected by the birth lottery. Lovers - falling in love is some weird stuff that happens to you regardless of whether you want it, reaching into your brain to change your values: like, dude, ew - I want affection and tenderness and intimacy and most of the old interpersonal fun and much more new interaction, but romantic love can go right out of the window with me. Friends - I do value friendship; I'm confused; maybe I just value having friends, and it'd rock to be close friends with every existing mind; maybe I really value preferring some people to others; but I'm sure about this: I should not, and do not want to, worry more about a friend with the flu than about a stranger with cholera.

@Robin Hanson: HUH? You'd really expect natural selection to come up with minds who enjoy art, mourn dead strangers and prefer a flawed but sentient woman to a perfect catgirl on most planets?

This talk about "'right' means right" still makes me damn uneasy. I don't have more to show for it than "still feels a little forced" - when I visualize a humane mind (say, a human) and a paperclipper (a sentient, moral one) looking at each other in horror and knowing there is no way they could agree about whether using atoms to feed babies or make paperclips, I feel *wrong*. I think about the paperclipper in exactly the same way it thinks about me! Sure, that's also what happens when I talk to a creationist, but we're trying to approximate external truth; and if our priors were too stupid, our genetic line would be extinct (or at least that's what I think) - but morality doesn't work like probability, it's not trying to approximate anything external. So I don't feel so happier about the moral miracle that made us than about the one that makes the paperclipper.

Comment author: Carl_Shulman 29 January 2009 07:37:32PM 0 points [-]

Patrick,

Those are instrumental reasons, and could be addressed in other ways. I was trying to point out that giving up big chunks of our personality for instrumental benefits can be a real trade-off.

http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/

Comment author: Daniel4 29 January 2009 10:27:56PM 1 point [-]

Jordan: "I imagine a distant future with just a smattering of paper clip maximizers -- having risen in different galaxies with slightly different notions of what a paperclip is -- might actually be quite interesting."

That's exactly how I imagine the distant future. And I very much like to point to the cyclic cellular automaton (java applet) as a visualization. Actually, I speculate that we live in a small part of the space-time continuum not yet eaten by a paper clip maximizer. Now you may ask: Why don't we see huge blobs of paper clip maximizers expanding on the night sky? My answer is that they are expanding at the speed of light in every direction.

Note: I abused the term paper clip maximizer somewhat. Originally I called these things Expanding Space Amoebae, but PCM is more OB.

Comment author: Eliezer_Yudkowsky 29 January 2009 10:32:36PM 5 points [-]

Probability of an evolved alien species:

(A) Possessing analogues of pleasure and pain: HIGH. Reinforcement learning is simpler than consequentialism for natural selection to stumble across.

(B) Having a human idiom of boredom that desires a steady trickle of novelty: MEDIUM. This has to do with acclimation and adjustment as a widespread neural idiom, and the way that we try to abstract that as a moral value. It's fragile but not impossible.

(C) Having a sense of humor: LOW.

Probability of an expected paperclip maximizer having analogous properties, if it originated as a self-improving code soup (rather than by natural selection), or if it was programmed over a competence threshold by foolish humans and then exploded:

(A) MEDIUM

(B) LOW

(C) LOW

Comment author: Constant2 29 January 2009 10:40:39PM 0 points [-]

the vast majority of possible expected utility maximizers would engage in only so much efficient exploration, and spend most of their time exploiting the best alternative found so far, over and over and over.

I'm not convinced of that. First, "vast majority" needs to use an appropriate measure, one that is applicable to evolutionary results. If, when two equally probable mutations compete in the same environment, one of those mutations wins, making the other extinct, then the winner needs to be assigned the far greater weight. So, for example, if humans were to compete against a variant of human without the boredom instinct, who would win?

Second, it would seem easier to build (or mutate into) something that keeps going forever than it is to build something that goes for a while then stops. Cancer, for example, just keeps going and going, and it takes a lot of bodily tricks to put a stop to that.

Comment author: Constant2 29 January 2009 10:46:12PM 0 points [-]

it would seem easier to build (or mutate into) something that keeps going forever than it is to build something that goes for a while then stops.

On reflection, I realize this point might be applied to repetitive drudgery. But I was applying it to the behavior "engage in just so much efficient exploration." My point is that it may be easier to mutate into something that explores and explores and explores, than it would be to mutate into something that explores for a while then stops.

Comment author: Tim_Tyler 29 January 2009 10:55:56PM -1 points [-]

Thanks for the probability assessments. What is missing are supporting arguments. What you think is relatively clear - but why you think it is not.

...and what's the deal with mentioning a "sense of humour"? What has that to do with whether a civilization is complex and interesting? Whether our distant descendants value a sense of humour or not seems like an irrelevance to me. I am more concerned with whether they "make it" or not - factors affecting whether our descendants outlast the exploding sun - or whether the seed of human civilisation is obliterated forever.

Comment author: Jeffrey_Soreff 30 January 2009 12:00:33AM 0 points [-]

@Jordan - agreed.

I think the big difference in expected complexity is between sampling the space of possible singletons' algorithms and sampling the space of competitive entities. I agree with Eliezer that an imprecisely chosen value function, if relentlessly optimized, is likely to yield a dull universe. To my mind the key is that the ability to relentlessly optimize one function only exists if a singleton gets and keeps an overwhelming advantage over everything else. If this does not happen, we get competing entities with the computationally difficult problem of outsmarting each other. Under this scenario, while I might not like the detailed results, I'd expect them to be complex to much the same extent and for much the same reasons as living organisms are complex.

Comment author: Dagon 30 January 2009 12:01:17AM 0 points [-]

What if I want a wonderful and non-mysterious universe? Your current argument seems to be that there's no such thing. I don't follow why this is so. "Fun" (defined as desire for novelty) may be the simplest way to build a strategy of exploration, but it's not obvious that it's the only one, is it?

I'd like to see a series on "theory of motivation" that explores other options besides novelty and fun as prime directors of optimization processes that can improve the universe (in their eyes, and maybe even in ours).

Comment author: Jotaf 30 January 2009 12:04:09AM 0 points [-]

"This talk about "'right' means right" still makes me damn uneasy. I don't have more to show for it than "still feels a little forced" - when I visualize a humane mind (say, a human) and a paperclipper (a sentient, moral one) looking at each other in horror and knowing there is no way they could agree about whether using atoms to feed babies or make paperclips, I feel *wrong*. I think about the paperclipper in exactly the same way it thinks about me! Sure, that's also what happens when I talk to a creationist, but we're trying to approximate external truth; and if our priors were too stupid, our genetic line would be extinct (or at least that's what I think) - but morality doesn't work like probability, it's not trying to approximate anything external. So I don't feel so happier about the moral miracle that made us than about the one that makes the paperclipper."

Oh my, this is so wrong. So you're postulating that the paperclipper would be extinct too due to natural selection? Somehow I don't see the mechanisms of natural selection applying to that. With it being created once by humans and then exploding, and all that.

If 25% of its "moral drive" is the result of a programming error, is it still "understandable and as much of a worthy creature/shaper of the Universe" as us? This is the cosmopolitan view that Eliezer describes; and I don't see how you're convinced that admiring static is just as good as admiring evolved structure. It might just be bias, but the latter seems much better. Order > chaos, no?

Comment author: Jordan 30 January 2009 01:09:49AM 3 points [-]

@Jotaf, "Order > chaos, no?"

Imagine God shows up tomorrow. "Everyone, hey, yeah. So I've got this other creation and they're super moral. Man, moral freaks, let me tell you. Make Mennonites look Shintoist. And, sure, I like them better than you. It's why I'm never around, sorry. Thing is, their planet is about to get eaten by a supernova. So.. I'm giving them the moral green light to invade Earth. It's been real."

I'd be the first to sign up for the resistance. Who cares about moral superiority? Are we more moral than a paperclip maximizer? Are human ideals 'better'? Who cares? I don't want an OfficeMax universe, so I'll take up arms against a paperclip maximizer, whether it's blessed by God or not.

Comment author: Patrick_(orthonormal) 30 January 2009 01:30:51AM 0 points [-]

Carl:

Those are instrumental reasons, and could be addressed in other ways.

I wouldn't want to modify/delete hatred for instrumental reasons, but on behalf of the values that seem to clash almost constantly with hatred. Among those are the values I meta-value, including rationality and some wider level of altruism.

I was trying to point out that giving up big chunks of our personality for instrumental benefits can be a real trade-off.

I agree with that heuristic in general. I would be very cautious regarding the means of ending hatred-as-we-know-it in human nature, and I'm open to the possibility that hatred might be integral (in a way I cannot now see) to the rest of what I value. However, given my understanding of human psychology, I find that claim improbable right now.

My first point was that our values are often the victors of cultural/intellectual/moral combat between the drives given us by the blind idiot god; most of human civilization can be described as the attempt to make humans self-modify away from the drives that lost in the cultural clash. Right now, much of this community values (for example) altruism and rationality over hatred where they conflict, and exerts a certain willpower to keep the other drive vanquished at times. (E.g. repeating the mantra "Politics is the Mind-Killer" when tempted to characterize the other side as evil).

So far, we haven't seen disaster from this weak self-modification against hatred, and we've seen a lot of good (from the perspective of the values we privilege). I take this as some evidence that we can hope to push it farther without losing what we care about (or what we want to care about).

Comment author: Patrick_(orthonormal) 30 January 2009 01:33:40AM 0 points [-]

(E.g. repeating the mantra "Politics is the Mind-Killer" when tempted to characterize the other side as evil)

Uh, I don't mean that literally, though doing up a whole Litany of Politics might be fun.

Comment author: TGGP4 30 January 2009 04:32:22AM -2 points [-]

Maybe it's the types of haunts I've been frequenting lately, but the elimination of all conscious life in the universe doesn't strike me as too terrible at the moment (provided it doesn't shorten my own lifespan).

Comment author: Wei_Dai2 30 January 2009 09:04:53AM 1 point [-]

We can sort the values evolution gave us into the following categories (not necessarily exhaustive). Note that only the first category of values is likely to be preserved without special effort, if Eliezer is right and our future is dominated by singleton FOOM scenarios. But many other values are likely to survive naturally in alternative futures.

- likely values for all intelligent beings and optimization processes (power, resources)
- likely values for creatures with roughly human-level brain power (boredom, knowledge)
- likely values for all creatures under evolutionary competition (reproduction, survival, family/clan/tribe)
- likely values for creatures under evolutionary competition who cannot copy their minds (individual identity, fear of personal death)
- likely values for creatures under evolutionary competition who cannot wirehead (pain, pleasure)
- likely values for creatures with sexual reproduction (beauty, status, sex)
- likely values for intelligent creatures with sexual reproduction (music, art, literature, humor)
- likely values for intelligent creatures who cannot directly prove their beliefs (honesty, reputation, piety)
- values caused by idiosyncratic environmental characteristics (salt, sugar)
- values caused by random genetic/memetic drift and co-evolution (Mozart, Britney Spears, female breasts, devotion to specific religions)

The above probably isn't controversial, rather the disagreement is mainly on the following:

- the probabilities of various future scenarios
- which values, if any, can be preserved using approaches such as FAI
- which values, if any, we should try to preserve

I agree with Roko that Eliezer has made his case in an impressive fashion, but it seems that many of us are still not convinced on these three key points.

Take the last one. I agree with those who say that human values do not form a consistent and coherent whole. Another way of saying this is that human beings are not expected utility maximizers, not as individuals and certainly not as societies. Nor do most of us desire to become expected utility maximizers. Even amongst the readership of this blog, where one might logically expect to find the world's largest collection of EU-maximizer wannabes, few have expressed this desire. But there is no principled way to derive a utility function from something that is not an expected utility maximizer!

Is there any justification for trying to create an expected utility maximizer that will forever have power over everyone else, whose utility function is derived using a more or less arbitrary method from the incoherent values of those who happen to live in the present? That is, besides the argument that it is the only feasible alternative to a null future. Many of us are not convinced of this, neither the "only" nor the "feasible".

Comment author: Wei_Dai 07 February 2011 11:31:59PM 15 points [-]

Wei_Dai2, it looks like you missed Eliezer's main point:

Value isn't just complicated, it's fragile. There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. A single blow and all value shatters. Not every single blow will shatter all value - but more than one possible "single blow" will do so.

It doesn't matter that "many" values survive, if Eliezer's "value is fragile" thesis is correct, because we could lose the whole future if we lose just a single critical value. Do we have such critical values? Maybe, maybe not, but you didn't address that issue.

Comment author: ata 08 February 2011 02:47:17AM 13 points [-]

I like the idea of replying to past selves and think it should be encouraged.

Comment author: Giles 19 June 2011 11:46:19PM 8 points [-]

The added bonus is they can't answer back.

Comment author: DSimon 13 April 2012 02:58:31AM 5 points [-]

"Yeah, past me is terrible, but don't even get me started on future me, sheesh!"

Comment author: Luke_A_Somers 12 February 2013 02:22:19PM *  1 point [-]

Quite. I never expected LW to resemble classic scenes from Homestuck... except, you know, way more functional.

Comment author: michael_vassar3 30 January 2009 11:12:44AM 2 points [-]

- likely values for all intelligent beings and optimization processes (power, resources)

Agree.

- likely values for creatures with roughly human-level brain power (boredom, knowledge)

Disagree. Maybe we don't mean the same thing by boredom?

- likely values for all creatures under evolutionary competition (reproduction, survival, family/clan/tribe)

Mostly agree. Depends somewhat on definition of evolution. Some evolved organisms pursue only 1 or 2 of these but all pursue at least one.

- likely values for creatures under evolutionary competition who cannot copy their minds (individual identity, fear of personal death)

Disagree. Genome equivalents which don't generate terminally valued individual identity in the minds they describe should outperform those that do.

- likely values for creatures under evolutionary competition who cannot wirehead (pain, pleasure)

Disagree. Why not just direct expected utility? Pain and pleasure are easy to find but don't work nearly as well.

- likely values for creatures with sexual reproduction (beauty, status, sex)

Define sexual. Most sexual creatures are too simple to value the first two. Most plausible posthumans aren't sexual in a traditional sense.

- likely values for intelligent creatures with sexual reproduction (music, art, literature, humor)

Disagree.

- likely values for intelligent creatures who cannot directly prove their beliefs (honesty, reputation, piety)

Agree assuming that they aren't singletons. Even then for sub-components.

- values caused by idiosyncratic environmental characteristics (salt, sugar)

Agree.

- values caused by random genetic/memetic drift and co-evolution (Mozart, Britney Spears, female breasts, devotion to specific religions)

Agree. Some caveats about Mozart.

Comment author: Tim_Tyler 30 January 2009 06:50:23PM -3 points [-]

I agree with Eliezer that an imprecisely chosen value function, if relentlessly optimized, is likely to yield a dull universe.

So: you think a "paperclip maximiser" would be "dull"?

How is that remotely defensible? Do you think a "paperclip maximiser" will master molecular nanotechnology, artificial intelligence, space travel, fusion, the art of dismantling planets and stellar farming?

If so, how could that possibly be "dull"? If not, what reason do you have for thinking that those technologies would not help with the making of paper clips?

Apparently-simple processes can easily produce great complexity. That's one of the lessons of Conway's game.

Comment author: Wei_Dai2 30 January 2009 07:08:15PM 0 points [-]

Maybe we don't mean the same thing by boredom?

I'm using Eliezer's definition: a desire not to do the same thing over and over again. For a creature with roughly human-level brain power, doing the same thing over and over again likely means it's stuck in a local optimum of some sort.

Genome equivalents which don't generate terminally valued individual identity in the minds they describe should outperform those that do.

I don't understand this. Please elaborate.

Why not just direct expected utility? Pain and pleasure are easy to find but don't work nearly as well.

I suppose you mean why not value external referents directly instead of indirectly through pain and pleasure. As long as wireheading isn't possible, I don't see why the latter wouldn't work just as well as the former in many cases. Also, the ability to directly value external referents depends on a complex cognitive structure to assess external states, which may be more vulnerable in some situations to external manipulation (i.e. unfriendly persuasion or parasitic memes) than hard-wired pain and pleasure, although the reverse is probably true in other situations. It seems likely that evolution would come up with both.

Define sexual. Most sexual creatures are too simple to value the first two. Most plausible posthumans aren't sexual in a traditional sense.

I mean reproduction where more than one party contributes genetic material and/or parental resources. Even simple sexual creatures probably have some notion of beauty and/or status to help attract/select mates, but for the simplest perhaps "instinct" would be a better word than "value".

- likely values for intelligent creatures with sexual reproduction (music, art, literature, humor)

Disagree.

These all help signal fitness and attract mates. Certainly not all intelligent creatures with sexual reproduction will value exactly music, art, literature, and humor, but it seems likely they will have values that perform the equivalent functions.

Comment author: Manon_de_Gaillande 30 January 2009 07:49:46PM 1 point [-]

@Jotaf: No, you misunderstood - guess I got double-transparent-deluded. I'm saying this:

* Probability is subjectively objective
* Probability is about something external and real (called truth)
* Therefore you can take a belief and call it "true" or "false" without comparing it to another belief
* If you don't match truth well enough (if your beliefs are too wrong), you die
* So if you're still alive, you're not too stupid - you were born with a smart prior, so justified in having it
* So I'm happy with probability being subjectively objective, and I don't want to change my beliefs about the lottery. If the paperclipper had stupid beliefs, it would be dead - but it doesn't, it has evil morals.

* Morality is subjectively objective
* Morality is about some abstract object, a computation that exists in Formalia but nowhere in the actual universe
* Therefore, if you take a morality, you need another morality (possibly the same one) to assess it, rather than a nonmoral object
* Even if there was some light in the sky you could test morality against, it wouldn't kill you for your morality being evil
* So I don't feel on better moral ground than the paperclipper. It has human_evil morals, but I have paperclipper_evil morals - we are exactly equally horrified.

Comment author: Tim_Tyler 30 January 2009 10:22:13PM -2 points [-]

Another way of saying this is that human beings are not expected utility maximizers, not as individuals and certainly not as societies.

They are not perfect expected utility maximizers. However, no expected utility maximizer is perfect. Humans approach the ideal at least as well as other organisms. Fitness maximization is the central explanatory principle in biology - and the underlying idea is the same. The economic framework associated with utilitarianism is general, of broad applicability, and deserves considerable respect.

Comment author: Tim_Tyler 30 January 2009 10:35:06PM -2 points [-]

But there is no principled way to derive a utility function from something that is not an expected utility maximizer!

You can model any agent as an expected utility maximizer - with a few caveats about things such as uncomputability and infinitely complex functions.

You really can reverse-engineer their utility functions too - by considering them as Input-Transform-Output black boxes - and asking what expected utility maximizer would produce the observed transformation.

A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.

Comment author: Wei_Dai2 31 January 2009 01:24:37AM 4 points [-]

A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.

Tim, I've seen you state this before, but it's simply wrong. A utility function is not like a Turing-complete language. It imposes rather strong constraints on possible behavior.

Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

Here's another example: When given (A,B) a program outputs "indifferent". When given (equal chance of A or B, A, B) it outputs "equal chance of A or B". This is also not allowed by EU maximization.
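Wei Dai's first counterexample can be checked mechanically. The following sketch (my illustration, not part of the original comment; the names U, A, B, C follow the comment's notation) brute-forces every strict ranking of the three outcomes and confirms that none of them makes a utility maximizer choose A from (A,B), B from (B,C), and C from (C,A):

```python
from itertools import permutations

def consistent(u):
    # A utility maximizer picks the option with the higher utility, so
    # reproducing the cycle would require U(A) > U(B) > U(C) > U(A).
    return u["A"] > u["B"] and u["B"] > u["C"] and u["C"] > u["A"]

# Try every strict ranking of the three outcomes. (Ties can't satisfy
# strict inequalities either, so distinct values suffice.)
found = any(consistent(dict(zip("ABC", ranks)))
            for ranks in permutations([1, 2, 3]))
print(found)  # False: no utility assignment over {A, B, C} yields the cycle
```

The contradiction is just transitivity: the first two inequalities imply U(A) > U(C), which the third denies.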

Comment author: Tyrrell_McAllister2 31 January 2009 02:05:31AM 0 points [-]

Wei Dai: Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

I don't know the proper rational-choice-theory terminology, but wouldn't modeling this program just be a matter of describing the "space" of choices correctly? That is, rather than making the space of choices {A, B, C}, make it the set containing

(1) = taking A when offered A and B,
(2) = taking B when offered A and B,
(3) = taking B when offered B and C,
(4) = taking C when offered B and C,
(5) = taking C when offered C and A,
(6) = taking A when offered C and A.

Then the revealed preferences (if that's the way to put it) from your experiment would be (1) > (2), (3) > (4), and (5) > (6). Viewed this way, there is no violation of transitivity by the relation >, or at least none revealed so far. I would expect that you could always "smooth over" any transitivity-violation by making an appropriate description of the space of options. In fact, I would guess that there's a standard theory about how to do this while still keeping the description-method as useful as possible for purposes such as prediction.

Comment author: Tim_Tyler 31 January 2009 10:34:59AM -2 points [-]

Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

That is silly - the associated utility function is the one you have just explicitly given. To rephrase:

if (senses contain (A,B)) selecting A has high utility;
else if (senses contain (B,C)) selecting B has high utility;
else if (senses contain (C,A)) selecting C has high utility;

Here's another example: When given (A,B) a program outputs "indifferent". When given (equal chance of A or B, A, B) it outputs "equal chance of A or B". This is also not allowed by EU maximization.

Again, you have just given the utility function by describing it. As for "indifference" being a problem for a maximisation algorithm - it really isn't in the context of decision theory. An agent either takes some positive action, or it doesn't. Indifference is usually modelled as laziness - i.e. a preference for taking the path of least action.

Comment author: tut 15 June 2009 06:40:22AM *  0 points [-]

Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.

That is silly - the associated utility function is the one you have just explicitly given. To rephrase:

No it isn't. It is a list of preferences. The corresponding utility function would be a function U(X) from {A,B,C} to the real numbers such that

1) U(A) > U(B)
2) U(B) > U(C)
3) U(C) > U(A)

But only some lists of preferences can be described by utility functions, and this one can't, because 1) and 2) imply that U(A)>U(C), which contradicts 3).

Comment deleted 15 June 2009 06:42:05AM [-]
Comment author: arundelo 15 June 2009 07:05:41AM 0 points [-]

There's a help link under the box you type in. (Use > for quotes, as in email.)

See also the Markdown documentation.

Comment author: tut 15 June 2009 07:13:32AM 0 points [-]

Thank you.

Comment author: timtyler 27 March 2010 04:05:40PM *  -1 points [-]

"The corresponding utility function would be a function U(X) from {A,B,C} to the real numbers"

I doubt the premise. Where are you getting that from? It wasn't in the specification of the problem.

Comment author: tut 28 March 2010 01:22:12PM 0 points [-]

Where are you getting that from?

From the definition of utility function.

Comment author: timtyler 28 March 2010 02:53:01PM *  -2 points [-]

That seems like a ridiculous reply - it says nothing about the issue there.

Comment author: Sniffnoy 28 March 2010 02:59:42PM *  1 point [-]

Tim, that's what the term means. This other thing that you have called a "utility function", is not in fact a utility function, because that's not what the term means. It's already been pointed out that not every list of preferences can be derived from a utility function. If you want to define or use a generalization of the notion of utility function, you should do so explicitly.

Comment author: timtyler 28 March 2010 05:06:54PM *  0 points [-]

I have no argument with the definition of the term "utility function". It is a function that maps outcomes to utilities - usually real numbers. The function I described did just that. If you don't understand that, then you should explain what aspects of the function's map from outcomes to utilities you don't understand - since it seemed to be a pretty simple one to me.

I don't think that all preferences can be expressed as a utility function. For example, some preferences are uncomputable.

Note that Tyrrell_McAllister2's reply makes exactly the same point as I am making.

Comment author: Sniffnoy 28 March 2010 05:36:46PM 1 point [-]

See, this would have been a lot clearer if you had specified initially that your objection was to the domain.

Comment author: timtyler 28 March 2010 06:54:53PM *  1 point [-]

Sorry if there was any confusion. Here are all the possible outcomes - and their associated (real valued) utilities - laboriously spelled out in a table:

Remembers being presented with (A,B) and chooses A - utility 1.0.

Remembers being presented with (A,B) and chooses B - utility 0.0.

Remembers being presented with (B,C) and chooses B - utility 1.0.

Remembers being presented with (B,C) and chooses C - utility 0.0.

Remembers being presented with (C,A) and chooses C - utility 1.0.

Remembers being presented with (C,A) and chooses A - utility 0.0.

Other action - utility 0.0.
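Tyler's table amounts to enlarging the outcome space so that the remembered menu is part of the outcome. A minimal sketch of that move (my code, not from the thread) shows how a single utility function over (menu, choice) pairs reproduces the apparently cyclic behavior:

```python
# Utility is assigned to (presented menu, choice) pairs, per the table above.
utility = {
    (("A", "B"), "A"): 1.0,
    (("A", "B"), "B"): 0.0,
    (("B", "C"), "B"): 1.0,
    (("B", "C"), "C"): 0.0,
    (("C", "A"), "C"): 1.0,
    (("C", "A"), "A"): 0.0,
}

def choose(menu):
    # Maximize utility over the options on the presented menu;
    # unlisted (menu, option) pairs get the default "other action" utility 0.0.
    return max(menu, key=lambda option: utility.get((menu, option), 0.0))

print(choose(("A", "B")))  # A
print(choose(("B", "C")))  # B
print(choose(("C", "A")))  # C
```

This is exactly the point of contention: the maximizer works only because the "outcome" now includes which menu was presented, which is what Wei Dai objects to as missing the point of the VNM axioms.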

Comment author: Tim_Tyler 01 February 2009 12:28:37AM 0 points [-]

The core problem is simple. The targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere, is the hard part.

The utility function of Deep Blue has 8,000 parts - and contains a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered - but the eventual functional outcome would be much the same - a powerful chess computer.

The "targeting information" is actually a bunch of implementation details that can be effectively recreated from the goal - if that should prove to be necessary.

It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue's utility function while improving it would actually have a crippling negative effect on its future development. Similarly with human values: those are a bunch of implementation details - not the real target.

Comment author: Ghatanathoah 20 May 2012 09:55:51PM *  7 points [-]

The "targeting information" is actually a bunch of implementation details that can be effectively recreated from the goal - if that should prove to be necessary.

It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue's utility function while improving it would actually have a crippling negative effect on its future development. Similarly with human values: those are a bunch of implementation details - not the real target.

If Deep Blue had emotions and desires that were attached to the 8,000 parts of its utility function, if it drew great satisfaction, meaning, and joy from executing those 8,000 parts regardless of whether doing so resulted in winning a chess game, then yes, those 8,000 parts would be precious information that needed to be preserved. It would be a horrible disaster if they were lost. They wouldn't be the programmer's real target, but why in the world would Emotional Deep Blue care about what it's programmer wanted? It wouldn't want to win at chess, it would want to implement those 8,000 parts! That's what its real target is!

For humans, our real target is all those complex values that evolution metaphorically "programmed" into us. We don't care at all about what evolution's "real target" was. If those values were destroyed or replaced then it would be bad for us because those values are what humans really care about. Saying humans care about genetic fitness because we sometimes accidentally enhance it when we are fulfilling our real values is like saying that automobile drivers care about maximizing CO2 content in the atmosphere because they do that by accident when they drive. Humans don't care about genetic fitness, we never have, and hopefully we never will.

In fact, evolution doesn't even have a real target. It's an abstract statistical description of certain trends in the history of life. When we refer to it as "wanting" things and having "goals" that's not because it really does. It's because humans are good at understanding the minds of other humans, but bad at understanding abstract processes, so it helps people understand how evolution works better if we metaphorically describe it as a human-like mind with certain goals, even though that isn't true. Modeling evolution as having a "goal" describes it less accurately, but it makes up for it by making the model easier for a human brain to run.

When you say that preserving those parts of the utility function would have a "crippling negative" effect, you are forgetting an important referent: negative for whom? Evolution has no feelings and desires, so preserving human values would not be crippling or negative for it; nothing is crippling or negative for it, since it doesn't really have any feelings or goals. It literally doesn't care about anything. By contrast, humans do have feelings and desires, so failing to preserve our values would have a crippling and negative effect on our future development, because we would lose something we deeply care about.

Comment author: timtyler 12 February 2013 11:46:31PM *  0 points [-]

The problem with self-improving Deep Blue preserving its 8,000 heuristics is that it might cause it to lose games of chess, to a player with a better representation of its target. If that happens, its 8,000 heuristics will probably turn out to assign very low values to the resulting lost games. Of course, that means that the values weren't very effectively maximized in the first place. Just so - that's one of the problems with working from a dud set of heuristics that poorly encode your target.

We potentially face a similar issue. Plenty of folks would love to live in a world where their every desire is satisfied - and they live in continual ecstasy. However, pursuing such goals in the short-term could easily lead humanity towards long-term extinction. We face much the same problem with our values that self-improving Deep Blue faces with its heuristics.

This issue doesn't have anything particularly to do with the difference between psychological and genetic optimization targets. Both genes and minds value dying out very negatively. They agree on the relevant values.

There's a proposed solution to this problem: pursue universal instrumental values until you have conquered the universe, and then switch to pursuing your "real" values. However it's a controversial proposal. When will you be confident of not facing a stronger opponent with different values? How much does lugging those "true values" around for billions of years actually cost?

My position is that you'll probably never know that you are safe, and that the cost isn't that great - but that any such expense is an intolerable squandering of resources.

Comment author: Ghatanathoah 20 February 2013 06:09:12AM *  -1 points [-]

Both genes and minds value dying out very negatively. They agree on the relevant values.

Minds value not dying out because dying out would mean that they can no longer pursue "true values," not because not dying out is an end in itself. Imagine we were given a choice between:

A) The human race dies out.

B) The human race survives forever, but every human being alive and who will ever live will be tortured 24/7 by a sadistic AI.

Any sane person would choose A. That's because in scenario B the human race, even though it survives, is unable to pursue any of its values, and is forced to pursue one of its major disvalues.

There is no point in the human race surviving if it can't pursue its values.

I personally think the solution for the species is the same as it is for an individual, mix pursuit of terminal and instrumental values. I do this every day and I assume you do as well. I spend lots of time and effort making sure that I will survive and exist in the future. But I also take minor risks, such as driving a car, in order to lead a more fun and interesting life.

Carl's proposal sounds pretty good to me. Yes, it has dangers, as you correctly pointed out. But some level of danger has to be accepted in order to live a worthwhile life.

Comment author: timtyler 21 February 2013 12:50:29AM *  2 points [-]

There is no point in the human race surviving if it can't pursue its values.

It's likely to not be a binary decision. We may well be able to trade preserving values against a better chance of surviving at all. The more we deviate from universal instrumental values, the greater our chances of being wiped out by accidents or aliens. The more we adhere to universal instrumental values, the more of our own values get lost.

Since I see our values heavily overlapping with universal instrumental values, adopting them doesn't seem too bad to me - while all our descendants being wiped out seems pretty negative - although also rather unlikely.

How to deal with this tradeoff is a controversial issue. However, it certainly isn't obvious that we should struggle to preserve our human values - and resist adopting universal instrumental values. That runs a fairly clear risk of screwing up the future for all our descendants.

Comment author: Ghatanathoah 21 February 2013 01:05:32AM -1 points [-]

It's likely to not be a binary decision. We may well be able to trade preserving values against a better chance of surviving at all......

....How to deal with this tradeoff is a controversial issue. However, it certainly isn't obvious that we should struggle to preserve our human values - and resist adopting universal instrumental values. That runs a fairly clear risk of screwing up the future for all our descendants.

If that's the case I don't think we disagree about anything substantial. We probably just disagree about what percentage of resources should go to UIV and what should go to terminal values.

Since I see our values heavily overlapping with universal instrumental values, adopting them doesn't seem too bad to me

You might be right to some extent. Human beings tend to place great terminal value on big, impressive achievements, and quickly colonizing the universe would certainly involve doing that.

Comment author: timtyler 21 February 2013 01:11:45AM *  0 points [-]

If that's the case I don't think we disagree about anything substantial. We probably just disagree about what percentage of resources should go to UIV and what should go to terminal values.

It's a tricky and controversial issue. The cost of preserving our values looks fairly small - but any such expense diverts resources away from the task of surviving - and increases the risk of eternal oblivion. Those who are wedded to the idea of preserving their values will need to do some careful accounting on this issue, if they want the world to run such risks.

While the phrase "universal instrumental values" has the word "instrumental" in it, that's just one way of thinking about them. You could also call them "nature's values" or "god's values". You can contrast them with human values - but it isn't really an "instrumental vs terminal" issue.

Comment author: Wei_Dai2 01 February 2009 01:33:01AM 0 points [-]

Tim and Tyrrell, do you know the axiomatic derivation of expected utility theory? If you haven't read http://cepa.newschool.edu/het/essays/uncert/vnmaxioms.htm or something equivalent, please read it first.

Yes, if you change the spaces of states and choices, maybe you can encode every possible agent as a utility function, not just those satisfying certain axioms of "rationality" (which I put in quotes because I don't necessarily agree with them), but that would be to miss the entire point of expected utility theory, which is that it is supposed to be a theory of rationality, and is supposed to rule out irrational preferences. That means using state and choice spaces where those axiomatic constraints have real-world meaning.
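To make the stakes of those axioms concrete (a toy sketch, not from the linked derivation; the items, fee, and numbers are my own illustrative choices): an agent whose strict preferences cycle violates transitivity and can be "money-pumped" - it will pay a small fee for each upgrade it prefers, and after one trip around the cycle it holds exactly what it started with, strictly poorer.

```python
def prefers(x, y):
    """Cyclic strict preference: A > B, B > C, C > A (violates transitivity)."""
    return (x, y) in {("A", "B"), ("B", "C"), ("C", "A")}

def money_pump(start, fee, rounds):
    """Repeatedly offer the agent the item it prefers, charging a fee per trade."""
    holding, wealth = start, 0.0
    offer_for = {"B": "A", "C": "B", "A": "C"}  # the item preferred to each holding
    for _ in range(rounds):
        offer = offer_for[holding]
        if prefers(offer, holding):  # the agent accepts every "upgrade"
            holding, wealth = offer, wealth - fee
    return holding, wealth

holding, wealth = money_pump("B", fee=1.0, rounds=3)
print(holding, wealth)  # back at "B", but 3.0 poorer
```

The axioms rule this pattern out precisely so that "maximizing expected utility" can serve as a standard of rationality rather than a redescription of arbitrary behaviour.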

Comment author: Nick_Tarleton 01 February 2009 02:22:24AM 0 points [-]

Wei: Most people in most situations would reject the idea that the set of options presented is part of the outcome - would say that (A,B,C) is a better outcome space than the richer one Tyrrell suggested - so expected utility theory is applicable. A set of preferences can never be instrumentally irrational, but it can be unreasonable as judged by another part of your morality.

Comment author: Russell_Wallace 01 February 2009 02:48:40AM 0 points [-]

Specifically, the point of utility theory is the attempt to predict the actions of complex agents by dividing them into two layers:

1. Simple list of values
2. Complex machinery for attaining those values

The idea being that if you can't know the details of the machinery, successful prediction might be possible by plugging the values into your own equivalent machinery.

Does this work in real life? In practice it works well for simple agents, or complex agents in simple/narrow contexts. It works well for Deep Blue, or for Kasparov on the chessboard. It doesn't work for Kasparov in life. If you try to predict Kasparov's actions away from the chessboard using utility theory, it ends up as epicycles; every time you see him taking a new action you can write a corresponding clause in your model of his utility function, but the model has no particular predictive power.

In hindsight we shouldn't really have expected otherwise; simple models in general have predictive power only in simple/narrow contexts.

Comment author: timtyler 13 February 2013 12:11:33AM *  2 points [-]

In hindsight we shouldn't really have expected otherwise; simple models in general have predictive power only in simple/narrow contexts.

Counter-example 1: gene-frequency maximization in biology. A tremendously simple principle with enormous explanatory power.

Counter-example 2: Entropy maximization. Another tremendously simple principle with enormous explanatory power.

Note that both are maximization principles - the very type of principle whose limitations you are arguing for.

Comment author: Wei_Dai2 01 February 2009 03:06:13AM 0 points [-]

To expand on my categorization of values a bit more, it seems clear to me that at least some human values do not deserve to be forever etched into the utility function of a singleton. Those caused by idiosyncratic environmental characteristics like taste for salt and sugar, for example. To me, these are simply accidents of history, and I wouldn't hesitate (too much) to modify them away in myself, perhaps to be replaced by more interesting and exotic tastes.

What about reproduction? It's a value that my genes programmed into me for their own purposes, so why should I be obligated to stick with it forever?

Or consider boredom. Eventually I may become so powerful that I can easily find the globally optimal course of action for any set of goals I might have, and notice that the optimal course of action often involves repetition of some kind. Why should I retain my desire not to do the same thing over and over again, which was programmed into me by evolution back when minds had a tendency to get stuck in local optimums?

And once I finally came to that realization, I felt less ashamed of values that seemed 'provincial' - but that's another matter.

Eliezer, I wonder if this actually has more to do with your current belief that rationality equals expected utility maximization. For an expected utility maximizer, there is no distinction between 'provincial' and 'universal' values, and certainly no reason to ever feel ashamed of one's values. One just optimizes according to whatever values one happens to have. But as I argued before, human beings are not expected utility maximizers, and I don't see why we should try to emulate them, especially this aspect.

Comment author: Tim_Tyler 01 February 2009 12:27:28PM 1 point [-]

In dealing with your example, I didn't "change the space of states or choices". All I did was specify a utility function. The input states and output states were exactly as you specified them to be. The agent could see what choices were available, and then it picked one of them - according to the maximum value of the utility function I specified.

The corresponding real world example is an agent that prefers Boston to Atlanta, Chicago to Boston, and Atlanta to Chicago. I simply showed how a utility maximiser could represent such preferences. Such an agent would drive in circles - but that is not necessarily irrational behaviour.
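To spell that construction out (a toy sketch; the cities and utility numbers are illustrative, not from the original comment): let the utility function take the current state as an input, and the "circular" preferences become an ordinary maximization at every step.

```python
UTILITY = {
    # (current city, option) -> utility of choosing that option
    ("Atlanta", "Boston"): 1.0, ("Atlanta", "Chicago"): 0.0,
    ("Boston", "Chicago"): 1.0, ("Boston", "Atlanta"): 0.0,
    ("Chicago", "Atlanta"): 1.0, ("Chicago", "Boston"): 0.0,
}

def choose(current, options):
    """Pick the option with maximum utility given the current state."""
    return max(options, key=lambda o: UTILITY[(current, o)])

# The agent "drives in circles", yet every step maximizes the same fixed function:
city = "Atlanta"
route = [city]
for _ in range(3):
    city = choose(city, [c for c in ("Atlanta", "Boston", "Chicago") if c != city])
    route.append(city)
print(route)  # ['Atlanta', 'Boston', 'Chicago', 'Atlanta']
```

The trick is simply that utility is defined over (state, choice) pairs rather than over destinations alone, so no fixed ranking of the three cities is ever contradicted.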

Of course much of the value of expected utility theory arises when you use short and simple utility functions - however, if you are prepared to use more complex utility functions, there really are very few limits on what behaviours can be represented.

The possibility of using complex utility functions does not in any way negate the value of the theory for providing a model of rational economic behaviour. In economics, the utility function is pretty fixed: maximise profit, with specified risk aversion and future discounting. That specifies an ideal which real economic agents approximate. Plugging in an arbitrary utility function is simply an illegal operation in that context.

Comment author: Tim_Tyler 03 February 2009 06:52:00PM 1 point [-]

The analogy between the theory that humans behave like expected utility maximisers - and the theory that atoms behave like billiard balls could be criticised - but it generally seems quite appropriate to me.

Comment author: PhilGoetz 11 August 2010 10:17:43PM *  3 points [-]

This is a critical post. I disagree with where Eliezer has gone from here; but I'm with him up to and including this point. This post is a good starting point for a dialogue.

Comment author: Randolf 15 September 2011 10:35:51PM *  0 points [-]

I don't know, or maybe I don't understand your point. I would find a quiet and silent, post-human world very beautiful in a way. A world where the only reminders of the great, yet long gone civilisation would be ancient ruins.. Super structures which once were the statues of human prosperity and glory, now standing along with nothing but trees and plants, forever forgotten. Simply sleeping in a never ending serenity and silence...

Don't you too, find such a future very beautiful in an eerie way? Even if there is no sentient being to perceive it at that time, the fact that such a future may exist one day, and that it can now be perceived through art and imagination, is where its beauty truly lies.

Comment author: JoshuaZ 15 September 2011 10:41:50PM *  1 point [-]

I suspect that you are imagining this world as good because you can't actually separate your imagined observer from the world. The world you are talking about is not just a failure of humanity; it is a world where we have failed so much that nothing is alive to witness our failure.

Comment author: Randolf 16 September 2011 10:45:58PM *  0 points [-]

I don't think you can call such a world good or perfect, but I don't think it's all bad either. I guess you could call it neutral.

I mean, I don't see that world as a big failure, if a failure at all. No civilization will be there forever*, but the one I mentioned had at least achieved something in its time: it had once been glorious. While it left its statues, it still managed to keep the world habitable for life and other species. (note how I mentioned trees and plants growing on the ruins). To put it simply, it was a beautiful civilization that left a beautiful world.. It isn't fair to call it a failure only because it wasn't eternal.

*Who am I to say that?

Comment author: APMason 16 September 2011 11:31:21PM 3 points [-]

I guess you could call it neutral.

I'll only speak for myself, but 'everybody dead' gives an output nowhere near zero on my utility function. Everybody dead is awful. It's not the worst imaginable outcome, but it is really really really low in my preference ordering. I can see why you would think it's neutral - there's nobody to be happy but there's nobody to suffer either. However, if you think that people dying is a bad thing in itself, this outcome really is horrifying.