"Except to remark on how many different things must be known to constrain the final answer."
What would you estimate the probability of each thing being correct is?
What about "near-human" morals, like, say, Kzinti: Where the best of all possible words contains hierarchies, duels to the death, and subsentient females; along with exploration, technology, and other human-like activities. Though I find their morality repugnant for humans, I can see that they have the moral "right" to it. Is human morality, then, in some deep sense better than those?
It is better in the sense that it is ours. For an agent with values, embedded in a much greater universe that might contain other agents with other values, it is an inescapable fact that the only thing making one particular set of values matter more to that agent is that those values are its own.
We happen to have, as one of our values, respect for others' values. But this particular value is self-contradictory when taken to its natural conclusion. To take it to its conclusion would be to say that nothing matters in the end, not even what we ourselves care about. Consider the case of an alien being whose values include disrespecting others' values. Is the human value placed on respecting others' values in some deep sense better than this being's?
At some point you have to stop and say, "Sorry, my own values take precedence over yours when they are incompatible to this degree. I cannot respect this value of yours." And what gives you the justification to do this? Because it is your choice, your values. Ultimately, we must be chauvinists on some level if we are to have any values at all. Otherwise, what's wrong with a sociopath who murders for joy? How can we say that their values are wrong, except to say that their values contradict our own?
I think Eliezer is due for congratulation here. This series is nothing short of a mammoth intellectual achievement, integrating modern academic thought about ethics, evolutionary psychology and biases with the provocative questions of the transhumanist movement. I've learned a staggering amount from reading this OB series, especially about human values and my own biases and mental blank spots.
I hope we can all build on this. Really. There's a lot left to do, especially for transhumanists and those who hope for a significantly better future than the best available in today's world. For those who have more pedestrian ambitions for the future (i.e. most of the world), this series provides a stark warning as to how the well-intentioned may destroy everything.
Bravo!
Pearson, it's not that kind of chaining. More like trying to explain to someone why their randomly chosen lottery ticket won't win (big space, small target, poor aim) when their brain manufactures argument after argument after different argument for why they'll soon be rich.
The core problem is simple: if the targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere is the hard part.
What are the odds that every proof of God's existence is wrong, when there are so many proofs? Pretty high. A selective search for plausible-sounding excuses won't change reality itself. But knowing the specific refutations - being able to pinpoint the flaws in every supposed proof - that might take some study.
I have read and considered all of Eliezer's posts, and still disagree with him on this, his grand conclusion. Eliezer, do you think the universe was terribly unlikely and therefore terribly lucky to have coughed up human-like values, rather than some other values? Or is it only in the stage after ours where such rare good values were unlikely to exist?
I imagine a distant future with just a smattering of paper clip maximizers -- having risen in different galaxies with slightly different notions of what a paperclip is -- might actually be quite interesting. But even so, so what? Screw the paperclips, even if they turn out to be more elegant and interesting than us!
Robin, I discussed this in The Gift We Give To Tomorrow as a "moral miracle" that of course isn't really a miracle at all. We're judging the winding path that evolution took to human value, and judging it as fortuitous using our human values. (See also, "Where Recursive Justification Hits Bottom", "The Ultimate Source", "Created Already In Motion", etcetera.)
RH: "I have read and considered all of Eliezer's posts, and still disagree with him on this his grand conclusion. Eliezer, do you think the universe was terribly unlikely and therefore terribly lucky to have coughed up human-like values, rather than some other values?"
Yes, it almost certainly was, because of the way we evolved. There are two distinct events here:
1. A species evolves to intelligence with the particular values we have.
2. Given that a species evolves to intelligence with some particular values, it decides that it likes those values.
Evolution (as an algorithm) doesn't work on the indestructible. Therefore all naturally-evolved beings must be fragile to some extent, and must have evolved to value protecting their fragility.
Yes, a designed life form can have paper clip values, but I don't think we'll encounter any naturally occurring beings like this. So our provincial little values may not be so provincial after all, but common on many planets.
Ian C.: [i]"Yes, a designed life form can have paper clip values, but I don't think we'll encounter any naturally occurring beings like this. So our provincial little values may not be so provincial after all, but common on many planets."[i] Almost all life forms (especially simpler ones) are sort of paperclip maximizers, they just make copies of themselves ad infinitum. If life could leave this planet and use materials more efficiently, it would consume everything. Good for us evolution couldn't optimize them to such an extent.
Ian: some individual values of other naturally-evolved beings may be recognizable, but that doesn't mean that the value system as a whole will be.
I'd expect that carnivores, or herbivores, or non-social creatures, or hermaphrodites, or creatures with a different set of senses - would probably have some quite different values.
And there can be different brain architectures, different social/political organisation, different transwhateverism technology, etc.
Roko:
Not so fast. We like some of our evolved values at the expense of others. Ingroup-outgroup dynamics, the way we're most motivated only when we have someone to fear and hate: this too is an evolved value, and most of the people here would prefer to do away with it if we can.
The interesting part of moral progress is that the values etched into us by evolution don't really need to be consistent with each other, so as we become more reflective and our environment changes to force new situations upon us, we realize that they conflict with one another. The analysis of which values have been winning and which have been losing (in different times and places) is another fascinating one...
"Ingroup-outgroup dynamics, the way we're most motivated only when we have someone to fear and hate: this too is an evolved value, and most of the people here would prefer to do away with it if we can."
So you would want to eliminate your special care for family, friends, and lovers? Or are you really just saying that your degree of ingroup-outgroup concern is less than average and you wish everyone was as cosmopolitan as you? Or, because ingroup-concern is indexical, it results in different values for different ingroups, so you wish every shared ...
I suspect it gets worse. Eliezer seems to lean heavily on the psychological unity of humankind, but there's a lot of room for variance within that human dot. My morality is a human morality, but that doesn't mean I'd agree with a weighted sum across all possible extrapolated human moralities. So even if you preserve human morals and metamorals, you could still end up with a future we'd find horrifying (albeit better than a paperclip galaxy). It might be said that that's only a Weirdtopia, that you're horrified at first, but then you see that it's actuall...
I'll be horrified for as long as I damn well please.
Well, okay, but the Weirdtopia thesis under consideration makes the empirically falsifiable prediction that "as long as you damn well please" isn't actually a very long time. Also, I call scope neglect: your puny human brain can model some aspects of your local environment, which is a tiny fraction of this Earth, but you're simply not competent to judge the entire future, which is much larger.
I would like to point out that you're probably replying to your past self. This gives me significant amusement.
This post seems almost totally wrong to me. For one thing, its central claim - that without human values the future would, with high probability, be dull - is not even properly defined.
To be a little clearer, one would need to say something like: if you consider a specified enumeration over the space of possible utility functions, a random small sample from that space would be "dull" (it might help to say a bit more about what dullness means too, but that is a side issue for now).
That claim might well be true for typical "shortest-first"...
Carl:
I don't think that automatic fear, suspicion and hatred of outsiders is a necessary prerequisite to a special consideration for close friends, family, etc. Also, yes, outgroup hatred makes cooperation on large-scale Prisoner's Dilemmas even harder than it generally is for humans.
But finally, I want to point out that we are currently wired so that we can't get as motivated to face a huge problem if there's no villain to focus fear and hatred on. The "fighting" circuitry can spur us to superhuman efforts and successes, but it doesn't seem to...
@Eliezer: Can you expand on the "less ashamed of provincial values" part?
@Carl Shuman: I don't know about him, but for myself, HELL YES I DO. Family - they're just randomly selected by the birth lottery. Lovers - falling in love is some weird stuff that happens to you regardless of whether you want it, reaching into your brain to change your values: like, dude, ew - I want affection and tenderness and intimacy and most of the old interpersonal fun and much more new interaction, but romantic love can go right out of the window with me. Friends - I...
Patrick,
Those are instrumental reasons, and could be addressed in other ways. I was trying to point out that giving up big chunks of our personality for instrumental benefits can be a real trade-off.
http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
Jordan: "I imagine a distant future with just a smattering of paper clip maximizers -- having risen in different galaxies with slightly different notions of what a paperclip is -- might actually be quite interesting."
That's exactly how I imagine the distant future. And I very much like to point to the cyclic cellular automaton (java applet) as a visualization. Actually, I speculate that we live in a small part of the space-time continuum not yet eaten by a paper clip maximizer. Now you may ask: Why don't we see huge blobs of paper clip maximizers...
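For the curious, here is a minimal sketch of the cyclic cellular automaton idea mentioned above, written in Python rather than as a Java applet; the ring size, number of states, and step count are arbitrary illustrative choices, not anything taken from the original applet:

    import random

    # Minimal 1-D cyclic cellular automaton: K competing states on a ring,
    # where a cell is "eaten" by any neighbour holding its successor state.
    K = 4        # number of competing states ("paperclip variants")
    WIDTH = 60   # cells in the ring
    STEPS = 20

    cells = [random.randrange(K) for _ in range(WIDTH)]

    def step(cells):
        new = []
        for i, s in enumerate(cells):
            left = cells[(i - 1) % len(cells)]
            right = cells[(i + 1) % len(cells)]
            # Advance to the successor state if a neighbour already holds it.
            new.append((s + 1) % K if (s + 1) % K in (left, right) else s)
        return new

    for _ in range(STEPS):
        print("".join(str(s) for s in cells))
        cells = step(cells)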
Probability of an evolved alien species:
(A) Possessing analogues of pleasure and pain: HIGH. Reinforcement learning is simpler than consequentialism for natural selection to stumble across.
(B) Having a human idiom of boredom that desires a steady trickle of novelty: MEDIUM. This has to do with acclimation and adjustment as a widespread neural idiom, and the way that we try to abstract that as a moral value. It's fragile but not impossible.
(C) Having a sense of humor: LOW.
Probability of an expected paperclip maximizer having analogous properties, if it originated as a self-improving code soup (rather than by natural selection), or if it was programmed over a competence threshold by foolish humans and then exploded:
(A) MEDIUM
(B) LOW
(C) LOW
the vast majority of possible expected utility maximizers, would only engage in just so much efficient exploration, and spend most of its time exploiting the best alternative found so far, over and over and over.
I'm not convinced of that. First, "vast majority" needs to use an appropriate measure, one that is applicable to evolutionary results. If, when two equally probable mutations compete in the same environment, one of those mutations wins, making the other extinct, then the winner needs to be assigned the far greater weight. So, for example,...
it would seem easier to build (or mutate into) something that keeps going forever than it is to build something that goes for a while then stops.
On reflection, I realize this point might be applied to repetitive drudgery. But I was applying it to the behavior "engage in just so much efficient exploration." My point is that it may be easier to mutate into something that explores and explores and explores, than it would be to mutate into something that explores for a while then stops.
Thanks for the probability assessments. What is missing are supporting arguments. What you think is relatively clear - but why you think it is not.
...and what's the deal with mentioning a "sense of humour"? What has that to do with whether a civilization is complex and interesting? Whether our distant descendants value a sense of humour or not seems like an irrelevance to me. I am more concerned with whether they "make it" or not - factors affecting whether our descendants outlast the exploding sun - or whether the seed of human civilisation is obliterated forever.
@Jordan - agreed.
I think the big difference in expected complexity is between sampling the space of possible singletons' algorithms' results and sampling the space of competitive entities. I agree with Eliezer that an imprecisely chosen value function, if relentlessly optimized, is likely to yield a dull universe. To my mind the key is that the ability to relentlessly optimize one function only exists if a singleton gets and keeps an overwhelming advantage over everything else. If this does not happen, we get competing entities with the computationally d...
What if I want a wonderful and non-mysterious universe? Your current argument seems to be that there's no such thing. I don't follow why this is so. "Fun" (defined as desire for novelty) may be the simplest way to build a strategy of exploration, but it's not obvious that it's the only one, is it?
A series on "theory of motivation" that explores other options besides novelty and fun as prime directors of optimization processes that can improve the universe (in their and maybe even our eyes).
"This talk about "'right' means right" still makes me damn uneasy. I don't have more to show for it than "still feels a little forced" - when I visualize a humane mind (say, a human) and a paperclipper (a sentient, moral one) looking at each other in horror and knowing there is no way they could agree about whether using atoms to feed babies or make paperclips, I feel wrong. I think about the paperclipper in exactly the same way it thinks about me! Sure, that's also what happens when I talk to a creationist, but we're trying to app...
@Jotaf, "Order > chaos, no?"
Imagine God shows up tomorrow. "Everyone, hey, yeah. So I've got this other creation and they're super moral. Man, moral freaks, let me tell you. Make Mennonites look Shintoist. And, sure, I like them better than you. It's why I'm never around, sorry. Thing is, their planet is about to get eaten by a supernova. So.. I'm giving them the moral green light to invade Earth. It's been real."
I'd be the first to sign up for the resistance. Who cares about moral superiority? Are we more moral than a paperclip maxi...
Carl:
Those are instrumental reasons, and could be addressed in other ways.
I wouldn't want to modify/delete hatred for instrumental reasons, but on behalf of the values that seem to clash almost constantly with hatred. Among those are the values I meta-value, including rationality and some wider level of altruism.
I was trying to point out that giving up big chunks of our personality for instrumental benefits can be a real trade-off.
I agree with that heuristic in general. I would be very cautious regarding the means of ending hatred-as-we-know-it in human ...
(E.g. repeating the mantra "Politics is the Mind-Killer" when tempted to characterize the other side as evil)
Uh, I don't mean that literally, though doing up a whole Litany of Politics might be fun.
We can sort the values evolution gave us into the following categories (not necessarily exhaustive). Note that only the first category of values is likely to be preserved without special effort, if Eliezer is right and our future is dominated by singleton FOOM scenarios. But many other values are likely to survive naturally in alternative futures.
Wei_Dai2, it looks like you missed Eliezer's main point:
Value isn't just complicated, it's fragile. There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. A single blow and all value shatters. Not every single blow will shatter all value - but more than one possible "single blow" will do so.
It doesn't matter that "many" values survive, if Eliezer's "value is fragile" thesis is correct, because we could lose the whole future if we lose just a single critical value. Do we have such critical values? Maybe, maybe not, but you didn't address that issue.
Agree.
Disagree. Maybe we don't mean the same thing by boredom?
Mostly agree. Depends somewhat on definition of evolution. Some evolved organisms pursue only 1 or 2 of these but all pursue at least one.
I agree with Eliezer that an imprecisely chosen value function, if relentlessly optimized, is likely to yield a dull universe.
So: you think a "paperclip maximiser" would be "dull"?
How is that remotely defensible? Do you think a "paperclip maximiser" will master molecular nanotechnology, artificial intelligence, space travel, fusion, the art of dismantling planets and stellar farming?
If so, how could that possibly be "dull"? If not, what reason do you have for thinking that those technologies would not help with t...
Maybe we don't mean the same thing by boredom?
I'm using Eliezer's definition: a desire not to do the same thing over and over again. For a creature with roughly human-level brain power, doing the same thing over and over again likely means it's stuck in a local optimum of some sort.
Genome equivalents which don't generate terminally valued individual identity in the minds they describe should outperform those that do.
I don't understand this. Please elaborate.
Why not just direct expected utility? Pain and pleasure are easy to find but don't work nearly as we...
@Jotaf: No, you misunderstood - guess I got double-transparent-deluded. I'm saying this:
So I'm happy with probability being subjectively objective, a
Another way of saying this is that human beings are not expected utility maximizers, not as individuals and certainly not as societies.
They are not perfect expected utility maximizers. However, no expected utility maximizer is perfect. Humans approach the ideal at least as well as other organisms. Fitness maximization is the central explanatory principle in biology - and the underlying idea is the same. The economic framework associated with utilitarianism is general, of broad applicability, and deserves considerable respect.
But there is no principled way to derive a utility function from something that is not an expected utility maximizer!
You can model any agent as an expected utility maximizer - with a few caveats about things such as uncomputability and infinitely complex functions.
You really can reverse-engineer their utility functions too - by considering them as Input-Transform-Output black boxes - and asking what expected utility maximizer would produce the observed transformation.
A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.
A utility function is like a program in a Turing-complete language. If the behaviour can be computed at all, it can be computed by a utility function.
Tim, I've seen you state this before, but it's simply wrong. A utility function is not like a Turing-complete language. It imposes rather strong constraints on possible behavior.
Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced b...
Wei Dai: Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.
I don't know the proper rational-choice-theory terminology, but wouldn't modeling this program just be a matter of describing the "space" of choices correctly? That is, rather than making the space of choices {A, B, C}, make it the set containing
(1) = taking A when offered A and B, (2) ...
Consider a program which when given the choices (A,B) outputs A. If you reset it and give it choices (B,C) it outputs B. If you reset it again and give it choices (C,A) it outputs C. The behavior of this program cannot be reproduced by a utility function.
That is silly - the associated utility function is the one you have just explicitly given. To rephrase:
if (senses contain (A,B)) selecting A has high utility;
else if (senses contain (B,C)) selecting B has high utility;
else if (senses contain (C,A)) selecting C has high utility;
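To make that rephrasing concrete, here is a small sketch (my own illustration, not anyone's canonical formalism), assuming the offered menu counts as part of the sensed state; the function names are arbitrary:

    # If the utility function is allowed to condition on the offered menu,
    # the "intransitive" chooser is trivially reproduced by maximization.
    def utility(menu, pick):
        preferred = {
            frozenset("AB"): "A",
            frozenset("BC"): "B",
            frozenset("CA"): "C",
        }
        return 1.0 if preferred[frozenset(menu)] == pick else 0.0

    def choose(menu):
        # Pick whichever available option maximizes utility given this menu.
        return max(menu, key=lambda pick: utility(menu, pick))

    assert choose(("A", "B")) == "A"
    assert choose(("B", "C")) == "B"
    assert choose(("C", "A")) == "C"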
Here's another example:...
The core problem is simple: if the targeting information disappears, so does the good outcome. Knowing enough to refute every fallacious remanufacturing of the value-information from nowhere is the hard part.
The utility function of Deep Blue has 8,000 parts - and contains a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the information in the original utility function are not recovered - but the eventual functional outcome...
The "targeting information" is actually a bunch of implementation details that can be effectively recreated from the goal - if that should prove to be necessary.
It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue's utility function while improving it would actually have a crippling negative effect on its future development. Similarly with human values: those are a bunch of implementation details - not the real target.
If Deep Blue had emotions and desires that were attached to the 8,000 parts of its utility function, if it drew great satisfaction, meaning, and joy from executing those 8,000 parts regardless of whether doing so resulted in winning a chess game, then yes, those 8,000 parts would be precious information that needed to be preserved. It would be a horrible disaster if they were lost. They wouldn't be the programmer's real target, but why in the world would Emotional Deep Blue care about what its programmer wanted? It wouldn't want to win at chess; it would want to implement those 8,000 parts! That's what its real target is!
For humans, our real target is all those complex values that evolut...
Tim and Tyrrell, do you know the axiomatic derivation of expected utility theory? If you haven't read http://cepa.newschool.edu/het/essays/uncert/vnmaxioms.htm or something equivalent, please read it first.
Yes, if you change the spaces of states and choices, maybe you can encode every possible agent as a utility function, not just those satisfying certain axioms of "rationality" (which I put in quotes because I don't necessarily agree with them), but that would be to miss the entire point of expected utility theory, which is that it is supposed ...
Wei: Most people in most situations would reject the idea that the set of options presented is part of the outcome - would say that (A,B,C) is a better outcome space than the richer one Tyrrell suggested - so expected utility theory is applicable. A set of preferences can never be instrumentally irrational, but it can be unreasonable as judged by another part of your morality.
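To make the disagreement concrete, here is a brute-force check (my own illustration, from neither commenter), assuming the plain outcome space {A, B, C}, one menu-independent utility per outcome, and strict rankings only:

    from itertools import permutations

    # Observed behavior: A from (A,B), B from (B,C), C from (C,A).
    observed = {("A", "B"): "A", ("B", "C"): "B", ("C", "A"): "C"}

    def consistent(utilities):
        # Does maximizing these fixed utilities reproduce every observed choice?
        return all(max(menu, key=utilities.get) == pick
                   for menu, pick in observed.items())

    # Try every strict ranking of the three outcomes.
    print(any(consistent(dict(zip(order, (3, 2, 1))))
              for order in permutations("ABC")))   # False

No strict ranking over the bare outcomes reproduces the behavior; it only becomes representable once the menu itself is folded into the outcome description, which is exactly the move under dispute.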
Specifically, the point of utility theory is the attempt to predict the actions of complex agents by dividing them into two layers: the values the agent pursues (its utility function), and the optimization machinery that pursues them.
The idea being that if you can't know the details of the machinery, successful prediction might be possible by plugging the values into your own equivalent machinery.
Does this work in real life? In practice it works well for simple agents, or complex agents in simple/narrow contexts. It works well for Deep Blue, or for Kasparov on the chessboard. It doesn't w...
To expand on my categorization of values a bit more, it seems clear to me that at least some human values do not deserve to be forever etched into the utility function of a singleton. Those caused by idiosyncratic environmental characteristics like taste for salt and sugar, for example. To me, these are simply accidents of history, and I wouldn't hesitate (too much) to modify them away in myself, perhaps to be replaced by more interesting and exotic tastes.
What about reproduction? It's a value that my genes programmed into me for their own purposes, so why...
In dealing with your example, I didn't "change the space of states or choices". All I did was specify a utility function. The input states and output states were exactly as you specified them to be. The agent could see what choices were available, and then it picked one of them - according to the maximum value of the utility function I specified.
The corresponding real world example is an agent that prefers Boston to Atlanta, Chicago to Boston, and Atlanta to Chicago. I simply showed how a utility maximiser could represent such preferences. Su...
The analogy between the theory that humans behave like expected utility maximisers - and the theory that atoms behave like billiard balls could be criticised - but it generally seems quite appropriate to me.
This is a critical post. I disagree with where Eliezer has gone from here; but I'm with him up to and including this point. This post is a good starting point for a dialogue.
I don't know, or maybe I don't understand your point. I would find a quiet and silent, post-human world very beautiful in a way. A world where the only reminders of the great, yet long gone civilisation would be ancient ruins... Super structures which once were the statues of human prosperity and glory, now standing along with nothing but trees and plants, forever forgotten. Simply sleeping in a never-ending serenity and silence...
Don't you, too, find such a future very beautiful in an eerie way? Even if there is no sentient being to perceive it at that time, the fact that such a future may exist one day, and that it can now be perceived through art and imagination, is where its beauty truly lies.
Value isn't fragile because value isn't a process. Only processes can be fragile or robust.
Winning the lottery is a fragile process, because it had to be done all in one go. Contrast that with the process of writing down a 12-digit phone number: if you try to memorise the whole number and then write it down, you are likely to make a mistake, due to Miller's law. Writing digits down one at a time, as you hear them, is more robust. Being able to ask for corrections, or having errors pointed out to you, is more robust still.
Processes that ar...
This is the key point on which I disagree with Eliezer. I don't disagree with what he literally says here, but with what he implies and what he concludes. The key context he isn't giving here is that what he says here only applies fully to a hard-takeoff AI scenario. Consider what he says about boredom:
...Surely, at least boredom has to be a universal value. It evolved in humans because it's valuable, right? So any mind that doesn't share our dislike of repetition, will fail to thrive in the universe and be eliminated...
If you are familiar with the differ
I think some of the assumptions here have led you to false conclusions. For one, you seem to assume that because humans share some values, all humans have an identical value system. This is just plain wrong: humans each have their own unique value "signature", more or less like a fingerprint. If there is one thing that you place more value weight on than a person who is otherwise identical, you are different. That being said, does your argument still hold with this, albeit minor in the grand scheme of things, heterogeneity added to human valu...
Regarding this post and the complexity of value:
Taking a paperclip maximizer as a starting point, the machine can be divided up into two primary components: the value function, which dictates that more paperclips is a good thing, and the optimizer that increases the universe's score with respect to that value function. What we should aim for, in my opinion, is to become the value function to a really badass optimizer. If we build a machine that asks us how happy we are, and then does everything in its power to improve that rating (so long as it doesn't inv...
I'm not sure if 'fragile' is the right word; removing one component might be devastating, but in my opinion that reflects more on the importance of each piece than on the fragility of the actual system. The way I see it, it's something like a tower with 4 large beams for support: if one takes out a single piece, it would be worse than if one removed a piece from a tower with 25 smaller beams supporting it.
But other than that, thank you very much for the informative article.
Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.
Did anyone notice that this flatly contradicts Three Worlds Collide? The superhappies and babyeaters don't inherit from human morals at all (let alone detailedly and reliably), but the humans still regard the aliens as moral patients, having meddling preferences for the babyeater children to not be eaten, rather than being as indifferent as they would be to heaps of pebbles being scattered.
(Yes, it was fiction,...
I don't think Three Worlds Collide should be interpreted as having anything to do with actual aliens, any more than The Scorpion and the Frog should be interpreted as having anything to do with actual scorpions and frogs. TWC uses different alien species to allegorically explore human differences of opinion.
Consider the incredibly important human value of "boredom" - our desire not to do "the same thing" over and over and over again. You can imagine a mind that contained almost the whole specification of human value, almost all the morals and metamorals, but left out just this one thing -
- and so it spent until the end of time, and until the farthest reaches of its light cone, replaying a single highly optimized experience, over and over and over again.
For what it's worth, I don't buy this. To my intuitions, it seems like the whole universe experiencing...
If I had to pick a single statement that relies on more Overcoming Bias content I've written than any other, that statement would be:
Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.
"Well," says the one, "maybe according to your provincial human values, you wouldn't like it. But I can easily imagine a galactic civilization full of agents who are nothing like you, yet find great value and interest in their own goals. And that's fine by me. I'm not so bigoted as you are. Let the Future go its own way, without trying to bind it forever to the laughably primitive prejudices of a pack of four-limbed Squishy Things -"
My friend, I have no problem with the thought of a galactic civilization vastly unlike our own... full of strange beings who look nothing like me even in their own imaginations... pursuing pleasures and experiences I can't begin to empathize with... trading in a marketplace of unimaginable goods... allying to pursue incomprehensible objectives... people whose life-stories I could never understand.
That's what the Future looks like if things go right.
If the chain of inheritance from human (meta)morals is broken, the Future does not look like this. It does not end up magically, delightfully incomprehensible.
With very high probability, it ends up looking dull. Pointless. Something whose loss you wouldn't mourn.
Seeing this as obvious, is what requires that immense amount of background explanation.
And I'm not going to iterate through all the points and winding pathways of argument here, because that would take us back through 75% of my Overcoming Bias posts. Except to remark on how many different things must be known to constrain the final answer.
Consider the incredibly important human value of "boredom" - our desire not to do "the same thing" over and over and over again. You can imagine a mind that contained almost the whole specification of human value, almost all the morals and metamorals, but left out just this one thing -
- and so it spent until the end of time, and until the farthest reaches of its light cone, replaying a single highly optimized experience, over and over and over again.
Or imagine a mind that contained almost the whole specification of which sort of feelings humans most enjoy - but not the idea that those feelings had important external referents. So that the mind just went around feeling like it had made an important discovery, feeling it had found the perfect lover, feeling it had helped a friend, but not actually doing any of those things - having become its own experience machine. And if the mind pursued those feelings and their referents, it would be a good future and true; but because this one dimension of value was left out, the future became something dull. Boring and repetitive, because although this mind felt that it was encountering experiences of incredible novelty, this feeling was in no wise true.
Or the converse problem - an agent that contains all the aspects of human value, except the valuation of subjective experience. So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so. This, I admit, I don't quite know to be possible. Consciousness does still confuse me to some extent. But a universe with no one to bear witness to it, might as well not be.
Value isn't just complicated, it's fragile. There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. A single blow and all value shatters. Not every single blow will shatter all value - but more than one possible "single blow" will do so.
And then there are the long defenses of this proposition, which relies on 75% of my Overcoming Bias posts, so that it would be more than one day's work to summarize all of it. Maybe some other week. There's so many branches I've seen that discussion tree go down.
After all - a mind shouldn't just go around having the same experience over and over and over again. Surely no superintelligence would be so grossly mistaken about the correct action?
Why would any supermind want something so inherently worthless as the feeling of discovery without any real discoveries? Even if that were its utility function, wouldn't it just notice that its utility function was wrong, and rewrite it? It's got free will, right?
Surely, at least boredom has to be a universal value. It evolved in humans because it's valuable, right? So any mind that doesn't share our dislike of repetition, will fail to thrive in the universe and be eliminated...
If you are familiar with the difference between instrumental values and terminal values, and familiar with the stupidity of natural selection, and you understand how this stupidity manifests in the difference between executing adaptations versus maximizing fitness, and you know this turned instrumental subgoals of reproduction into decontextualized unconditional emotions...
...and you're familiar with how the tradeoff between exploration and exploitation works in Artificial Intelligence...
...then you might be able to see that the human form of boredom that demands a steady trickle of novelty for its own sake, isn't a grand universal, but just a particular algorithm that evolution coughed out into us. And you might be able to see how the vast majority of possible expected utility maximizers, would only engage in just so much efficient exploration, and spend most of its time exploiting the best alternative found so far, over and over and over.
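As a toy illustration of that last point (mine, not part of the original argument; the payoffs and sample counts are arbitrary): an agent that treats exploration purely instrumentally samples its options a few times, then settles on the best one found and repeats it indefinitely.

    import random

    random.seed(0)
    true_payoffs = [0.3, 0.7, 0.5]   # hidden from the agent
    EXPLORATION_SAMPLES = 10         # finite, purely instrumental exploration

    # Estimate each option's payoff from a small number of samples.
    estimates = []
    for p in true_payoffs:
        rewards = [1 if random.random() < p else 0
                   for _ in range(EXPLORATION_SAMPLES)]
        estimates.append(sum(rewards) / EXPLORATION_SAMPLES)

    best = max(range(len(estimates)), key=lambda i: estimates[i])
    print("Exploration done; exploiting option", best, "forever after.")
    # ...followed, in principle, by an unbounded loop of choosing `best`.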
That's a lot of background knowledge, though.
And so on and so on and so on through 75% of my posts on Overcoming Bias, and many chains of fallacy and counter-explanation. Some week I may try to write up the whole diagram. But for now I'm going to assume that you've read the arguments, and just deliver the conclusion:
We can't relax our grip on the future - let go of the steering wheel - and still end up with anything of value.
And those who think we can -
- they're trying to be cosmopolitan. I understand that. I read those same science fiction books as a kid: The provincial villains who enslave aliens for the crime of not looking just like humans. The provincial villains who enslave helpless AIs in durance vile on the assumption that silicon can't be sentient. And the cosmopolitan heroes who understand that minds don't have to be just like us to be embraced as valuable -
I read those books. I once believed them. But the beauty that jumps out of one box, is not jumping out of all boxes. (This being the moral of the sequence on Lawful Creativity.) If you leave behind all order, what is left is not the perfect answer, what is left is perfect noise. Sometimes you have to abandon an old design rule to build a better mousetrap, but that's not the same as giving up all design rules and collecting wood shavings into a heap, with every pattern of wood as good as any other. The old rule is always abandoned at the behest of some higher rule, some higher criterion of value that governs.
If you loose the grip of human morals and metamorals - the result is not mysterious and alien and beautiful by the standards of human value. It is moral noise, a universe tiled with paperclips. To change away from human morals in the direction of improvement rather than entropy, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone.
Relax the grip of human value upon the universe, and it will end up seriously valueless. Not, strange and alien and wonderful, shocking and terrifying and beautiful beyond all human imagination. Just, tiled with paperclips.
It's only some humans, you see, who have this idea of embracing manifold varieties of mind - of wanting the Future to be something greater than the past - of being not bound to our past selves - of trying to change and move forward.
A paperclip maximizer just chooses whichever action leads to the greatest number of paperclips.
No free lunch. You want a wonderful and mysterious universe? That's your value. You work to create that value. Let that value exert its force through you who represents it, let it make decisions in you to shape the future. And maybe you shall indeed obtain a wonderful and mysterious universe.
No free lunch. Valuable things appear because a goal system that values them takes action to create them. Paperclips don't materialize from nowhere for a paperclip maximizer. And a wonderfully alien and mysterious Future will not materialize from nowhere for us humans, if our values that prefer it are physically obliterated - or even disturbed in the wrong dimension. Then there is nothing left in the universe that works to make the universe valuable.
You do have values, even when you're trying to be "cosmopolitan", trying to display a properly virtuous appreciation of alien minds. Your values are then faded further into the invisible background - they are less obviously human. Your brain probably won't even generate an alternative so awful that it would wake you up, make you say "No! Something went wrong!" even at your most cosmopolitan. E.g. "a nonsentient optimizer absorbs all matter in its future light cone and tiles the universe with paperclips". You'll just imagine strange alien worlds to appreciate.
Trying to be "cosmopolitan" - to be a citizen of the cosmos - just strips off a surface veneer of goals that seem obviously "human".
But if you wouldn't like the Future tiled over with paperclips, and you would prefer a civilization of...
...sentient beings...
...with enjoyable experiences...
...that aren't the same experience over and over again...
...and are bound to something besides just being a sequence of internal pleasurable feelings...
...learning, discovering, freely choosing...
...well, I've just been through the posts on Fun Theory that went into some of the hidden details on those short English words.
Values that you might praise as cosmopolitan or universal or fundamental or obvious common sense, are represented in your brain just as much as those values that you might dismiss as merely human. Those values come of the long history of humanity, and the morally miraculous stupidity of evolution that created us. (And once I finally came to that realization, I felt less ashamed of values that seemed 'provincial' - but that's another matter.)
These values do not emerge in all possible minds. They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer.
Touch too hard in the wrong dimension, and the physical representation of those values will shatter - and not come back, for there will be nothing left to want to bring it back.
And the referent of those values - a worthwhile universe - would no longer have any physical reason to come into being.
Let go of the steering wheel, and the Future crashes.