Eliezer's novella provides a vivid illustration of the danger of promoting what should have stayed an instrumental value to the status of a terminal value. Eliezer likes to refer to this all-too-common mistake as losing purpose. I like to refer to it as adding a false terminal value.

For example, eating babies was a valid instrumental goal when the Babyeaters were at an early stage of technological development. It is not IMHO evil to eat babies when the only alternative is chronic severe population pressure, which will eventually lead either to your extinction or to the disintegration of your agricultural civilization and a reversion to a more primitive existence in which technological advancement is slow, uncertain, and easily reversed by things like natural disasters.

But then babyeating became an end in itself.

By clinging to the false terminal value of babyeating, the Babyeaters caused their own extinction even though at the time of their extinction they had an alternative means of preventing an explosion of their population (namely, editing their own genome so that fewer babies are born; if they did not have the tech to do that, they could have asked the humans or the Superhappies for it).

In the same way, the humans in the novella and the Superhappies are the victims of a false terminal value, which we might call "hedonic altruism": the goal of extinguishing suffering wherever it exists in the universe. Eliezer explains some of the reasons for the great instrumental value of becoming motivated by the suffering of others in Sympathetic Minds in the passage that starts with "Who is the most formidable, among the human kind?" Again, just because something has great instrumental value is no reason to promote it to a terminal value; when circumstances change, it may lose its instrumental value; and a terminal value once created tends to persist indefinitely because by definition there is no criterion by which to judge a system of terminal values.

I hope that human civilization will abandon the false terminal value of hedonic altruism before it spreads to the stars. I.e., I hope that the human dystopian future portrayed in the novella can be averted.

Anna, it takes very little effort to rattle off a numerical probability -- and then most readers take away an impression (usually false) of precision of thought.

At the start of Causality, Judea Pearl explains why humans should (and usually do) use "causal" concepts rather than "statistical" ones. Although I do not recall whether he comes right out and says it, I definitely took away from Pearl the heuristic that stating your probability about some question is basically useless unless you also state the calculation that led to the number. I do recall that stating a bare number is what Pearl defines as a statistical statement rather than a causal statement. What you should usually do instead of stating a probability estimate is to share with your readers the parts of your causal graph that most directly impinge on the question under discussion.
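To make the contrast concrete, here is a minimal sketch in Python; the node names, the graph structure, and the number are invented purely for illustration and are not drawn from Pearl or from this discussion. The point is only that a bare probability is a single statistical statement, whereas sharing the direct causes of the question at least exposes the structure behind the number.

```python
# A hypothetical, minimal illustration (names and numbers invented) of the
# difference between a bare statistical statement and sharing the relevant
# part of a causal graph.

# Statistical statement: a number with no visible reasoning behind it.
bare_estimate = 0.05  # "probability that X converts to GSZ"

# Causal statement: the factors believed to bear on the question, expressed
# as a directed graph mapping each node to its direct causes.
causal_graph = {
    "has reflected on goal system zero": [],
    "values subjective experience terminally": [],
    "converts to GSZ": [
        "has reflected on goal system zero",
        "values subjective experience terminally",
    ],
}


def direct_causes(graph, question):
    """Return the part of the graph that most directly impinges on
    `question`: its direct causes (parents)."""
    return graph.get(question, [])


if __name__ == "__main__":
    print(direct_causes(causal_graph, "converts to GSZ"))
    # ['has reflected on goal system zero',
    #  'values subjective experience terminally']
```

Even a toy graph like this tells the reader which factors would move the number, which a bare 0.05 does not.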

So, unless Eliezer goes on to list one or more factors that he believes would cause a human to convert to or away from my system of valuing things (namely, goal system zero or GSZ), or one or more factors that he believes would tend to prevent other factors from causing a conversion to or away from GSZ, I am going to go on believing that Eliezer has probably not reflected enough on the question for his numbers to be worth anything and that he is just blowing me off.

In summary, I tend to think that most uses of numerical probabilities on these pages have been useless. On this particular question I am particularly sceptical because Eliezer has exhibited signs (which I am prepared to describe if asked) that he has not reflected enough on goal system zero to understand it well enough to make any numerical probability estimate about it.

I am busy with something urgent today, so I might take 24 hours to reply to replies to this.

Instead of describing my normative reasoning as guided by the criterion of non-arbitrariness, I prefer to describe it as guided by the criterion of minimizing or pessimizing algorithmic complexity. And that is a reply to steven's question right above: there is nothing unstable or logically inconsistent about my criterion for the same reason that there is nothing unstable about Occam's Razor.

Roko BTW had a conversion experience and now praises CEV and the Fun Theory sequence.

Let me clarify that what horrifies me is the loss of potential. Once our space-time continuum becomes a bunch of supermassive black holes, it remains that way till the end of time. It is the condition of maximum physical entropy (according to Penrose). Suffering, on the other hand, is impermanent. Ever had a really bad cold or flu? One day you wake up and it is gone, and the future is just as bright as it would have been if the cold had never happened.

And pulling numbers (80%, 95%) out of the air on this question is absurd.

Richard, I'd take the black holes of course.

As I expected. Much of what you (Eliezer) have written entails it, but it still gives me a shock, because piling as much ordinary matter as possible into supermassive black holes is the most evil end I have been able to imagine. In contrast, suffering is merely subjective experience and consequently, according to my way of assigning value, unimportant.

Transforming ordinary matter into mass inside a black hole is a very potent means of creating free energy, and I can imagine applying that free energy to ends that justify the means. But putting ordinary matter and radiation, as an end in itself, into black holes massive enough that the mass will never come back out as Hawking radiation -- horror!

Question for Eliezer. If the human race goes extinct without leaving any legacy, then according to you, any nonhuman intelligent agent that might come into existence will be unable to learn about morality?

If your answer is that the nonhuman agent might be able to learn about morality if it is sentient, then please define "sentient". What is it about a paperclip maximizer that makes it nonsentient? What is it about a human that makes it sentient?

Speaking of compressing down nicely, that is a nice and compressed description of humanism. Singularitarians, question humanism.

"trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it."

I cannot recall ever trying to delete or even control a large part of myself, so no opinion there, but "distancing ourselves from ourselves" sounds a lot like developing what some have called an observing self, which is probably a very valuable thing for a person wishing to make a large contribution to the world IMHO.

A person worried about not feeling alive enough would probably get more bang for his buck by avoiding exposure to mercury, which binds permanently to serotonin receptors, causing a kind of deadening.

s/werewolf/Easter bunny/ IMHO.

"Did that make sense?"

Yes, and I can see why you would rather say it that way.

My theory is that most of those who believe quantum suicide is effective assign negative utility to suffering and also assign negative utility to death, but that knowing they will continue to live in one Everett branch removes the sting (and consequently the negative utility) of knowing that they will die in a different Everett branch. I am hoping Cameron Taylor or another commenter who thinks quantum suicide might be effective will let me know whether I have described his utility function.
