Anna, it takes very little effort to rattle off a numerical probability -- and then most readers take away an impression (usually false) of precision of thought.
At the start of Causality, Judea Pearl explains why humans (should and usually do) use "causal" concepts rather than "statistical" ones. Although I do not recall whether he comes right out and says it, I definitely took away from Pearl the heuristic that stating your probability about some question is basically useless unless you also state the calculation that led to the number...
Instead of describing my normative reasoning as guided by the criterion of non-arbitrariness, I prefer to describe it as guided by the criterion of minimizing or pessimizing algorithmic complexity. And that is a reply to steven's question right above: there is nothing unstable or logically inconsistent about my criterion for the same reason that there is nothing unstable about Occam's Razor.
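To put the criterion in standard notation (a minimal formal gloss of my own, assuming the usual Kolmogorov-complexity machinery; nothing below was spelled out in the thread):

```latex
% Weight each candidate hypothesis or criterion h by the length K(h) of
% the shortest program that outputs it, so the least arbitrary h gets
% the most weight (this is the usual Occam/Solomonoff prior):
P(h) \propto 2^{-K(h)}
% Minimizing algorithmic complexity then just means choosing
% \arg\min_h K(h), which is as stable as Occam's Razor itself.
```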
Roko BTW had a conversion experience and now praises CEV and the Fun Theory sequence.
Let me clarify that what horrifies me is the loss of potential. Once our space-time continuum becomes a bunch of supermassive black holes, it remains that way till the end of time. It is the condition of maximum physical entropy (according to Penrose). Suffering on the other hand is impermanent. Ever had a really bad cold or flu? One day you wake up and it is gone and the future is just as bright as it would have been if the cold had never been.
And pulling numbers (80%, 95%) out of the air on this question is absurd.
Richard, I'd take the black holes of course.
As I expected. Much of what you (Eliezer) have written entails it, but it still gives me a shock because piling as much ordinary matter as possible into supermassive black holes is the most evil end I have been able to imagine. In contrast, suffering is merely subjective experience and consequently, according to my way of assigning value, unimportant.
Transforming ordinary matter into mass inside a black hole is a very potent means to create free energy, and I can imagine applying that free energy to ends that justify...
Question for Eliezer. If the human race goes extinct without leaving any legacy, then according to you, any nonhuman intelligent agent that might come into existence will be unable to learn about morality?
If your answer is that the nonhuman agent might be able to learn about morality if it is sentient then please define "sentient". What is it about a paperclip maximizer that makes it nonsentient? What is it about a human that makes it sentient?
trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it.
I cannot recall ever trying to delete or even control a large part of myself, so no opinion there, but "distancing ourselves from ourselves" sounds a lot like developing what some have called an observing self, which is probably a very valuable thing for a person wishing to make a large contribution to the world IMHO.
A person worried about not feeling alive enough would probably get more bang for his buck by avoiding exposure to mercury, which binds permanently to serotonin receptors, causing a kind of deadening.
Did that make sense?
Yes, and I can see why you would rather say it that way.
My theory is that most of those who believe quantum suicide is effective assign negative utility to suffering and also assign a negative utility to death, but knowing that they will continue to live in one Everett branch removes the sting of knowing (and consequently the negative utility of the fact) that they will die in a different Everett branch. I am hoping Cameron Taylor or another commentator who thinks quantum suicide might be effective will let me know whether I have described his utility function.
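To make the utility function I am attributing to them concrete, here is a minimal sketch; the branch probabilities and utility numbers are my illustrative assumptions, not anything Cameron Taylor has stated:

```python
# Minimal sketch of the utility function described above. All numbers
# are illustrative assumptions, not anyone's stated values.
# A quantum-suicide setup: in one Everett branch the agent dies
# instantly and painlessly; in the other the agent wins some prize.

branches = [(0.5, "instant_death"), (0.5, "win")]
utility = {"instant_death": -100.0, "win": 50.0}

# Orthodox expected utility sums over all branches:
eu_orthodox = sum(p * utility[o] for p, o in branches)

# The utility function I am attributing to quantum-suicide advocates:
# knowing you survive in *some* branch removes the sting of death, so
# the death branches are dropped and the rest renormalized.
survivors = [(p, o) for p, o in branches if o != "instant_death"]
total = sum(p for p, _ in survivors)
eu_advocate = sum((p / total) * utility[o] for p, o in survivors)

print(eu_orthodox)  # -25.0 -> quantum suicide looks like a bad idea
print(eu_advocate)  # 50.0  -> quantum suicide looks effective
```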
OK, my previous comment was too rude. I won't do it again, OK?
Rather than answer your question about fitness, let me take back what I said and start over. I think you and I have different terminal values.
I am going to assume -- and please correct me if I am wrong -- that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable) and that consequently, under certain circumstances (e.g., at least one alternative Everett branch remains in which you survive) you would prefer painlessly winking...
At some point the most profitable avenue of research in the pursuit of friendly AI would become the logistics of combining a mechanism for quantum suicide with a random number generator.
Usually learning new true information increases a person's fitness, but learning about the many-worlds interpretation seems to decrease the fitness of many who learn it.
The way science is currently done, experimental data that the formulator of the hypothesis did not know about is much stronger evidence for a hypothesis than experimental data he did know about.
A hypothesis formulated by a perfect Bayesian reasoner would not have that property, but hypotheses from human scientists do, and I know of no cost-effective way to stop human scientists from generating the effect. Part of the reason human scientists do it is that the originator of a hypothesis is too optimistic about the hypothesis (and this optimism stems in ...
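For contrast, here is the ideal-Bayesian bookkeeping (my notation); the point is that the likelihood ratio does not depend on when the data became known:

```latex
% For a perfect Bayesian, the support data D lends hypothesis H is
% fixed by the likelihood ratio, whether or not D was known when H
% was formulated:
\frac{P(H \mid D)}{P(\lnot H \mid D)}
  = \frac{P(D \mid H)}{P(D \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)}
% A human scientist who tunes H to already-known data inflates the
% effective P(D | H), which is why data the formulator did not know
% about is rightly treated as stronger evidence in practice.
```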
in a previous [comment] in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.
Simon, I think that the previous comment you refer to was the smartest thing anyone has said in this comment section. Instead of continuing to point out the things you got right, I hope you do not mind if I point out something you got wrong, namely,
Richard: your first criticism has too...
I disagree with the last 2 comments.
Eliezer's priority has gradually shifted over the last 5 years or so from increasing his own knowledge to transmitting what he knows to others, which is exactly the behavior I would expect from someone with his stated goals who knows what he is doing.
Yes, he has suggested or implied many times that he expects to implement the intelligence explosion more or less by himself (and I do not like that) but ever since the Summer of AI his actions (particularly all the effort he has put into blogging and his references to 15-to-...
I will probably have to stop reading this blog for a while because my life has gotten very tricky and precarious. I am still available for more personal communication with rationalists and scientific generalists especially those living in the Bay Area.
There have been 3 comments on this blog by men to the effect that sex is not that important or that the writer has given up on sex. Those comments suggest what I would consider a lack of sufficient respect for the importance of sex. I tend to believe that for a young man to learn how to have a satisfying a...
It seems the ultimate confusion here is that we are talking about instrumental values . . . before agreeing on terminal values . . . If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.
denis bider, under the CEV plan for singularity, no human has to give an unambiguous definition or enumeration of his or her terminal values before the launch of the seed of the superintelligence. Consequently, those who lean toward th...
It seems the ultimate confusion here is that we are talking about instrumental values . . . before agreeing on terminal values . . . If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.
denis bider, I would not be surprised to learn that refraining from murder is a terminal value for Eliezer. Eliezer's writings imply that he has hundreds of terminal values: he cannot even enumerate them all.
Defn. "Murder" is killing under particular circumstances, e.g., not by uniformed soldiers during a war, not in self-defense, not by accident.
Thesis: regarding some phenomenon as possible is nothing other than . . .
I consider that an accurate summary of Eliezer's original post (OP) to which these are comments.
Will you please navigate to this page and start reading where it says,
Imagine that in an era before recorded history or formal mathematics, I am a shepherd and I have trouble tracking my sheep.
You need only read to where it says, "Markos Sophisticus Maximus".
Those six paragraphs attempt to be a reductive exposition of the concept of whole number, a.k.a., non-negative integer...
Joseph Knecht says to Eliezer,
you dedicated an entire follow-up post to chiding Brandon, in part for using realizable in his explanation . . . [and] you committed the same mistake in using reachable.
Congratulations to Joseph Knecht for finding a flaw in Eliezer's exposition!
I would like his opinion about Eliezer's explanation of how to fix the exposition. I do not see a flaw in the exposition if it is fixed as Eliezer explains. Does he?
Richard, if you're seriously proposing that consciousness is a mistaken idea, but morality isn't, I can only say that that has got to be one unique theory of morality.
Yes, Z.M.D., I am seriously proposing. And I know my theory of morality is not unique to me because a man caused thousands of people to declare for a theory of morality that makes no reference to consciousness (or subjective experience for that matter) and although most of those thousands might have switched by now to some other moral theory and although most of the declarations might have bee...
RI asks,
how moral or otherwise desirable would the story have been if half a billion years of sentient minds had been made to think, act and otherwise be in perfect accordance to what three days of awkward-tentacled, primitive rock fans would wish if they knew more, thought faster, were more the people they wished they were...
Eliezer answers,
A Friendly AI should not be a person. I would like to know at least enough about this "consciousness" business to ensure a Friendly AI doesn't have (think it has) it. An even worse critical failure is if the...
I also see no explanation as to why knowledge of objective reality is of any value, even derivative; objective reality is there, and is what it is, regardless of whether it's known or not.
You and I can influence the future course of objective reality, or at least that is what I want you to assume. Why should you assume it, you ask? For the same reason you should assume that reality has a compact algorithmic description (an assumption we might call Occam's Razor): no one knows how to be rational without assuming it; in other words, it is an inductive bias...
Yes, mitchell porter, of course there is no method (so far) (that we know of) for moral perception or moral action that does not rely on the human mind. But that does not refute my point, which again is as follows: most of the readers of these words seem to believe that the maximization of happiness or pleasure and the minimization of pain is the ultimate good. Now when you combine that belief with egalitarianism, which can be described as the belief that you yourself have no special moral value relative to any other human, and neither do kings or movie ...
Doug, I do not agree because my utility function depends on the identity of the people involved, not simply on N. Specifically, it might be possible for an agent to become confident that Bob is much more useful to whatever is the real meaning of life than Charlie is, in which case a harm to Bob has greater disutility in my system than a harm to Charlie. In other words, I do not consider egalitarianism to be a moral principle that applies to every situation without exception. So, for me, U is not a function of (N,I,T).
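A minimal sketch of what I mean (Bob, Charlie, and the weights are hypothetical):

```python
# Sketch of a utility function that depends on the identity of the
# people harmed, not merely the count N. Names and weights are
# hypothetical illustrations.

usefulness = {"Bob": 10.0, "Charlie": 1.0}  # assumed instrumental value

def disutility_of_harm(victims):
    # Harming Bob costs more than harming Charlie, so this U cannot
    # be rewritten as a function of N = len(victims) alone.
    return -sum(usefulness[v] for v in victims)

print(disutility_of_harm(["Bob"]))      # -10.0
print(disutility_of_harm(["Charlie"]))  # -1.0  (same N, different U)
```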
Please let me interrupt this discussion on utilitarianism/humanism with an alternative perspective.
I do not claim to know what the meaning of life is, but I can rule certain answers out. For example, I am highly certain that it is not to maximize the number of paperclips in my vicinity.
I also believe it has nothing to do with how much pain or pleasure the humans experience -- or in fact anything to do with the humans.
More broadly, I believe that although perhaps intelligent or ethical agents are somehow integral to the meaning of life, they are integral f...
Do you consider the following a fair rephrasing of your last comment? A quantum measurement has probability p of going one way and 1 - p of going the other way, where p depends on a choice made by the measurer. That is an odd property for the next bit in a message to have, and it makes me suspicious of the whole idea.
If so, I agree. Another difficulty that must be overcome is, assuming one has obtained the first n bits of the message, to explain how one obtains the next bit.
Nevertheless, I believe my primary point remains: since our model of physics does no...
In cryptography, you try to hide the message from listeners (except your friends). In anticryptography, you try to write a message that a diligent and motivated listener can decode despite his having none of your biological, psychological and social reference points.
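For instance (my illustration, not part of the original definition), the classic anticryptographic opening is a pattern no natural process is likely to produce, such as the primes:

```python
# Illustrative sketch: an "anticryptographic" preamble built from prime
# numbers -- a pattern chosen because a diligent decoder needs no shared
# biological or cultural reference points to recognize it.

def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# Encode each prime as that many pulses, separated by silence.
message = " ".join("*" * p for p in primes(6))
print(message)  # pulse groups of sizes 2, 3, 5, 7, 11, 13
```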
I certainly don't know how you are going to do it at the blackboard. Anything you write on the blackboard comes from you, not something outside space-time.
I meant that most of the difficulty of the project is in understanding our laws of physics well enough to invent a possible novel method f...
Physicists have been proceeding like physicists for some time now and none of them has done anything like receiving the Old Testament from outside of our space-time.
As far as I know, none of them are looking for a message from beyond the space-time continuum. Maybe I will try to interest them in making the effort. My main interest however is a moral system that does not break down when thinking about seed AI and the singularity. Note that the search for a message from outside space-time takes place mainly at the blackboard and only at the very end mov...
This was Eliezer's point: how could you ever recognize which ones are good and which ones are evil? How could you even recognize a process for recognizing objective good and evil?
I have only one suggestion so far, which is that if you find yourself in a situation which satisfies all five of the conditions I just listed, obeying the Mugger initiates an indefinitely-long causal chain that is good rather than evil. I consider, "You might as well assume it is good," to be equivalent to, "It is good." Now that I have an example I can try t...
TGGP pointed out a mistake, which I acknowledged and tried to recover from by saying that what you learn about reality can create a behavioral obligation. g pointed out that you don't need to consider exotic things like godlike beings to discover that. If you're driving along a road, then whether you have an obligation to brake sharply depends on physical facts such as whether there's a person trying to cross the road immediately in front of you. So now I have to retreat again.
There are unstated premises that go into the braking-sharply conclusion. Wha...
The blog "item" to which this is a comment started 5 days ago. I am curious whether any besides TGGP and I are still reading. One thing newsgroups and mailing lists do better than blogs is to enable conversational threads to persist for more than a few days. Dear reader, just this once, as a favor to me, please comment here (if only with a blank comment) to signal your presence. If no one signals, I'm not continuing.
Why is a "civilization" the unit of analysis rather than a single agent?
Since you put the word in quotes, I take it ...
When I write for a very bright "puzzle-solving-type" audience, I do the mental equivalent of deleting every fourth sentence or at least the tail part of every fourth sentence to prevent the reader from getting bored. I believe that practice helps my writings compete with the writings around them for the critical resource of attention. There are of course many ways of competing for attention, and this is one of the least prejudicial to rational thought. I recommend this practice only in forums in which the reader can easily ask followup questi...
I suppose to a Pete Singer utilitarian it might be correct that we assign equal weight of importance to everyone in and beyond our [spacetime].
In the scenario with all the properties I list above, I assign most of the intrinsic good to obeying the Mugger. Some intrinsic good is assigned to continuing to refine our civilization's model of reality, but the more investment in that project fails to yield the ability to cause effects that persist indefinitely without the Mugger's help, the more intrinsic good gets heaped on obeying the Mugger. Nothing else ge...
The ought is: you ought to do whatever the very credible Mugger tells you to do if you find yourself in a situation with all the properties I list above. Blind obedience does not have a very good reputation; please remember, reader, that the fact that the Nazis enthusiastically advocated and built an interstate highway system does not mean that an interstate highway system is always a bad idea. Every ethical intelligent agent should do his best to increase his intelligence and his knowledge of reality, and to help other ethical intelligent agents do the same...
For the sake of brevity, I borrow from Pascal's Mugger.
If a Mugger appears in every respect to be an ordinary human, let us call him a "very unconvincing Mugger". In contrast, an example of a very convincing Pascal's Mugger is one who demonstrates an ability to modify fundamental reality: he can violate physical laws that have always been (up to now) stable, global, and exception-free. And he can do so in exactly the way you specify.
For example, you say, "Please Mr Mugger follow me into my physics laboratory." There you repeat the Mi...
TGGP, I maintain that the goals that people now advocate as the goal that trumps all other goals are not deserving of our loyalty and that a search must be conducted for a goal that is so deserving. (The search should use essentially the same intellectual skills physicists use.) The identification of that goal can have a very drastic effect on the universe, e.g. by inspiring a group of bright 20-year-olds to implement a seed AI with that goal as its utility function. But that does not answer your question, does it?
Eliezer clarified earlier that this blog entry is about personal utility rather than global utility. That presents me with another opportunity to represent a distinctly minority (many would say extreme) point of view, namely, that personal utility (mine or anyone else's) is completely trumped by global utility. This admittedly extreme view is what I have sincerely believed for about 15 years, and I know someone who held it for 30 years without his becoming an axe murderer or anything horrid like that. To say it in other words, I regard humans as means t...
Eliezer's novella provides a vivid illustration of the danger of promoting what should have stayed an instrumental value to the status of a terminal value. Eliezer likes to refer to this all-too-common mistake as losing purpose. I like to refer to it as adding a false terminal value.
For example, eating babies was a valid instrumental goal when the Babyeaters were at an early stage of technological development. It is not IMHO evil to eat babies when the only alternative is chronic severe population pressure which will eventually either lead to your e...