Anna, it takes very little effort to rattle off a numerical probability -- and then most readers come away with a (usually false) impression of precision of thought.
At the start of Causality, Judea Pearl explains why humans (should and usually do) use "causal" concepts rather than "statistical" ones. Although I do not recall whether he comes right out and says it, I definitely took away from Pearl the heuristic that stating your probability about some question is basically useless unless you also state the calculation that led to the number...
Instead of describing my normative reasoning as guided by the criterion of non-arbitrariness, I prefer to describe it as guided by the criterion of minimizing or pessimizing algorithmic complexity. And that is a reply to steven's question right above: there is nothing unstable or logically inconsistent about my criterion for the same reason that there is nothing unstable about Occam's Razor.
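To make "minimizing algorithmic complexity" concrete, here is a toy sketch of my own (not anything formal): true Kolmogorov complexity is uncomputable, but compressed length is a crude, computable stand-in, and it at least shows what it means for one option to be less arbitrary than another.

```python
# Crude sketch of my own: compressed length as a computable stand-in for
# algorithmic (Kolmogorov) complexity, which is itself uncomputable.
import random
import string
import zlib

def complexity_proxy(s: str) -> int:
    """Bytes needed to state s after zlib compression."""
    return len(zlib.compress(s.encode("utf-8")))

patterned = "ab" * 50                                               # regular, non-arbitrary
arbitrary = "".join(random.choices(string.ascii_lowercase, k=100))  # patternless

print(complexity_proxy(patterned))   # small: the repetition compresses away
print(complexity_proxy(arbitrary))   # large: nothing to compress
```

The patterned string and the arbitrary string are the same length, but the patterned one takes far fewer bytes to specify, which is the sense in which a complexity criterion is no more unstable than Occam's Razor.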
Roko BTW had a conversion experience and now praises CEV and the Fun Theory sequence.
Let me clarify that what horrifies me is the loss of potential. Once our space-time continuum becomes a bunch of supermassive black holes, it remains that way till the end of time. It is the condition of maximum physical entropy (according to Penrose). Suffering on the other hand is impermanent. Ever had a really bad cold or flu? One day you wake up and it is gone and the future is just as bright as it would have been if the cold had never been.
And pulling numbers (80%, 95%) out of the air on this question is absurd.
Richard, I'd take the black holes of course.
As I expected. Much of what you (Eliezer) have written entails it, but it still gives me a shock because piling as much ordinary matter as possible into supermassive black holes is the most evil end I have been able to imagine. In contrast, suffering is merely subjective experience and consequently, according to my way of assigning value, unimportant.
Transforming ordinary matter into mass inside a black hole is a very potent means to create free energy, and I can imagine applying that free energy to ends that justify...
Question for Eliezer. If the human race goes extinct without leaving any legacy, then according to you, any nonhuman intelligent agent that might come into existence will be unable to learn about morality?
If your answer is that the nonhuman agent might be able to learn about morality if it is sentient, then please define "sentient". What is it about a paperclip maximizer that makes it nonsentient? What is it about a human that makes it sentient?
Speaking of compressing down nicely, that is a nice and compressed description of humanism. Singularitarians, question humanism.
trying to distance ourselves from, control, or delete too much of ourselves - then having to undo it.
I cannot recall ever trying to delete or even control a large part of myself, so no opinion there, but "distancing ourselves from ourselves" sounds a lot like developing what some have called an observing self, which is probably a very valuable thing for a person wishing to make a large contribution to the world IMHO.
A person worried about not feeling alive enough would probably get more bang for his buck by avoiding exposure to mercury, which binds permanently to serotonin receptors, causing a kind of deadening.
Did that make sense?
Yes, and I can see why you would rather say it that way.
My theory is that most of those who believe quantum suicide is effective assign negative utility to suffering and also assign a negative utility to death, but knowing that they will continue to live in one Everett branch removes the sting of knowing (and consequently the negative utility of the fact) that they will die in a different Everett branch. I am hoping Cameron Taylor or another commentator who thinks quantum suicide might be effective will let me know whether I have described his utility function.
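To spell out the arithmetic of the utility function I just hypothesized, here is a toy sketch. All the numbers are made up, and the structure is my guess, not anyone's reported values:

```python
# Toy sketch of the utility function hypothesized above. All numbers are
# made up; this is a guess at the structure, not anyone's actual values.

U_SUFFERING = -10.0   # hypothetical disutility of ongoing suffering
U_DEATH = -100.0      # hypothetical disutility of death *with* its sting

def branch_utility(dies, suffers, survives_somewhere):
    """Utility of one Everett branch under the hypothesized assignment."""
    u = U_SUFFERING if suffers else 0.0
    if dies:
        # The key move: knowing you live on in some branch removes the
        # sting of death, so the death term is zeroed out.
        u += 0.0 if survives_somewhere else U_DEATH
    return u

def expected_utility(branches):
    """branches: list of (probability, dies, suffers) triples."""
    survives_somewhere = any(not dies for _, dies, _ in branches)
    return sum(p * branch_utility(dies, suffers, survives_somewhere)
               for p, dies, suffers in branches)

# Quantum suicide: half the measure dies painlessly, half lives suffering-free.
quantum_suicide = [(0.5, True, False), (0.5, False, False)]
# Status quo: certain survival, but the suffering continues.
status_quo = [(1.0, False, True)]

print(expected_utility(quantum_suicide))  # 0.0   (death's sting removed)
print(expected_utility(status_quo))       # -10.0 (ongoing suffering)
# Without the sting-removal clause, quantum suicide would score
# 0.5 * -100 = -50.0 and would no longer look attractive.
```

If I have the structure right, the whole case for quantum suicide rests on that sting-removal clause; delete it and the expected utilities reverse.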
OK, my previous comment was too rude. I won't do it again, OK?
Rather than answer your question about fitness, let me take back what I said and start over. I think you and I have different terminal values.
I am going to assume -- and please correct me if I am wrong -- that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable) and that consequently, under certain circumstances (e.g., at least one alternative Everett branch remains in which you survive) you would prefer painlessly winking...
At some point the most profitable avenue of research in the pursuit of friendly AI would become the logistics of combining a mechanism for quantum suicide with a random number generator.
Usually learning new true information increases a person's fitness, but learning about the many-worlds interpretation seems to decrease the fitness of many who learn it.
Whoever (E or Friedman) chose the title, "Prediction vs. Explanation", was probably thinking along the same lines.
The way science is currently done, experimental data that the formulator of the hypothesis did not know about is much stronger evidence for a hypothesis than experimental data he did know about.
A hypothesis formulated by a perfect Bayesian reasoner would not have that property, but hypotheses from human scientists do, and I know of no cost-effective way to stop human scientists from generating the effect. Part of the reason human scientists do it is that the originator of a hypothesis is too optimistic about the hypothesis (and this optimism stems in ...
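To make the point concrete, here is a toy numerical sketch of my own construction: a hypothesis tuned to data already in hand can fit that data perfectly and still fail badly on data the hypothesizer never saw, which is exactly why novel data carries more evidential weight when the hypothesis comes from a fallible human.

```python
# Toy sketch (my own construction): a flexible hypothesis tuned to known
# data "explains" it perfectly but predicts new data badly, while a simple
# hypothesis stated in advance keeps predicting well.
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: 2.0 * x + 1.0          # the actual regularity
x_known = np.linspace(0, 1, 8)
y_known = true_f(x_known) + rng.normal(0, 0.1, x_known.size)

# "Explanation": a degree-7 polynomial tuned to the data already in hand.
overfit = np.polyfit(x_known, y_known, deg=7)
# "Prediction": a simple line, the kind of hypothesis stated in advance.
line = np.polyfit(x_known, y_known, deg=1)

x_new = np.linspace(1.1, 2.0, 8)          # data the hypothesizer never saw
y_new = true_f(x_new)

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print(mse(overfit, x_known, y_known))  # ~0: fits the known data exactly
print(mse(overfit, x_new, y_new))      # huge: the tuning bought nothing real
print(mse(line, x_new, y_new))         # small: the simple hypothesis holds up
```

Fitting the known data is something almost any sufficiently flexible hypothesis can do; fitting data the formulator never saw is something only a hypothesis that tracks the real regularity can do.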
In a previous [comment] in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.
Simon, I think that the previous comment you refer to was the smartest thing anyone has said in this comment section. Instead of continuing to point out the things you got right, I hope you do not mind if I point out something you got wrong, namely,
Richard: your first criticism has too...
An unusually moderate and temperate exchange.
I disagree with the last 2 comments.
Eliezer's priority has gradually shifted over the last 5 years or so from increasing his own knowledge to transmitting what he knows to others, which is exactly the behavior I would expect from someone with his stated goals who knows what he is doing.
Yes, he has suggested or implied many times that he expects to implement the intelligence explosion more or less by himself (and I do not like that) but ever since the Summer of AI his actions (particularly all the effort he has put into blogging and his references to 15-to-...
I will probably have to stop reading this blog for a while because my life has gotten very tricky and precarious. I am still available for more personal communication with rationalists and scientific generalists especially those living in the Bay Area.
There have been 3 comments on this blog by men to the effect that sex is not that important or that the writer has given up on sex. Those comments suggest what I would consider a lack of sufficient respect for the importance of sex. I tend to believe that for a young man to learn how to have a satisfying a...
It seems the ultimate confusion here is that we are talking about instrumental values . . . before agreeing on terminal values . . . If we could agree on some well-defined goal, e.g. maximization of human happiness, we could much more easily theorize on whether a particular case of murder would benefit or harm that goal.
denis bider, under the CEV plan for singularity, no human has to give an unambiguous definition or enumeration of his or her terminal values before the launch of the seed of the superintelligence. Consequently, those who lean toward th...
My comment is not charitable enough towards the CEVists. I ask the moderator to delete it; I will now submit a replacement.
Eliezer's novella provides a vivid illustration of the danger of promoting what should have stayed an instrumental value to the status of a terminal value. Eliezer likes to refer to this all-too-common mistake as losing purpose. I like to refer to it as adding a false terminal value.
For example, eating babies was a valid instrumental goal when the Babyeaters were at an early stage of technological development. It is not IMHO evil to eat babies when the only alternative is chronic severe population pressure which will eventually either lead to your e...