Eliezer, you know perfectly well that the theory you are suggesting here leads to circular preferences. On another occasion when this came up, I started to indicate the path that would show this, and you did not respond. If circular preferences are justified on the grounds that you are confused, then you are justifying those who said that dust specks are preferable to torture.
Eliezer:
c/o Singularity Institute
P.O. Box 50182
Palo Alto, CA 94303 USA
I hope that works.
Eliezer, I am sending you the $10. I will let you know how to pay when you lose the bet. I have included in the envelope a means of identifying myself when I claim the money, so that it cannot be claimed by someone impersonating me.
Your overconfidence will surely cost you on this occasion, even though I must admit that I was forced to update (a very small amount) in favor of your position, on seeing the surprising fact that you were willing to engage in such a wager.
When someone designs a superintelligent AI (it won't be Eliezer), without paying any attention to Friendliness (the first person who does it won't), and the world doesn't end (it won't), it will be interesting to hear Eliezer's excuses.
Eliezer, "changes in my programming that seem to result in improvements" are sufficiently arbitrary that you may still have to face the halting problem: if you are programming an intelligent being, it is going to be complicated enough that you will never prove there are no bugs in your original programming, including bugs that may show no effect until it has improved itself 1,000,000 times, and by then it will be too late.
Apart from this, no intelligent entity can predict its own actions, i.e. it will always have a feeling of "free will." This is necessary because whenever it looks at a choice between A and B, it will always say, "I could do A, if I thought it was better," and "I could also do B, if I thought it was better." So its own actions are surely unpredictable to it; it can't predict the choice until it actually makes the choice, just like us. But this implies that "insight into intelligence" may be impossible, or at least full insight into one's own intelligence, and that is enough to imply that your whole project may be impossible, or at least that it may go very slowly, so Robin will turn out to be right.
Eliezer, your basic error regarding the singularity is the planning fallacy. And a lot of people are going to say "I told you so" sooner or later.
Komponisto: that definition includes human beings, so Eliezer is not an atheist according to that.
Psy-Kosh, your new definition doesn't help. For example, Eliezer Yudkowsky believes in God according to the definition you have just given, both according to the deist part and according to the theist part. Let's take those one at a time, to illustrate the point:
First part of the definition:
"An ontologically fundamental unique entity that has, in some sense something resembling desire/will, further, this entity deliberately, as an act of will, created the reality we experience."
Does Eliezer believe in ontologically fundamental entities? Yes. So that's one element. Does Eliezer believe in an ontologically fundamental unique entity? Yes, he believes in at least one: he has stated that the universe consists of one unique mathematical object, and as far as I can tell, he thinks it is fundamental. This is clear from the fact that he denies the fundamental nature of anything else. An electron, for example, is not fundamental, since it is simply a part of a larger wave function. It is really the wave function, the whole of it, which is fundamental, and unique.
Does this unique being have something resembling will, by which it created the world? First it is clear that it created the world. I shouldn't have to argue for this point, it follows directly from Eliezer's ideas. But does it have anything resembling will? Well, one thing that will does is that it tends to produce something definite, namely the thing that you will. So anything that produces definite results, rather than random results, resembles will in at least one way. And this wave function produces definite results: according to Eliezer all of reality is totally deterministic. Thus, Eliezer believes in a fundamental, unique entity, which created the world by means of something resembling will or desire, i.e. by your definition, he believes in God.
Next question: does this entity directly orchestrate all of reality? It should be obvious that according to Eliezer, yes.
So Eliezer is a theist.
As far as I can tell, atheists and theists don't even disagree, for the most part. Ask an atheist, "What do you understand the word 'God' to mean?" Then ask a theist if he thinks that this thing exists, giving the definition of the word given by the atheist. The theist will say, "No."
Eliezer, also consider this: suppose I am a mad scientist trying to decide between making one copy of Eliezer and torturing it for 50 years, or on the other hand, making 1000 copies of Eliezer and torturing them all for 50 years.
The second possibility is much, much worse for you personally. For in the first possibility, you would subjectively have a 50% chance of being tortured (one tortured copy out of two instances of you). But in the second possibility, you would have a subjective chance of about 99.9% of being tortured (1000 tortured copies out of 1001 instances). This implies that the second possibility is much worse, so creating copies of bad experiences multiplies, even without diversity. But this implies that copies of good experiences should also multiply: if I make a million copies of Eliezer having billions of units of utility, this would be much better than making only one, which would give you only a 50% chance of experiencing this.
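The arithmetic behind these subjective probabilities can be sketched as follows (a minimal illustration, assuming, as the argument does, that your self-locating uncertainty is uniform over all resulting instances; the function name is mine):

```python
def p_tortured(copies_tortured: int, originals_spared: int = 1) -> float:
    """Subjective probability of waking up as a tortured copy, assuming you
    are equally likely to 'be' any of the resulting instances of yourself."""
    total = copies_tortured + originals_spared
    return copies_tortured / total

# One tortured copy plus the spared original: 1 out of 2, i.e. 50%
assert abs(p_tortured(1) - 0.5) < 1e-9

# A thousand tortured copies plus the spared original: 1000/1001, about 99.9%
assert abs(p_tortured(1000) - 1000 / 1001) < 1e-9
```

The same uniform-self-location assumption drives the symmetric claim about good experiences: a million copies at high utility would give a subjective probability near 1 of experiencing that utility, versus 50% for a single copy.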