I just had another idea: maybe I would begin to design an Unfriendly AI. After all, being an evil genius would at least be fun, and besides, it would be a way to get revenge on Eliezer for proving that morality doesn't exist.
It seems people are interpreting the question in two different ways: one, that we no longer have any desires and therefore take no actions; the other, more natural, that "moral philosophy" and "moral claims" have no meaning or are all false. The first interpretation is useless, and I assume Eliezer intended the second.
Most commenters are saying that it would make no difference to them. My suspicion is that this is true, but mainly because they already believe that moral claims are meaningless or false.
Possibly (I am not sure of this) Eliezer hopes that everyone will answer in this way, so that he can say that morality is unnecessary.
Personally, I agree with Dynamically Linked. I would start out by stealing wallets and purses, and it would just go downhill from there. In other words, if I didn't believe that such things were wrong, the bad feeling that results from doing them, and the thought that they hurt people, wouldn't be strong enough to stop me; and once I got started, the feeling would fade as well (this much I know from the experience of doing wrong). And once I had changed how I feel about these things, how I feel about other things (too horrible to mention at the moment) would begin to change too. So I can't really tell where it would end, but it would be bad, by my present judgment.
There are others who would follow or have followed the same course. TGGP says that over time his life did change after he ceased to believe in morality, and at one point he said that he would torture a stranger to avoid stubbing his toe, which presumably he would not have done when he believed in morality.
So if it is the case that Eliezer hoped that morality is unnecessary to prevent such things, his hope is in vain.
TGGP, the evidence is that Eliezer suggested that the reason to avoid this error is to avoid converting to Christianity. Presumably the real reason to avoid the error (if it is one, which he has not yet shown convincingly) is to avoid turning the universe into paperclips.
In regard to AIXI: One should consider more carefully the fact that any self-modifying AI can be exactly modeled by a non-self modifying AI.
One should also consider the fact that no intelligent being can predict its own actions; this is one of those extremely rare universals. But it doesn't follow that such a being can't recognize itself in a mirror, despite being unable to predict what it will do.
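The first of these points can be illustrated with a toy sketch (all names hypothetical, not any actual AIXI formalism): a fixed interpreter that never modifies itself, yet exactly reproduces the behavior of a "self-modifying" program by treating that program's code as ordinary mutable data.

```python
# Toy sketch (hypothetical): a non-self-modifying interpreter exactly
# modeling a self-modifying program. The interpreter `run` is fixed;
# only the program-as-data changes between steps.

def run(program, state, steps):
    """Fixed interpreter. `program` is a list of rules; a rule may
    rewrite the program, but `run` itself never changes."""
    for _ in range(steps):
        pc = state["pc"] % len(program)
        program, state = program[pc](program, state)
    return state

def inc_then_swap(program, state):
    # A "self-modifying" rule: increment a counter, then replace
    # itself in the program with the decrementing rule below.
    state["x"] += 1
    state["pc"] += 1
    return [dec] + program[1:], state

def dec(program, state):
    state["x"] -= 1
    state["pc"] += 1
    return program, state

# Four steps: one increment, then three decrements, since the
# first rule rewrote itself away after its single execution.
final = run([inc_then_swap, dec], {"x": 0, "pc": 0}, steps=4)
```

The self-modification is real from the program's point of view, but from the interpreter's point of view it is just data being updated, which is the sense in which a non-self-modifying system can exactly model a self-modifying one.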
Just to be clear, as far as I can remember after reading every post on OB, no one else has posted specifically under the title "Unknown." So there's only one of me.
Prase, I think I would agree with that. But what Eliezer doesn't quite seem to be seeing is that even if mind-space in general is completely arbitrary, the people programming an AI aren't going to program something completely arbitrary. They're going to program it to use assumptions and ways of argument that they find acceptable, and so it will also draw conclusions that they find acceptable, even if it does so better than they do themselves.
Also, Eliezer's conclusion, "And then Wright converted to Christianity - yes, seriously. So you really don't want to fall into this trap!" seems to suggest that a world where the AI converts everyone to Christianity is worse than a world that the AI fills with paperclips, by implying that converting to Christianity is the worst thing that can happen to you. I wonder whether Eliezer really believes this, and would rather be made into paperclips than into a Christian.
Roko is basically right. In a human being, the code that executes when we try to decide what is right or wrong is the same type of code that executes when we try to decide what 6 times 7 is. The brain has a general pattern signifying "correctness," whatever that may be, and it uses this identical pattern to evaluate "6 times 7 is 42" and "murder is wrong."
Of course you can ask why the human brain matches "murder is wrong" to the "correctness" pattern, and you might say that it is arbitrary (or you might not). Either way, if we can program an AGI at all, it will be able to reason about ethical issues using the same code that it uses to reason about matters of fact. It is true that a mind need not work this way. But our minds do, and doubtless the first mind-programmers will imitate our minds, so their AI will do it as well.
So it is simply untrue that we have to give the AGI some special ethical programming. If we can give it understanding at all, an understanding of ethics comes packaged with it.
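The argument can be put in miniature as a toy sketch (purely hypothetical names, not anyone's actual architecture): a single general-purpose check applied to both factual and moral claims, with nothing forcing a separate "ethics module."

```python
# Toy sketch (hypothetical): one correctness-matching mechanism,
# one store of learned patterns, used identically for arithmetic
# and for moral claims.

def matches_correctness(claim, learned_patterns):
    """The same check, whatever kind of claim is being evaluated."""
    return claim in learned_patterns

# Patterns acquired by the same learning process, kept in one place.
learned_patterns = {
    "6 * 7 == 42",      # arithmetic fact
    "murder is wrong",  # moral claim: same store, same check
}

fact_ok = matches_correctness("6 * 7 == 42", learned_patterns)
ethics_ok = matches_correctness("murder is wrong", learned_patterns)
mistake = matches_correctness("6 * 7 == 49", learned_patterns)
```

Of course a real mind's "correctness" pattern is nothing like a set lookup; the sketch only shows that a single evaluation mechanism suffices in principle for both kinds of claim.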
Naturally, as Roko says, this does not imply the existence of any ghost, anymore than the fact that Deep Blue makes moves unintelligible to its programmers implies a ghost in Deep Blue.
This also gives some reason for thinking that Robin's outside view of the singularity may be correct.
Phil Goetz was not saying that all languages have the word "the." He said that the word "the" is something every ENGLISH document has in common. His criticism is that this does not mean that Hamlet is more similar to an English restaurant menu than an English novel is to a Russian novel. Likewise, Eliezer's argument does not show that we are more like petunias than like an AI.
"I mean... if an external objective morality tells you to kill babies, why should you even listen?"
This is an incredibly dangerous argument. Consider this: "I mean... if some moral argument, whatever the source, tells me to prefer 50 years of torture to any number of dust specks, why should I even listen?"
And we have seen many who literally made this argument.