Comments

Eliezer, why didn't you answer the question I asked at the beginning of the comment section of this post?

Unknown3 · 15y · -20

I would greatly prefer that there be Babyeaters, or even that I be a Babyeater myself, to the black hole scenario or a paperclipper scenario. This strongly suggests that human morality is not as unified as Eliezer believes it is... as I've said before, he will be horrified by the results of CEV.

Or the other possibility is just that I'm not human.

Unknown3 · 16y · -10

About the comments on compromise: that's why I changed my mind. The functions are so complex that they are bound to differ in their complex portions, but they also contain simplifying terms that favor compromise, so it is possible that everyone's morality will end up the same once this is taken into account.

As for the probability that Eliezer will program an AI, it might not be very low, but the probability that his will be the first is extremely low, simply because so many other people are trying.

I wonder whether Eliezer is planning to say that morality is just an extrapolation of our own desires. If so, then my morality would be an extrapolation of my desires, and your morality would be an extrapolation of yours. This is disturbing, because if our extrapolated desires don't turn out to be EXACTLY the same, something might be immoral for me to do which is moral for you to do, or moral for me and immoral for you.

If this is so, then if I programmed an AI, I would be morally obligated to program it to extrapolate my personal desires, not the desires of the human race. So Eliezer would be deceiving us about FAI: his intention is to extrapolate his personal desires, since he is morally obligated to do so. Maybe someone should stop him before it's too late?

For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility), would that make any difference to your behavior?

Some people on this blog have said that they would do something different. Others have said that they actually came to that conclusion and actually did something different. Despite these facts, we have commenters projecting themselves onto other people and saying that NO ONE would do anything different under this scenario.

Of course, people who don't think that anything is right or wrong also don't think it's wrong to accuse other people of lying without any evidence.

Once again, I most certainly would act differently if I thought that nothing was right or wrong, because there are many things that I restrain myself from doing precisely because I think they are wrong, and for no other reason (or at least for no other reason strong enough to stop me from doing them).

Pablo, according to many worlds, even if it is now raining in Oxford, yesterday "it will rain in Oxford tomorrow" and "it will not rain in Oxford tomorrow" were both equally true, or both equally false, or whatever. In any case, according to many worlds, there is no such thing as "what will happen," if this is meant to pick out some particular possibility like rain in Oxford.

Nick Tarleton, what is your definition of free will? You can't even say the concept is incoherent without a definition. According to my definition, randomness definitely gives free will.

Z.M. Davis, "I am consciously aware that 2 and 2 make 4" is not a different claim from "I am aware that 2 and 2 make 4." One can't make one claim without making the other. In other words, "I am unconsciously aware that 2 and 2 make 4" is a contradiction in terms.

If an AI were unconscious, it presumably would be a follower of Daniel Dennett; i.e., it would admit that it had no qualia, but would say that the same was true of human beings. But then it would say that it was conscious in the same sense that human beings are. Likewise, if it were conscious, it would say it was conscious. So it would say it was conscious whether it was or not.

I agree in principle that there could be an unconscious chatbot that could pass the Turing test, but it wouldn't be superintelligent.

Ben, what do you mean by "measurable"? In the zombie world, Ben Jones posts a comment on this blog, but he never notices what he is posting. In the real world, he knows what he is posting. So the difference is certainly noticeable, even if it isn't measurable. Why isn't "noticeable" enough for the difference to be a useful consideration?