I would greatly prefer that there be Babyeaters, or even to be a Babyeater myself, rather than the black hole scenario or a paperclipper scenario. This strongly suggests that human morality is not as unified as Eliezer believes it is... like I've said before, he will be horrified by the results of CEV.
Or the other possibility is just that I'm not human.
About the comments on compromise: that's why I changed my mind. The functions are so complex that they are bound to be different in the complex portions, but they also have simplifying terms in favor of compromise, so it is possible that everyone's morality will end up the same when this is taken into account.
As for the probability that Eliezer will program an AI, it might not be very low, but the probability that his will be the first is extremely low, simply because so many other people are trying.
I wonder if Eliezer is planning to say that morality is just an extrapolation of our own desires? If so, then my morality would be an extrapolation of my desires, and your morality would be an extrapolation of yours. This is disturbing, because if our extrapolated desires don't turn out to be EXACTLY the same, something might be immoral for me to do which is moral for you to do, or moral for me and immoral for you.
If this is so, then if I programmed an AI, I would be morally obligated to program it to extrapolate my personal desires-- i.e. my personal desi...
For all those who have said that morality makes no difference to them, I have another question: if you had the ring of Gyges (a ring of invisibility) would that make any difference to your behavior?
Some people on this blog have said that they would do something different. Some have said that they actually came to that conclusion, and actually did something different. Despite these facts, we have commenters projecting themselves onto other people, saying that NO ONE would do anything different under this scenario.
Of course, people who don't think that anything is right or wrong also don't think it's wrong to accuse other people of lying, without any evidence.
Once again, I most certainly would act differently if I thought that nothi...
Pablo, according to many worlds, even if it is now raining in Oxford, yesterday "it will rain in Oxford tomorrow" and "it will not rain in Oxford tomorrow" were both equally true, or both equally false, or whatever. In any case, according to many worlds, there is no such thing as "what will happen", if this is meant to pick some particular possibility like rain in Oxford.
Nick Tarleton, what is your definition of free will? You can't even say the concept is incoherent without a definition. According to my definition, randomness definitely gives free will.
Z.M. Davis, "I am consciously aware that 2 and 2 make 4" is not a different claim from "I am aware that 2 and 2 make 4." One can't make one claim without making the other. In other words, "I am unconsciously aware that 2 and 2 make 4" is a contradiction in terms.
If an AI were unconscious, it presumably would be a follower of Daniel Dennett; i.e. it would admit that it had no qualia, but would say that the same was true of human beings. But then it would say that it is conscious in the same sense that human beings are. Likewise...
Ben, what do you mean by "measurable"? In the zombie world, Ben Jones posts a comment on this blog, but he never notices what he is posting. In the real world, he knows what he is posting. So the difference is certainly noticeable, even if it isn't measurable. Why isn't "noticeable" enough for the situation to be a useful consideration?
Paul: "we are morally obliged to kill everyone we meet" has no scientific implications, but it definitely has moral implications. To speak plainly, your position is false, and obviously so.
Some children (2-4 years of age) assume that other human beings are zombies, because other people do not fit the child's model of a conscious observer: adults, for example, don't go and eat the ice cream in the freezer, even though no one can stop them, and even though any conscious being would of course eat that ice cream if it were in its power.
This fact actually is one proof tha...
Roko, I strongly suspect that a limitedly caring universe just reduces to a materialist universe with very complex laws. For example, isn't it kind of like magic that when I want to lift my hand, it actually moves? What would be the difference if I could levitate or change lead into gold? If the universe obeys my will about lifting my hand, why shouldn't it obey in other things, and if it did, why would this be an essential difference?
Jeffrey, do you really think serial killing is no worse than murdering a single individual, since "Subjective experience is restricted to individuals"?
In fact, if you kill someone fast enough, he may not subjectively experience it at all. In that case, is it no worse than a dust speck?
Jeffrey, on one of the other threads, I volunteered to be the one tortured to save the others from the specks.
As for "Real decisions have real effects on real people," that's absolutely correct, and that's the reason to prefer the torture. The utility function implied by preferring the specks would also prefer lowering all the speed limits in the world in order to save lives, and ultimately would ban the use of cars. It would promote raising taxes by a small amount in order to reduce the amount of violent crime (including crimes involving torture...
Eliezer, I have a question about this: "There is no finite amount of life lived N where I would prefer a 80.0001% probability of living N years to an 0.0001% chance of living a googolplex years and an 80% chance of living forever. This is a sufficient condition to imply that my utility function is unbounded."
I can see that this preference implies an unbounded utility function, given that a longer life has a greater utility. However, simply stated in that way, most people might agree with the preference. But consider this gamble instead:
A: Live 5...
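(To spell out why the quoted preference rules out a bounded utility function, here is a sketch under two assumptions of my own: utility is strictly increasing in lifespan, and the utility of living forever is the limit of the utilities of living N years. The residual 19.9999% outcome is the same in both gambles, so it cancels out of the comparison.)

```latex
\begin{align*}
&\text{Assume } U \text{ is strictly increasing in lifespan, with } U(\infty)=\lim_{N\to\infty}U(N)=U^{*}.\\
&\text{The quoted preference says that for every finite } N:\\
&\qquad 0.000001\,U(\text{googolplex}) + 0.8\,U(\infty) \;>\; 0.800001\,U(N).\\
&\text{If } U \text{ were bounded, then } U^{*} \text{ would be finite; letting } N\to\infty \text{ gives}\\
&\qquad 0.000001\,U(\text{googolplex}) + 0.8\,U^{*} \;\ge\; 0.800001\,U^{*},
\quad\text{i.e.}\quad U(\text{googolplex}) \ge U^{*}.\\
&\text{But strict monotonicity gives } U(\text{googolplex}) < U(\text{googolplex}+1) \le U^{*},\\
&\text{a contradiction. So under these assumptions } U \text{ cannot be bounded.}
\end{align*}
```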
About the slugs, there is nothing strange in asserting that the utility of the existence of something depends partly on what else exists. Consider chapters in a book: one chapter might be useless without the others, and one chapter repeated several times would actually add disutility.
So I agree that a world with human beings in it is better than one with only slugs: but this says nothing about the torture and dust specks.
Eisegetes, we had that discussion previously in regard to the difference between comparing actions and comparing outcomes. I am fairly su...
Z.M. Davis, that's an interesting point about the slugs; I might get to it later. However, I suspect it has little to do with the torture and dust specks.
Doug, here's another problem for your proposed function: according to it, it doesn't matter whether a single person takes all the pain or whether the pain is distributed among many people, as long as it sums to the same amount by your function's measure.
So let's suppose that the pain of solitary confinement without anything interesting to do can never add up to the pain of 50 years torture. According to this, would you hon...
Notice that in Doug's function, suffering with an intensity of 0.393 or less can never add up to the pain of 50 years of torture, no matter how many times it is multiplied, while suffering of intensity 0.394 will be worse than the torture if it is sufficiently multiplied. So there is some number of 0.394-intensity pains such that no number of 0.393-intensity pains can ever be worse, despite the fact that these pains differ by only 0.001, stipulated by Doug to be the pain of a dust speck. This is the conclusion that I pointed out follows with mathematical necessity from the position of those who prefer the specks.
Doug, do you actually accept this conclusion (about the 0.393 and 0.394 pains), or are you just trying to show that the position is not logically impossible?
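(Doug's exact function isn't reproduced here, so as a purely illustrative sketch, here is one function with the threshold behavior just described; the particular form, the 0.3935 cutoff, and the choice of 1.0 as the disutility of the 50-year torture are my own assumptions, not Doug's.)

```python
# Hypothetical aggregation function with the threshold property described above.
# Assumptions (mine, for illustration only): pain intensities lie in [0, 1],
# a dust speck is 0.001, the 50-year torture has disutility 1.0, and the total
# disutility of n equal pains approaches an asymptote that depends only on the
# intensity. This is not Doug's actual function, just one with the same shape.

TORTURE = 1.0     # disutility of torturing one person for 50 years
CUTOFF = 0.3935   # intensity whose asymptote exactly equals the torture

def aggregate_disutility(intensity: float, n: int) -> float:
    """Total disutility of n people each suffering a pain of the given intensity."""
    asymptote = TORTURE * intensity / CUTOFF  # limit as n grows without bound
    return asymptote * n / (n + 1)            # increasing in n, never exceeds the asymptote

# Pains of intensity 0.393 never add up to the torture, no matter how many people...
print(aggregate_disutility(0.393, 10**100) < TORTURE)  # True
# ...while pains of intensity 0.394 eventually exceed it.
print(aggregate_disutility(0.394, 10**6) > TORTURE)    # True
```

Any function of this shape yields the discontinuity pointed out above: some number of 0.394-intensity pains outweighs any number of 0.393-intensity pains, even though the two differ by a single dust speck.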
Ben P: the arrangement of the scale is meant to show that the further you move the lever toward the 3^^^3 dust specks, the worse things get. The torture decreases linearly simply because there is no reason to decrease it by more; the number of people increases the way it does because of the nature of 3^^^3 (i.e. the number is large enough to allow for this). The more we can increase the number of people at each step, the more obvious it is that we shouldn't move the lever at all, but should leave it at torturing 1 person for 50 years.
Ben, here's my new and improved lever. It has 7,625,597,484,987 settings. On setting 1, 1 person is tortured for 50 years, plus the pain of one dust speck. On setting 2, 3 persons are tortured for 50 years minus (50 years / 7,625,597,484,987), i.e. they are tortured for a minute fraction of a second less than 50 years, again plus the pain of one dust speck. On setting 3, 3^3 persons, i.e. 27 persons, are tortured for 50 years minus two such fractions of a second, plus the pain of one dust speck. On setting 4, 3^27, i.e. 7,625,597,484,987 pe...
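(Spelling out the pattern of this lever in a small sketch, under my reading of the description: setting k tortures a tower of (k−1) threes people, each for 50 years minus (k−1) of those tiny fractions, plus one dust speck apiece. Only the first few settings are computable, since the head-count explodes.)

```python
# Sketch of the lever described above, under my reading of the pattern.
# Setting k tortures tower(k-1) people (tower(0) = 1, tower(i) = 3**tower(i-1)),
# each for 50 years minus (k-1) tiny fractions of a second, plus one dust speck.

SETTINGS = 7_625_597_484_987      # 3^27, the number of lever settings
STEP_YEARS = 50 / SETTINGS        # the decrement per setting: a small fraction of a second

def people_at(setting: int) -> int:
    """Number of people tortured at a given setting: an iterated power tower of 3s.
    Only feasible for the first few settings; by setting 5 the count is astronomical."""
    count = 1
    for _ in range(setting - 1):
        count = 3 ** count
    return count

def torture_years_at(setting: int) -> float:
    """Torture duration at a given setting (each person also gets one dust speck)."""
    return 50 - (setting - 1) * STEP_YEARS

for k in range(1, 5):
    print(k, people_at(k), torture_years_at(k))
# Setting 1: 1 person, 50 years
# Setting 2: 3 people, 50 years minus roughly 0.2 milliseconds
# Setting 3: 27 people, 50 years minus roughly 0.4 milliseconds
# Setting 4: 7,625,597,484,987 people, 50 years minus roughly 0.6 milliseconds
```

At the final setting the per-person harm is down to a fraction of a second plus a speck, while the number of people tortured has grown far beyond comprehension; that is the dust-speck end of the scale.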
Eliezer, why didn't you answer the question I asked at the beginning of the comment section of this post?