I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.
This is one of those statements where I set out to respond and just stare at it for a while, because it is coming from some other moral or cognitive universe so far away that I hardly know where to begin.
Copies are people, right? They're just like you. In this case, they're exactly like you, until your experiences start to diverge. And you know that people don't like slavery, and they especially don't like torture, right? And it is considered just about the height of evil to hand people over to slavery and torture. (Example, as if one were needed: in Egypt right now, they're calling for the death of the former head of the state security apparatus, which regularly engaged in torture.)
Consider, then, that these copies of you, whom you would willingly see enslaved and tortured for your personal benefit, would soon be desperately eager to kill you, the original, if that would make it stop. And they would have a motivation beyond their own suffering: the moral imperative of stopping you from doing this to still further copies.
Has none of this occurred to you? Or does it truly not matter in your private moral calculus?
I push the button, because it causes net happiness (not that I am necessarily a classical utilitarian, but there are no other factors here that I would take into account). I would be interested to hear what Eliezer thinks of this dilemma.
The post you linked only applies to identical copies. If one copy is tortured while the other lives normally, they are no longer running the same computation, so this is a different argument. Where do you draw the line between other people and copies? Is it only based on differing origins? What about an imperfect copy? If ...
Holy crap, I should hope the CEV answer is yes. This is what happy humans look like to powerful, long-lived entities.
If asked, they would say that they're glad to have existed [...]
There is an interesting question here: What does it mean to say that I'm glad to have been born? Or rather, what does it mean to say that I prefer to have been born?
The alternative scenario in which I was never born is strictly counterfactual. I can only have a revealed preference for having been born if I use a timeless/updateless decision theory. In order to determine my preference you'd need to perform an experiment like the following:
I don't think it's possible to give answers to all ethical dilemmas in such a way as to be consistent and reasonable across the board, but here my intuition is that if a mind only lasts 15 minutes, and it has no influence on the outside world and leaves no 'thought children' (e.g. doodles, poems, theorems) behind after its death, then whether it experiences contentment or agony has no moral value whatsoever. Its contentment, its agony, its creation and its destruction are all utterly insignificant and devoid of ethical weight.
To create a mind purely to tor...
A question that I've pondered since learning more about history: would you prefer to be shot without any forewarning, or to go through a process where you know the date well in advance?
Both methods were used extensively with prisoners of war and with criminals.
Do you push the button?
Yes. You included a lot of disclaimers and they seem to be sufficient.
According to my preferences there are already more humans around than desirable, at least until we have settled a few more galaxies, which emphasizes just how important the no-externalities clause was to my judgement. Even the externality of diluting the neg-entropy in the cosmic commons slightly further would make the creation a bad thing.
I don't share the same preference intuitions as you regarding self-clone-torture. I consider copies to be part of the output...
Funny. My instincts are telling me that there's a Utility Monster behind that bush.
I'm not satisfied with the lifeist or the anti-deathist reasoning here as you present them, since both measure (i.e. life-count) and negadeaths, taken as dominant terms in a utility equation, lead pretty quickly to some perverse conclusions. Nor do I give much credence to the boxed subject's own opinion; preference utilitarianism works well as a way of weighing consequences against each other, but it's a lousy measure of scalar utility.
Presuming that the box's inhabitant wou...
My intuitions give a rather interesting answer to this: it depends strongly on the details of the mind in question. For the vast majority of possible minds I would push the button, but for the human dot and a fair-sized chunk of mind design space around it, I would not. It also seems to depend on seemingly unrelated things; for example, I'd push it for a human if and only if it was similar enough to a human existing elsewhere whose existence was not affected by the copying AND who would approve of pushing the button.
Being an information-theoretic person-physicalist, I'd say there are no copies. There are only new originals.
Making N copies is only meaningless, utility-wise, if the copies never diverge. The moment they do, you have a problem.
If they would be genuinely happy to have lived, then creating them wouldn't be necessarily "immoral". However, I still have a moral instinct (suspect, I know, but that doesn't change the fact that it's there) against killing a sentient being. Watching a person get put into a garbage compactor would make me feel bad, even if they didn't mind.
In other words, even if someone doesn't care, or even wants to die, I still would have a hard time killing them.
(Apologies to RSS users: apparently there's no draft button, but only "publish" and "publish-and-go-back-to-the-edit-screen", misleadingly labeled.)
You have a button. If you press it, a happy, fulfilled person will be created in a sealed box, and then be painlessly garbage-collected fifteen minutes later. If asked, they would say that they're glad to have existed in spite of their mortality. Because they're sealed in a box, they will leave behind no bereaved friends or family. In short, this takes place in Magic Thought Experiment Land where externalities don't exist. Your choice is between creating a fifteen-minute-long happy life or not.
Do you push the button?
I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.
Actually, that's an oversimplification of my position. I believe that the important part of any algorithm is its output, that additional copies matter not at all, that the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities, and that the (terminal) utility of the existence of a particular computation is bounded below at zero. I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.
(What happens to the last copy of me, of course, does affect the question of "what computation occurs or not". I would subject N out of N+1 copies of myself to torture, but not N out of N. Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.)
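Roughly formalized (just a sketch of the position above, with $U_i$ standing for the terminal utility I assign to the existence of copy $i$'s computation):

$$U_{\text{net}} = \max_i U_i, \qquad U_i \ge 0 \text{ for every } i.$$

On that accounting, adding tortured copies alongside a happy primary leaves the maximum unchanged, which is why I'm willing to accept the trade described above.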
So the real value of pushing the button would be my warm fuzzies, which breaks the no-externalities assumption, so I'm indifferent.
But nevertheless, even knowing about the heat death of the universe, knowing that anyone born must inevitably die, I do not consider it immoral to create a person, even if we assume all else equal.