But one with vastly more reality-fluid (sometimes known as "measure") and thus, as far as I can tell, moral weight.
This is very thought-provoking. Can you add clarity on your views on this point?
For instance, should I read a "vastly" in front of "moral weight" as well, as if there were a 1:1 correspondence, or should I not do that?
Is this the only moral consideration you are considering on this tier? (I.e., there may be other moral considerations, but if this is the only "vast" one, it will probably outweigh all others.)
Does the arrangement of the copies' reality-fluid matter? Omega is usually thought of as a computer, so I am considering the file system. He might have 3 copies in 1 file for resilience, as in a RAID array. Or he might have 3 copies in 3 separate files - say, Sim001.exe, Sim002.exe, and Sim003.exe having the exact same contents and sitting in the same folder. In both cases, the copies are identical. And if they are being run simultaneously and updated simultaneously, then the copies might not be able to tell which structure Omega was using. Which of these are you envisioning? (Or would it not matter? [Or do I not understand what a RAID array is?])
Some of these questions may be irrelevant, and if so, I apologize - I really am not sure I understand enough about your point to reply to it appropriately. Again, it does sound thought-provoking.
For instance, should I read a "vastly" in front of "moral weight" as well, as if there were a 1:1 correspondence, or should I not do that?
Pretty much, yeah.
Is this the only moral consideration you are considering on this tier? (I.e., there may be other moral considerations, but if this is the only "vast" one, it will probably outweigh all others.)
Well, I'm considering the torture victim's disutility, and the torturers' utility.
...Does the arrangement of the copies' reality-fluid matter? Omega is usually thought of as a computer, so I am conside...
I came up with this after watching a science fiction film, which shall remain nameless due to spoilers, where the protagonist is briefly in a similar situation to the scenario at the end. I'm not sure how original it is, but I certainly don't recall seeing anything like it before.
Imagine, for simplicity, a purely selfish agent. Call it Alice. Alice is an expected utility maximizer, and she gains utility from eating cakes. Omega appears and offers her a deal - they will flip a fair coin, and give Alice three cakes if it comes up heads. If it comes up tails, they will take one cake away from her stockpile. Alice runs the numbers, determines that the expected utility is positive, and accepts the deal. Just another day in the life of a perfectly truthful superintelligence offering inexplicable choices.
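For concreteness, here is the calculation Alice runs, as a minimal sketch. It assumes utility is linear in cakes (one cake = one utilon), which the setup implies but does not state:

```python
# Alice's expected-utility check for Omega's coin-flip deal.
# Assumption (mine, not Omega's): utility is linear in cakes.
p_heads = 0.5
ev = p_heads * 3 + (1 - p_heads) * (-1)  # heads: gain 3 cakes; tails: lose 1
print(ev)  # positive, so Alice accepts
```

The expected gain is one cake per deal, so any risk-neutral cake-maximizer takes it.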
The next day, Omega returns. This time, they offer a slightly different deal - instead of flipping a coin, they will perfectly simulate Alice once. This copy will live out her life just as she would have done in reality - except that she will be given three cakes. The original Alice, however, receives nothing. She reasons that this is equivalent to the last deal, and accepts.
(If you disagree, consider the time between Omega starting the simulation and providing the cake. What subjective odds should she give for receiving cake?)
Imagine a second agent, Bob, who gets utility from Alice getting utility. One day, Omega shows up and offers to flip a fair coin. If it comes up heads, they will give Alice - who knows nothing of this - three cakes. If it comes up tails, they will take one cake from her stockpile. He reasons as Alice did and accepts.
Guess what? The next day, Omega returns, offering to simulate Alice and give her you-know-what (hint: it's cakes). Bob reasons just as Alice did in the second deal and accepts the bargain.
Humans value each other's utility. Most notably, we value our lives, and we value each other not being tortured. If we simulate someone a billion times, and switch off one simulation, this is equivalent to risking their life at odds of 1:1,000,000,000. If we simulate someone and torture one of the simulations, this is equivalent to imposing a one-in-a-billion chance of them being tortured. Such risks are often acceptable, if enough utility is gained by success. We often risk our own lives at worse odds.
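The equivalence claimed here can be sketched numerically. The assumptions are mine: each of the N identical copies carries 1/N of the person's total reality-fluid, moral weight is linear in reality-fluid, and the disutility of one whole life lost is normalized to 1:

```python
# Sketch: shutting off one of N identical simulations vs. a 1-in-N
# gamble with the person's life. Units and linear weighting are
# assumptions for illustration, not part of the original scenario.
n_copies = 10**9
disutility_of_death = -1.0  # one whole life lost, arbitrary units

# Expected disutility of switching off one of the N running copies,
# if that copy holds 1/N of the total reality-fluid:
loss_from_shutdown = (1.0 / n_copies) * disutility_of_death

# Expected disutility of a straight one-in-a-billion gamble:
loss_from_gamble = (1.0 / n_copies) * disutility_of_death

print(loss_from_shutdown == loss_from_gamble)
```

Under these assumptions the two expected losses are identical, which is what licenses treating the shutdown as an ordinary (tiny) risk.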
If we simulate an entire society a trillion times, or 3^^^^^^3 times, or some similarly vast number, and then simulate something horrific - an individual's private harem or torture chamber or hunting ground - then the people in this simulation *are not real*. Their needs and desires are worth, not nothing, but far less than the merest whims of those who are Really Real. They are, in effect, zombies - not quite p-zombies, since they are conscious, but e-zombies - reasoning, intelligent beings that can talk and scream and beg for mercy but *do not matter*.
My mind rebels at the notion that such a thing might exist, even in theory, and yet ... if it were a similarly tiny *chance*, for similar reward, I would shut up and multiply and take it. This could be simply scope insensitivity, or some instinctual dislike of tribe members declaring themselves superior.
Well, there it is! The weirdest of Weirdtopias, I should think. Have I missed some obvious flaw? Have I made some sort of technical error? This is a draft, so criticisms will likely be incorporated into the final product (if indeed someone doesn't disprove it entirely).