MugaSofer comments on DRAFT:Ethical Zombies - A Post On Reality-Fluid - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (116)
Well, obviously this post is not aimed at you, but I must admit I am curious as to why you hold this belief. What makes "downstream" sims unworthy of ethical consideration?
Maybe I've got a different concept of 'simulation'. I consider a simulation to be fully analogous to a sufficiently well-written computer program, and I don't believe that representations of numbers are morally comparable to living creatures, even if those numbers undergo transformations completely analogous to those creatures.
Why should I care if you calculate f(x) or f'(x), where x is the representation of the current state of the universe, f() is the standard model, and f'() is the model with all the cake?
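The picture here — a universe as data, a simulation step as a pure function on that data — can be made concrete with a toy sketch (Python; the state representation and the functions are invented purely for illustration, not a claim about any real simulation):

```python
# Toy illustration: a "universe" is just data, and a simulation step is a
# pure function from state to state. All names here are made up.

def f(state):
    """Standard-model step: advance time, leave the contents alone."""
    return {**state, "tick": state["tick"] + 1}

def f_prime(state):
    """Same dynamics as f, plus a cake appears in the cupboard each step."""
    new_state = f(state)
    return {**new_state, "cupboard": new_state["cupboard"] + ["cake"]}

x = {"tick": 0, "cupboard": []}
print(f(x))        # {'tick': 1, 'cupboard': []}
print(f_prime(x))  # {'tick': 1, 'cupboard': ['cake']}
```

On this view the two runs differ only in the cake term; the question in the thread is whether anything of moral weight hangs on which of these two functions gets evaluated.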
Does that stay true if those representations are implemented in a highly distributed computer made out of organic cells?
Are you trying to blur the distinction between a simulated creature and a living one, or are you postulating a living creature which is also a simulator? I don't have moral obligation regarding my inner Slytherin beyond any obligations I have regarding myself.
I'm not so much trying to blur the distinction, as I am trying to figure out what the relevant parameters are. I started with "made of organic cells" because that's often the parameter people have in mind.
Given your clarification, I take it that "living" is the parameter you have in mind, in which case what I'm interested in is how you decide that something is a living system. For example, are you a living system? Can you be certain of that?
If you can't be certain, does it follow that there's a possibility that you don't in fact have a moral obligation to yourself (because you might not be the sort of thing to which you can have such obligations)?
If I am a number in a calculation, I privilege the simulation I am in above all others. I expect residents of all other simulations to privilege their own simulation above all others.
Being made of carbon chains isn't relevant; being made of matter rather than information or abstraction is what's important. Even if there exists a reference point from which my matter is abstract information, I, the abstract information, intrinsically value my flavor of abstraction more than any other. (There is an instrumental value to manipulating the upstream contexts, however.)
Ah, OK. Sure, I can understand local-context privileging. Thanks for clarifying.
I can't understand the lack of local-universe privilege.
Suppose that literally everything I observe is a barely imperfect simulation made by IBM, as evidenced by the observation that a particular particle interaction leaves traces which reliably read "World sim version 7.00.1.5 build 11/11/11 Copyright IBM, special thanks JKR" instead of the expected particle traces. Also, invoking certain words and gestures allows people with a certain genetic expression to break various physical laws.
Now, suppose that a golden tablet appeared before me explicitly stating that Omega has threatened the world which created our simulation. However, we, the simulation, are able to alter the terms of this threat. If a selected resident (me) of Sim-Earth decides to destroy Sim-Earth, Meta-1 Earth will suffer no consequences other than one instance of an obsolete version of one of their simulations crashing. If I refuse, then Omega will roll a fair d6, and on a result of 3 or higher will destroy Meta-1 Earth, along with all of their simulations including mine.
Which is the consequentialist thing to do? (I dodge the question by not being consequentialist; I am not responsible for Omega's actions, even if Omega tells me how to influence him. I am responsible for my own actions.)
Undefined. Legitimate and plausible consequentialist value systems can be conceived that go either way.
To prefer a 2/3 chance of the destruction of more than two existences to the certainty of the extinction of humanity in one of them is an interesting position.
Clearly, however, such a preference either incurs local privilege, or it should be just as logical to prefer the 2/3 destruction of more than everything over the certain destruction of a different simulation, one that would never interact with the one the agent experiences.
Just to make sure I understand, let me restate your scenario: there's a world ("Meta-1 Earth") which contains a simulation ("Sim-Earth"), and I get to choose whether to destroy Sim-Earth or not. If I refuse, there's a 50% chance of both Sim-Earth and Meta-1 Earth being destroyed. Right?
So, the consequentialist thing to do is compare the value of Sim-Earth (V1) to the value of Meta-1 Earth (V2): refusing loses both with probability 1/2, so destroy Sim-Earth iff (V1+V2)/2 > V1, i.e. iff V2 > V1.
You haven't said much about Meta-1 Earth, but just to pick an easily calculated hypothetical, if Omega further informs me that there are ten other copies of World sim version 7.00.1.5 build 11/11/11 running on machines in Meta-1 Earth (not identical to Sim-Earth, because there's some randomness built into the sim, but roughly equivalent), I would conclude that destroying Sim-Earth is the right thing to do if everything is as Omega has represented it.
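The expected-loss comparison above can be sketched in a few lines (Python; the utilities are hypothetical numbers chosen to match the ten-copies scenario, not anything stated in the thread):

```python
# Expected-loss comparison for the dilemma. Assumption: refusing loses both
# worlds with probability p; destroying the sim loses v_sim for certain.

def should_destroy_sim(p, v_sim, v_meta):
    """Return True iff destroying Sim-Earth has the smaller expected loss."""
    loss_if_destroy = v_sim
    expected_loss_if_refuse = p * (v_sim + v_meta)
    return loss_if_destroy < expected_loss_if_refuse

# Ten roughly equivalent sims running in Meta-1 Earth, plus whatever the
# meta-world itself is worth (here, arbitrarily, another 100 sim-units).
v_sim = 1.0
v_meta = 10 * v_sim + 100.0

print(should_destroy_sim(0.5, v_sim, v_meta))    # True (odds as restated, 1/2)
print(should_destroy_sim(4 / 6, v_sim, v_meta))  # True (odds as originally stated, 4/6)
```

With anything like ten equivalent copies on the other side of the ledger, the comparison comes out the same way under either set of odds.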
I might not actually do that, in the same way that I might not kill myself to save ten other people, or even give up my morning latte to save ten other people, but that's a different question.
Subtle distinctions. We have no knowledge about Meta-1 Earth; we only have the highly persuasive but technically circumstantial evidence provided. Omega exists in this scenario and is known by name, but he is silent on whether the inscription on the massive solid gold tablet is truthful. The doomsday button is known to be real.
What would evidence regarding the existence of M1E look like?
(Also: 4/6 chance of a 3 or higher. I don't think the exact odds are critical.)
Once again: why? Why privilege your simulation? Why not do the same for your planet? Your species? Your country? (Do you implement some of these?)
Because my simulation (if I am in one) includes all of my existence. Meanwhile, a simulation run inside this existence contains only mathematical constructs or the equivalent.
Surely you don't think that your mental model of me deserves to have its desires considered in addition to mine? You use that model of me to estimate what I value, which enters into your utility function. To also include the model's point of view is double-counting the map.
My "mental model of you" consists of little more than a list of beliefs, which I then have my brain pretend it believes. In your case, it is woefully incomplete; but even the most detailed of those models are little more than characters I play to help predict how people would really respond to them. My brain lacks the knowledge and computing power to model people on the level of neurons or atoms, and if it had such power I would refuse to use it (at least for predictive purposes).
OTOH, I don't see what the difference is between two layers of simulation just because I happen to be in one of them. Do you think they don't have qualia? Do you think they don't have souls? Do you think they are exactly the same as you, but don't care?
Does Dwarf Fortress qualify as a simulation? If so, is there a moral element to running it?
Does f'(), which is the perfect simulation function f(), modified such that a cake appears in my cupboard every night, qualify?