
Decius comments on DRAFT:Ethical Zombies - A Post On Reality-Fluid - Less Wrong Discussion

0 Post author: MugaSofer 09 January 2013 01:38PM




Comment author: Decius 14 January 2013 01:47:47PM 0 points [-]

With the caveat that I am not a simulation for the purposes of that judgement. I care only about my layer and the layers which are upstream of (simulating) me, if any.

Comment author: MugaSofer 14 January 2013 02:38:01PM -2 points [-]

Well, obviously this post is not aimed at you, but I must admit I am curious as to why you hold this belief. What makes "downstream" sims unworthy of ethical consideration?

Comment author: Decius 14 January 2013 03:35:30PM 0 points [-]

Maybe I've got a different concept of 'simulation'. I consider a simulation to be fully analogous to a sufficiently well-written computer program, and I don't believe that representations of numbers are morally comparable to living creatures, even if those numbers undergo transformations completely analogous to those creatures.

Why should I care if you calculate f(x) or f'(x), where x is the representation of the current state of the universe, f() is the standard model, and f'() is the model with all the cake?
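The f(x)/f'(x) point can be made concrete with a toy sketch. Everything here is invented for illustration (the "state" is a trivial dict, and "the model with all the cake" is parodied as one extra field); the point is only that both models are plain computations over a representation:

```python
def f(x):
    """Toy 'standard model': advance the state by one tick."""
    return {**x, "t": x["t"] + 1}

def f_prime(x):
    """Toy variant model: the same dynamics, plus cake."""
    return {**f(x), "cake": True}

x = {"t": 0}                # a stand-in for "the current state of the universe"
print(f(x))                 # {'t': 1}
print(f_prime(x))           # {'t': 1, 'cake': True}
```

On Decius's view, nothing morally relevant distinguishes running one of these functions from running the other.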

Comment author: TheOtherDave 14 January 2013 03:39:52PM 0 points [-]

I don't believe that representations of numbers are morally comparable to living creatures

Does that stay true if those representations are implemented in a highly distributed computer made out of organic cells?

Comment author: Decius 14 January 2013 03:45:41PM 0 points [-]

Are you trying to blur the distinction between a simulated creature and a living one, or are you postulating a living creature which is also a simulator? I don't have moral obligation regarding my inner Slytherin beyond any obligations I have regarding myself.

Comment author: TheOtherDave 14 January 2013 03:53:29PM 0 points [-]

I'm not so much trying to blur the distinction, as I am trying to figure out what the relevant parameters are. I started with "made of organic cells" because that's often the parameter people have in mind.

Given your clarification, I take it that "living" is the parameter you have in mind, in which case what I'm interested in is how you decide that something is a living system. For example, are you a living system? Can you be certain of that?

If you can't be certain, does it follow that there's a possibility that you don't in fact have a moral obligation to yourself (because you might not be the sort of thing to which you can have such obligations)?

Comment author: Decius 14 January 2013 04:48:41PM 0 points [-]

If I am a number in a calculation, I privilege the simulation I am in above all others. I expect residents of all other simulations to privilege their own simulation above all others.

Being made of carbon chains isn't relevant; being made of matter instead of information or an abstraction is what's important. Even if there exists a reference point from which my matter is abstract information, I, the abstract information, intrinsically value my flavor of abstraction more than any other. (There is an instrumental value in manipulating the upstream contexts, however.)

Comment author: TheOtherDave 14 January 2013 05:27:18PM 0 points [-]

Ah, OK. Sure, I can understand local-context privileging. Thanks for clarifying.

Comment author: Decius 14 January 2013 09:54:47PM 0 points [-]

I can't understand the lack of local-universe privilege.

Suppose that literally everything I observe is a barely imperfect simulation made by IBM, as evidenced by the observation that a particular particle interaction leaves traces which reliably read "World sim version 7.00.1.5 build 11/11/11 Copyright IBM, special thanks JKR" instead of the expected particle traces. Also, invoking certain words and gestures allows people with a certain genetic expression to break various physical laws.

Now, suppose that a golden tablet appeared before me explicitly stating that Omega has threatened the world which created our simulation. However, we, the simulation, are able to alter the terms of this threat. If a selected resident (me) of Sim-Earth decides to destroy Sim-Earth, Meta-1 Earth will suffer no consequences other than one instance of an obsolete version of one of their simulations crashing. If I refuse, then Omega will roll a fair d6, and on a result of 3 or higher will destroy Meta-1 Earth, along with all of their simulations including mine.

Which is the consequentialist thing to do? (I dodge the question by not being consequentialist; I am not responsible for Omega's actions, even if Omega tells me how to influence him. I am responsible for my own actions.)

Comment author: wedrifid 16 January 2013 02:50:36AM 0 points [-]

Which is the consequentialist thing to do?

Undefined. Legitimate and plausible consequentialist value systems can be conceived that go either way.

Comment author: TheOtherDave 14 January 2013 10:43:22PM 0 points [-]

Just to make sure I understand, let me restate your scenario: there's a world ("Meta-1 Earth") which contains a simulation ("Sim-Earth"), and I get to choose whether to destroy Sim-Earth or not. If I refuse, there's a 2/3 chance (a d6 result of 3 or higher) of both Sim-Earth and Meta-1 Earth being destroyed. Right?

So, the consequentialist thing to do is compare the value of Sim-Earth (V1) to the value of Meta-1 Earth (V2): destroying yields V2 for certain, while refusing yields (1/3)·(V1 + V2) in expectation. So destroy Sim-Earth iff 2·V2 > V1.

You haven't said much about Meta-1 Earth, but just to pick an easily calculated hypothetical, if Omega further informs me that there are ten other copies of World sim version 7.00.1.5 build 11/11/11 running on machines in Meta-1 Earth (not identical to Sim-Earth, because there's some randomness built into the sim, but roughly equivalent), I would conclude that destroying Sim-Earth is the right thing to do if everything is as Omega has represented it.

I might not actually do that, in the same way that I might not kill myself to save ten other people, or even give up my morning latte to save ten other people, but that's a different question.
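The expected-value comparison above can be sketched as follows. The probability comes straight from the scenario (a fair d6 destroys on 4 of its 6 faces); the specific values of V1 and V2 are placeholders, with V2 weighted to reflect the hypothetical ten other roughly equivalent sims:

```python
from fractions import Fraction

# Probability Omega destroys Meta-1 Earth if the resident refuses:
# a fair d6 comes up 3 or higher on 4 of its 6 faces.
P_DESTROY = Fraction(4, 6)

def expected_value(destroy_sim, v_sim, v_meta):
    """Expected surviving value under each choice (illustrative values only)."""
    if destroy_sim:
        return v_meta                            # Sim-Earth lost; Meta-1 certainly safe
    return (1 - P_DESTROY) * (v_sim + v_meta)    # everything survives only on a 1 or 2

# Placeholder values: Sim-Earth worth 1; Meta-1 Earth plus its ten other sims worth 11.
v_sim, v_meta = 1, 11
print(expected_value(True, v_sim, v_meta))   # 11
print(expected_value(False, v_sim, v_meta))  # (1/3) * 12 = 4
```

With these numbers, destroying Sim-Earth dominates, matching the conclusion in the comment; the comparison flips only when V1 exceeds 2·V2.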

Comment author: MugaSofer 14 January 2013 05:30:31PM *  -2 points [-]

Once again: why? Why privilege your simulation? Why not do the same for your planet? Your species? Your country? (Do you implement some of these?)

Comment author: Decius 14 January 2013 09:31:24PM 0 points [-]

Because my simulation (if I am in one) includes all of my existence. Meanwhile, a simulation run inside this existence contains only mathematical constructs or the equivalent.

Surely you don't think that your mental model of me deserves to have its desires considered in addition to mine? You use that model of me to estimate what I value, which enters into your utility function. To also include the model's point of view is double-counting the map.

Comment author: MugaSofer 15 January 2013 09:59:41AM -2 points [-]

My "mental model of you" consists of little more than a list of beliefs, which my brain then pretends to believe. In your case, it is woefully incomplete; but even the most detailed of those models are little more than characters I play to help predict how people would really respond. My brain lacks the knowledge and computing power to model people on the level of neurons or atoms, and if it had such power I would refuse to use it (at least for predictive purposes).

OTOH, I don't see what the difference is between two layers of simulation just because I happen to be in one of them. Do you think they don't have qualia? Do you think they don't have souls? Do you think they are exactly the same as you, but don't care?