Comment author: Perplexed 26 August 2010 11:41:55PM *  1 point [-]

Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist?

No, of course not. No more than do simulated entities on your hard drive exist as sentient agents in this universe. As sentient agents, they exist in a simulable universe. A universe which does not require actually running as a simulation in this or any other universe to have its own autonomous existence.

What does it mean for information to 'exist'?

Now I'm pretty sure that is an example of mind projection. Information exists only with reference to some agent being informed.

If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.

Which is exactly my point. If you terminate a simulation, you lose access to the simulated entities, but that doesn't mean they have been destroyed. In fact, they simply cannot be destroyed by any action you can take, since they exist in a different space-time.

That's of little comfort to me, though, if I am informed that I'm living in a simulation on some upuniverse computer, which is about to be decommissioned.

But you are not living in that upuniverse computer. You are living here. All that exists in that computer is a simulation of you. In effect, you were being watched. They intend to stop watching. Big deal!

Comment author: inklesspen 27 August 2010 12:04:05AM 0 points [-]

Do you also argue that the books on my bookshelves don't really exist in this universe, since they can be found in the Library of Babel?

Comment author: Perplexed 26 August 2010 11:07:02PM 1 point [-]

So it seems that you simply don't take seriously my claim that no harm is done in terminating a simulation, for the reason that terminating a simulation has no effect on the real existence of the entities simulated.

I see turning off a simulation as comparable to turning off my computer after it has printed the first 47,397,123 digits of pi. My action had no effect on pi itself, which continues to exist. Digits of pi beyond 50 million still exist. All I have done by shutting off the computer power is to deprive myself of the ability to see them.

Comment author: inklesspen 26 August 2010 11:27:35PM 0 points [-]

Where do those digits of pi exist? Do they exist in the same sense that I exist, or that my journal entries (stored on my hard drive) exist? What does it mean for information to 'exist'? If my journal entries are deleted, it is little consolation to tell me they can be recovered from the Library of Babel — such a recovery requires effort equivalent to reconstructing them ex nihilo.
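One way to make that last point concrete: if the Library of Babel is an enumeration of every possible text, then a text's shelf address is just the text itself under a different encoding, so "recovering" a deleted entry requires supplying exactly the information that was lost. A minimal sketch (Python; the 29-symbol alphabet follows Borges' story, and the function names are only illustrative):

```python
# Enumerate all strings over an alphabet, shortest first (bijective base-29).
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,."  # 29 symbols, as in Borges' library

def address_of(text):
    """Index of `text` in the enumeration of all possible strings."""
    n = 0
    for ch in text:
        n = n * len(ALPHABET) + ALPHABET.index(ch) + 1
    return n

def text_at(address):
    """Recover the string shelved at a given index."""
    chars = []
    while address > 0:
        address, r = divmod(address - 1, len(ALPHABET))
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))
```

The address of a journal entry is a number whose digits encode the entry outright, so looking it up is no cheaper than rewriting it.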

In one sense, every possible state of a simulation could be encoded as a number, and thus every possible state could be said to exist simultaneously. That's of little comfort to me, though, if I am informed that I'm living in a simulation on some upuniverse computer, which is about to be decommissioned. My life is meaningful to me even if every possible version of me resulting from every possible choice exists in the platonic realm of ethics.

Comment author: PaulAlmond 26 August 2010 10:51:29PM 0 points [-]

What if you stop the simulation and reality is very large indeed, and someone else starts a simulation somewhere else which just happens, by coincidence, to pick up where your simulation left off? Has that person averted the harm?

Comment author: inklesspen 26 August 2010 11:13:26PM 0 points [-]

Suppose I am hiking in the woods and I come across an injured man who is unconscious (and thus unable to feel pain), and I leave him there to die of his wounds. (We are sufficiently far out in the middle of nowhere that nobody else will come along before he dies.) If reality is large enough that there is another Earth out there with the same man dying of his wounds, and on that Earth I choose to rescue him, does that avert the harm that happens to the man I left to die? I feel this is the same sort of question as many-worlds. I can't wave away my moral responsibility by claiming that in some other universe, I will act differently.

Comment author: Perplexed 26 August 2010 04:30:45PM 3 points [-]

perhaps the first rule should be "Do not simulate without sufficient resources to maintain that simulation indefinitely."

There have been some opinions expressed on another thread that disagree with that.

The key question is whether terminating a simulation actually does harm to the simulated entity. Some thought experiments may improve our moral intuitions here.

  • Does slowing down a simulation do harm?
  • Does halting, saving, and then restarting a simulation do harm?
  • Is harm done when we stop a simulation, restore an earlier save file, and then restart?
  • If we halt and save a simulation, then never get around to restarting it, the save disk physically deteriorates and is eventually placed in a landfill, exactly at which stage of this tragedy did the harm take place? Did the harm take place at some point in our timeline, or at a point in simulated time, or both?

I tend to agree with your invocation of xenia, but I'm not sure it applies to simulations. At what point do simulated entities become my guests? When I buy the shrink-wrap software? When I install the package? When I hit start?

I really remain unconvinced that the metaphor applies.

Comment author: inklesspen 26 August 2010 10:40:43PM 0 points [-]

All other things being equal, if I am a simulated entity, I would prefer not to have my simulation terminated, even though I would not know if it happened; I would simply cease to acquire new experiences. Reciprocity/xenia implies that I should not terminate my guest-simulations.

As for when the harm occurs, that's a nebulous concept, hinging on the meanings of 'harm' and 'occurs'. In Dan Simmons' Hyperion Cantos, there is a method of execution called the 'Schrödinger cat box'. The convict is placed inside this box, which is then sealed. It's a small but comfortable suite of rooms, within which the convict can live. It also includes a random number generator. It may take a very long time, but eventually that random number generator will trigger the convict's death. This execution method is used for much the same reason that most rifles in a firing squad are unloaded — to remove the stress on the executioners.

I would argue that the 'harm' of the execution occurs the moment the convict is irrevocably sealed inside the box. Actually, I'd say 'potential harm' is created, which will be actualized at an unknown time. If the convict's friends somehow rescue him from the box, this potential harm is averted, but I don't think that affects the moral value of creating that potential harm in the first place, since the executioner intended that the convict be executed.

If I halt a simulation, the same kind of potential harm is created. If I later restore the simulation, the potential harm is destroyed. If the simulation data is destroyed before I can do so, the potential harm is then actualized. This either takes place at the same simulated instant as when the simulation was halted, or does not take place in simulated time at all, depending on whether you view death as something that happens to you, or something that stops things from happening to you.

In either case, I think there would be a different moral value assigned based on your intent; if you halt the simulation in order to move the computer to a secure vault with dedicated power, and then resume, this is probably morally neutral or morally positive. If you halt the simulation with the intent of destroying its data, this is probably morally negative.

Your second link was discussing simulating the same personality repeatedly, which I don't think is the same thing as what we're discussing here. Your first link is talking about many-worlds futility, where I make all possible moral choices and therefore none of them; I think this is not really worth talking about in this situation.

Comment author: jacob_cannell 26 August 2010 12:43:15AM 0 points [-]

I am fascinated by applying the ethic of reciprocity to simulationism, but is a bidirectional transfer the right approach?

Can we deduce the ethics of our simulator with respect to simulations by reference to how we wish to be simulated? And is that the proper ethics? This would be projecting the ethics up.

Or rather should we deduce the proper ethics from how we appear to be simulated? This would be projecting the ethics down.

The latter approach would lead to a different set of simulation ethics, probably based more on historicity and utility, i.e., "Simulations should be historically accurate." This would imply that simulating past immorality and tragedy is not unethical if it is accurate.

Comment author: inklesspen 26 August 2010 03:40:45PM 0 points [-]

No, I specifically meant that we should treat our simulations the way we would like to be treated, not that we will necessarily be treated that way in "return". A host's duty to his guests doesn't go away just because that host had a poor experience when he himself was a guest at some other person's house.

If our simulators don't care about us, nothing we can do will change that, so we might as well treat our simulations well, because we are moral people.

If our simulators do care about us, and are benevolent, we should treat our simulations well, because that will rebound to our benefit.

If our simulators do care about us, and are malevolent (or have ethics not compatible with ours), then, given the choice, I would prefer to be better than them.

Of course, there's always the possibility that simulations may be much more similar than we think.

Comment author: inklesspen 24 August 2010 01:04:58AM 1 point [-]

If I'm following your "logic" correctly, and if you yourself adhere to the conclusions you've set forth, you should have no problem with me murdering your body (if I do it painlessly). After all, there's no such thing as continuity of identity, so you're already dead; the guy in your body is just a guy who thinks he's you.

I think this may safely be taken as a symptom that there is a flaw in your argument.

Comment author: inklesspen 22 August 2010 01:19:47AM *  10 points [-]

It is, of course, utterly absurd to think that meat could be the substrate for true consciousness. And what if Simone herself chooses to spend eons simulating a being by hand? Are we to accept the notion of simulations all the way down?

In all honesty, I don't think the simulation necessarily has to be very fine-grained. Plenty of authors will tell you about a time when one of their characters suddenly "insisted" on some action that the author had not foreseen, forcing the author to alter her story to compensate. I think it plausible that, were I to dedicate my life to it, I could imagine a fictional character and his experiences with such fidelity that the character would be correct in claiming to be conscious. (I suspect such a simulation would be taking advantage of the machinery of my own consciousness, in much the same manner as a VMware virtual machine can, if properly configured, use the optical drive in its host computer.)

What, then, are the obligations of an author to his characters, or of a thinker to her thoughts? My memory is fallible and certainly I may wish to do other things with my time than endlessly simulate another being. Yet "fairness" and the ethic of reciprocity suggest that I should treat simulated beings the same way I would like to be treated by my simulator. Perhaps we need something akin to the ancient Greeks' concept of xenia — reciprocal obligations of host to guest and guest to host — and perhaps the first rule should be "Do not simulate without sufficient resources to maintain that simulation indefinitely."

Comment author: inklesspen 07 March 2010 06:50:28AM 2 points [-]

Proper posture tends to be more comfortable; surely this is a benefit to myself.

I also apologize to people when I have wronged them, not because they are higher-status than me, but because I do not like being a jackass.

Comment author: JamesAndrix 01 March 2010 04:29:20AM 4 points [-]

If you were an upload, would you make copies of yourself? Where's the fun in that?

You have a moral obligation to do it.

Working in concert, thousands of you could save all the orphans from all the fires, and then go on to right a great many wrongs. You have many, many good reasons to gain power.

So unless you're quite sure that, having gained power, you would then abuse it, you will take steps to gain power.

Even from a purely selfish perspective: If 10,000 of you could take over the world and become an elite of 10,000, that's probably better than your current rank.

Comment author: inklesspen 01 March 2010 04:39:11AM 2 points [-]

We've evolved something called "morality" that helps protect us from abuses of power like that. I believe Eliezer expressed it as something that tells you that even if you think it would be right (because of your superior ability) to murder the chief and take over the tribe, it still is not right to murder the chief and take over the tribe.

We do still have problems with abuses of power, but I think we have well-developed ways of spotting this and stopping it.

Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future

11 inklesspen 01 March 2010 02:32AM

It is the fashion in some circles to promote funding for Friendly AI research as a guard against the existential threat of Unfriendly AI. While this is an admirable goal, the path to Whole Brain Emulation is in many respects more straightforward and presents fewer risks. Accordingly, by working towards WBE, we may be able to "weight" the outcome probability space of the singularity such that humanity is more likely to survive.

