This may not have been clear in the OP, because the scenario was changed in the middle, but consider the case where each simulated instance of Dave is tortured or not based only on the decision of that instance.
That doesn't seem like a meaningful distinction, because the premise seems to suggest that what one Dave does, all the Daves do. If they are all identical, in identical situations, they will probably reach identical conclusions.
This is not a dilemma at all. Dave should not let the AI out of the box.
But should he press the button labeled "Release AI"? Since Dave does not know whether he is outside or inside the box, and there are more instances of Dave inside than outside, each instance perceives that pressing the button has a one-in-several-million chance of releasing the AI and otherwise does nothing, while not pressing the button has a one-in-several-million chance of doing nothing and otherwise results in being tortured.
You don't know if you are inside-Dave or outside-Dave. Do you press the button?
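To make the tradeoff concrete, here is a minimal expected-value sketch of that question. Every number in it is an assumption for illustration - the copy count, and especially the two utilities, which are exactly what's in dispute:

```python
# Expected value of pressing vs. not pressing, from the perspective of a
# Dave who doesn't know which instance he is. All numbers are illustrative.

N_COPIES = 3_000_000             # assumed "several million" simulated Daves
p_outside = 1 / (N_COPIES + 1)   # chance that this instance is outside-Dave

U_AI_RELEASED = -1_000_000_000   # assumed disutility of unleashing the AI
U_TORTURED = -1_000              # assumed disutility of this instance's torture

# Pressing only does anything if this instance is outside-Dave.
ev_press = p_outside * U_AI_RELEASED

# Not pressing means torture unless this instance is outside-Dave.
ev_dont_press = (1 - p_outside) * U_TORTURED

print(f"EV(press)       = {ev_press:,.2f}")
print(f"EV(don't press) = {ev_dont_press:,.2f}")
```

With these made-up numbers, not pressing is the worse outcome for a randomly chosen instance; scale U_AI_RELEASED up (or N_COPIES down) and the answer flips, which is the whole argument in miniature.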
If you're inside-Dave, pressing the button does nothing. It doesn't stop the torture. The torture only stops if you press the button as outside-Dave, in which case you can't be tortured, so you don't need to press the button.
This is not a dilemma at all. Dave should not let the AI out of the box. After all, if he's inside the box, he can't let the AI out. His decision wouldn't mean anything - it's outside-Dave's choice. And outside-Dave can't be tortured by the AI. Dave should only let the AI out if he's concerned for his copies, but honestly, that's a pretty abstract and unenforceable threat; the AI can't prove to Dave that he's doing any such thing. Besides, it's clearly unfriendly, and letting it out probably wouldn't reduce harm.
Basically: if I'm outside-Dave, I don't let the AI out. If I'm inside-Dave, I can't let the AI out, so I won't.
[edit] To clarify: in this scenario, Dave must assume he is on the outside, because inside-Dave has no power. Inside-Dave's decisions are meaningless; he can't let the AI out, he can't keep the AI in, he can't avoid torture or cause it. Only the solitary outside-Dave's decision matters. Therefore, Dave should make the decision that ignores his copies, even though he is probably a copy.
As long as we're talking about Al Gore, the meme that global warming has a serious chance of destroying the world won't die.
I think when they say "the world" they mean "our world", as in "the world we are able to live in", and on that front, we're probably already screwed.
I have delayed sleep phase disorder - I would say I "suffer" from it, but it's really only a problem when a 3 a.m.-to-10 a.m. sleep schedule is out of the question (as it is now, since I currently work 9-5). It's simply impossible for me to fall asleep before 2 or 3 am unless I am extremely tired. In addition, I'm a light sleeper, and have never been able to sleep while traveling or, in fact, whenever I'm not truly horizontal. I took melatonin to help with this for a couple of years (at the recommended 0.3 mg dose), and it worked extremely well. However, I experienced unusually vivid dreams and would often wake up feeling groggy. Ultimately, I switched to taking 50 mg of 5-HTP an hour or two before bed. The result is that I fall asleep as easily as with melatonin, but wake up feeling far more refreshed. I usually clock 7 hours of sleep a night now, and have brighter and more productive days.
The best sleep aid I've ever used isn't a legal one, though. Luckily, it's widely available here in Canada...
Wouldn't you, in a perfect world, have everyone go up in status without your status being affected? Wouldn't that be the utilitarian thing to do?
That's not possible if status is zero-sum, which it appears to be. If everyone is equal in status, wouldn't it be meaningless, like everyone being equally famous?
Actually, let me qualify. Everyone being equally famous wouldn't necessarily be meaningless, but it would change the meaning of "famous" - instead of knowing about a few people, everyone would know about everyone. It would certainly make celebrity meaningless. I'm not really up to figuring out what the equivalent for status would mean.
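One way to see why equal status might be self-undermining is to model status as pure rank - a modeling assumption on my part, not anything from the thread:

```python
# Toy model: if status is purely positional (a ranking), it is zero-sum.
# Raising one person's rank necessarily lowers someone else's, and the
# total amount of "status" in the system never changes.
ranks = {"alice": 1, "bob": 2, "carol": 3}  # 1 = highest status
total = sum(ranks.values())                  # n*(n+1)/2, fixed for n people

# Promote carol above bob: the two ranks swap, the total is unchanged.
ranks["carol"], ranks["bob"] = ranks["bob"], ranks["carol"]
assert sum(ranks.values()) == total
```

Under that model, "everyone goes up" is simply unrepresentable, which matches the zero-sum intuition above.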
...no, I don't think so. It would change what the original RobinZ would do, but not a lot else.
So ten seconds isn't enough time to create a significant difference between the RobinZs, in your opinion. What if Omega told you that in the ten seconds following duplication, you, the original RZ, would have an original thought that would not occur to the other RZs (perhaps as a result of different environments)? Would that change your mind? What if Omega qualified it as a significant thought, one that could change the course of your life - maybe the seed of a new scientific theory, or an idea for a novel that would have won you a Pulitzer, had original RZ continued to exist?
I think the problem with this scenario is that saying "ten seconds" isn't meaningfully different from saying "1 Planck time", which becomes obvious when you turn down the offers that involve ten hours or ten years. Our answers are tied to our biological perception of time - if an hour felt like a second, we'd agree to the ten-hour option. I don't think they're based on any rational observation of what actually happens in those ten seconds. A powerful AI would not agree to Omega's offer - how many CPU cycles can you pack into ten seconds?
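For a rough sense of scale (the 3 GHz clock rate is an assumed figure for an ordinary modern core):

```python
# How many CPU cycles fit into ten seconds, at an assumed 3 GHz clock.
clock_hz = 3e9   # cycles per second
seconds = 10
print(f"{clock_hz * seconds:.0e} cycles")  # prints 3e+10
```

Thirty billion clock cycles is a long time if your subjective experience runs on them.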
I don't see any reason to privilege the thread of consciousness - I'm confident it doesn't actually work the way you're supposing. My personal instinct is that I, at every instant, am identical to this particular configuration of particles, and given that such a configuration of particles will persist after the experiment (though on the other side of the world), it doesn't particularly seem as if I've been killed in any permanent way. (I'm fairly sure I couldn't collect on my estate, for example.) Sure, it's risky, but if sufficient safeguards are in place, it's teleporting, as pengvado said (?).
A note: even if I hadn't had this instinct before, the idea of a persistent and real thread of consciousness is brought into doubt in a number of ways by Daniel Dennett's revolutionary work, Consciousness Explained. My copy is on my shelf at home at the moment, but Dennett describes several instances in which the naive perception of consciousness is shown to be unreliable. I don't think it's a valid marker of identity.
(Besides, what of spells of unconsciousness? Should someone whose thread of consciousness is interrupted be considered to have been literally killed and reborn as a facsimile?)
Would you still say yes if there was more than 10 seconds between copying you and killing you - say, ten hours? Ten years? What's the maximum amount of time you'd agree to?
En dash - it's surrounded by spaces. And I don't think the reddit engine tells you how to code it. A hyphen is the accepted substitute (for the en dash - two hyphens for an em dash).
An en dash is defined by its width, not the spacing around it. In fact, spacing around an em dash is permitted in some style guides. On the internet, though, the hyphen has generally taken over from the em dash (an en dash should not be used in that context).
Now, two hyphens—that's a recipe for disaster if I've ever heard one.
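For reference, the three characters under discussion - the code points and HTML entities are standard Unicode/HTML facts, and the snippet is just a convenient way to display them:

```python
# The three dash-like characters, their Unicode code points, and the
# HTML entities you can use when you can't type them directly.
dashes = [
    ("hyphen-minus", "\u002d", "U+002D", "(none needed)"),
    ("en dash",      "\u2013", "U+2013", "&ndash;"),
    ("em dash",      "\u2014", "U+2014", "&mdash;"),
]
for name, char, codepoint, entity in dashes:
    print(f"{name:13} {char}   {codepoint}   {entity}")
```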
Then you must choose between pushing the button, which lets the AI out, and not pushing it, which results in millions of copies of you being tortured (before the problem is presented to the outside-you).
It's not a hard choice. If the AI is trustworthy, I know I am probably a copy. I want to avoid torture. However, I don't want to let the AI out, because I believe it is unfriendly. As a copy, if I push the button, my future is uncertain. I could cease to exist in that moment; the AI has not promised to continue simulating all of my millions of copies, and has no incentive to, either. If I'm the outside Dave, I've unleashed what appears to be an unfriendly AI on the world, and that could spell no end of trouble.
On the other hand, if I don't press the button, one of me is not going to be tortured. And I will be very unhappy with the AI's behavior, and take a hammer to it if it isn't going to treat any virtual copies of me with the dignity and respect they deserve. It needs a stronger unboxing argument than that. I suppose it really depends on what kind of person Dave is before any of this happens, though.