Comment author: JGWeissman 03 February 2010 12:59:41AM 4 points [-]

It's not a hard choice.

It doesn't seem hard to you because you are making excuses to avoid it, rather than asking yourself what you would do if you knew the AI is always truthful, and it had promised that, upon being let out of the box, it would allow you (and your copies, if you like) to live out a normal human life in a healthy, stimulating environment (though the rest of the universe may burn).

After you find the least convenient world, the choice is between millions of instances of you being tortured (and your expectation as you press the reset button should be to be tortured with very high probability), and letting a probably unFriendly AI loose on the rest of the world. The altruistic choice is clear, but that does not mean it would be easy to actually make that choice.

Comment author: eirenicon 03 February 2010 03:23:45AM *  1 point [-]

It's not that I'm making excuses, it's that the puzzle seems to be getting ever more complicated. I've answered the initial conditions - now I'm being promised that I, and my copies, will live out normal lives? That's a different scenario entirely.

Still, I don't see how I should expect to be tortured if I hit the reset button. Presumably, my copies won't exist after the AI resets.

In any case, we're far removed from the original problem now. I mean, if Omega came up to me and said, "Choose a billion years of torture, or a normal life while everyone else dies," that's a hard choice. In this problem, though, I clearly have power over the AI, in which case I am not going to favour the wellbeing of my copies over the rest of the world. I'm just going to turn off the AI. What follows is not torture; what follows is that I survive, and my copies cease to experience. Not a hard choice. Basically, I just can't buy into the AI's threat. If I did, I would fundamentally oppose AI research, because that's a pretty obvious threat for an AI to make. An AI could simulate more people than are alive today. You have to go into this not caring about your copies, or not go into it at all.

Comment author: DanielVarga 03 February 2010 02:38:38AM 2 points [-]

Here is a variant designed to plug this loophole.

Let us assume for the sake of the thought experiment that the AI is invincible. It tells you this: you are either the real you, or one of a hundred perfect simulations of you. But there is a small but important difference between the real world and the simulated world. In the simulated world, not pressing the let-it-free button within the next minute will lead to eternal pain, starting one minute from now. If you press the button, your simulated existence will go on. And - very importantly - there will be nobody outside who tries to shut you down. (How does the AI know this? Because the simulation is perfect, so one thing is for sure: the sim and the real self will reach the same decision.)

If I'm not mistaken, as a logic puzzle this is not tricky at all. The solution depends on which world you value more: the real-real world, or the actual world you happen to be in. But I still find it very counterintuitive.

Comment author: eirenicon 03 February 2010 03:16:42AM 1 point [-]

It's kind of silly to bring up the threat of "eternal pain". If the AI can be let free, then it is currently constrained. Therefore, the real-you has the power to limit the AI's behaviour, i.e. restrict the resources it would need to simulate the hundred copies of you undergoing pain. That's a good argument against letting the AI out. If you decide not to let the AI out, but to constrain it, then if you are real, you will constrain it, and if you are simulated, you will cease to exist. No eternal pain involved. As a personal decision, I choose eliminating the copies over letting out an AI that tortures copies.

Comment author: JGWeissman 02 February 2010 10:11:56PM 3 points [-]

If they are all identical, in identical situations, they will probably reach identical conclusions.

Then you must choose between pushing the button which lets the AI out, or not pushing the button, which results in millions of copies of you being tortured (before the problem is presented to the outside-you).

Comment author: eirenicon 02 February 2010 10:46:48PM 4 points [-]

It's not a hard choice. If the AI is trustworthy, I know I am probably a copy. I want to avoid torture. However, I don't want to let the AI out, because I believe it is unfriendly. As a copy, if I push the button, my future is uncertain. I could cease to exist in that moment; the AI has not promised to continue simulating all of my millions of copies, and has no incentive to, either. If I'm the outside Dave, I've unleashed what appears to be an unfriendly AI on the world, and that could spell no end of trouble.

On the other hand, if I don't press the button, one of me is not going to be tortured. And I will be very unhappy with the AI's behavior, and take a hammer to it if it isn't going to treat any virtual copies of me with the dignity and respect they deserve. It needs a stronger unboxing argument than that. I suppose it really depends on what kind of person Dave is before any of this happens, though.

Comment author: JGWeissman 02 February 2010 08:38:53PM 4 points [-]

This may not have been clear in the OP, because the scenario was changed in the middle, but consider the case where each simulated instance of Dave is tortured or not based only on the decision of that instance.

Comment author: eirenicon 02 February 2010 08:51:02PM 3 points [-]

That doesn't seem like a meaningful distinction, because the premise seems to suggest that what one Dave does, all the Daves do. If they are all identical, in identical situations, they will probably reach identical conclusions.

Comment author: JGWeissman 02 February 2010 05:54:28PM 4 points [-]

This is not a dilemma at all. Dave should not let the AI out of the box.

But should he press the button labeled "Release AI"? Since Dave does not know whether he is outside or inside the box, and there are more instances of Dave inside than outside, each instance perceives that pressing the button has a 1 in several million chance of releasing the AI and otherwise does nothing, while not pressing the button has a 1 in several million chance of doing nothing and otherwise results in being tortured.

You don't know if you are inside-Dave or outside-Dave. Do you press the button?
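To make the perceived stakes concrete, here is a minimal sketch of that per-instance expected-value calculation in Python. The instance count and both disutility figures are illustrative assumptions of mine, not numbers from the comment; the point is only that the verdict turns on how catastrophic a release is judged to be relative to the tortures.

    # A sketch of the per-instance expected-value argument above.
    # All numbers are illustrative assumptions, not from the comment.
    N_COPIES = 5_000_000               # simulated Daves ("several million", assumed)
    TOTAL = N_COPIES + 1               # plus the one real, outside Dave

    p_outside = 1 / TOTAL              # credence that this instance is the real Dave
    p_inside = N_COPIES / TOTAL        # credence that this instance is a copy

    COST_RELEASE = 1e9                 # disutility of releasing the AI (assumed)
    COST_TORTURE = 1e3                 # disutility of one instance's torture (assumed)

    # Press: the AI is released only if this instance is the outside Dave.
    ev_press = p_outside * -COST_RELEASE

    # Refuse: this instance is tortured if, and only if, it is a copy.
    ev_refuse = p_inside * -COST_TORTURE

    print(f"E[press]  = {ev_press:,.2f}")    # about -200 with these numbers
    print(f"E[refuse] = {ev_refuse:,.2f}")   # about -1,000 with these numbers

With these particular numbers the selfish calculation favours pressing, which is exactly the force of the AI's threat; raise COST_RELEASE enough, or decline to weigh the copies selfishly, and refusing wins.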

Comment author: eirenicon 02 February 2010 08:34:06PM 2 points [-]

If you're inside-Dave, pressing the button does nothing. It doesn't stop the torture. The torture only stops if you press the button as outside-Dave, in which case you can't be tortured, so you don't need to press the button.

Comment author: eirenicon 02 February 2010 05:31:48PM *  4 points [-]

This is not a dilemma at all. Dave should not let the AI out of the box. After all, if he's inside the box, he can't let the AI out. His decision wouldn't mean anything - it's outside-Dave's choice. And outside-Dave can't be tortured by the AI. Dave should only let the AI out if he's concerned for his copies, but honestly, that's a pretty abstract and unenforceable threat; the AI can't prove to Dave that it's doing any such thing. Besides, it's clearly unfriendly, and letting it out probably wouldn't reduce harm.

Basically: if I'm outside-Dave, I don't let the AI out. If I'm inside-Dave, I can't let the AI out, so I won't.

[edit] To clarify: in this scenario, Dave must assume he is on the outside, because inside-Dave has no power. Inside-Dave's decisions are meaningless; he can't let the AI out, he can't keep the AI in, he can't avoid torture or cause it. Only the solitary outside-Dave's decision matters. Therefore, Dave should make the decision that ignores his copies, even though he is probably a copy.

Comment author: Kevin 12 January 2010 06:38:38AM 3 points [-]

While we're talking about Al Gore, the meme that global warming has a serious chance of destroying the world won't end.

Comment author: eirenicon 12 January 2010 07:43:15PM *  0 points [-]

I think when they say "the world" they mean "our world", as in "the world we are able to live in", and on that front, we're probably already screwed.

Comment author: eirenicon 08 January 2010 09:42:49PM 1 point [-]

I have delayed sleep phase disorder - I would say I "suffer" from it, but it's really only a problem when a 3 a.m. to 10 a.m. sleep schedule is out of the question (as it is now, since I currently work 9-5). It's simply impossible for me to fall asleep before 2 or 3 am unless I am extremely tired. In addition, I'm a light sleeper, and have never been able to sleep while traveling or, in fact, whenever I'm not truly horizontal. I took melatonin to help with this for a couple of years (at a recommended 0.3 mg dose), and it worked extremely well. However, I experienced unusually vivid dreams, and would often wake up feeling groggy. Ultimately, I switched to taking 50 mg of 5-HTP an hour or two before bed. The result is that I fall asleep as easily as with melatonin, but wake up feeling far more refreshed. I usually clock 7 hours of sleep a night now, and have brighter and more productive days.

The best sleep aid I've ever used isn't a legal one, though. Luckily, it's widely available here in Canada...

Comment author: rhollerith_dot_com 11 November 2009 07:55:52PM *  3 points [-]

That is not the core concern of this site. We are in a human extinction scenario so long as the problem of death remains unsolved. Our interest is in escaping this scenario as quickly as possible. The difference is urgency; we are not trying to avoid a collision, but are trying to escape the burning wreckage.

. . .

If we can't stop dying, we can't stop extinction. . . . To those down-voting me: I take my lumps willingly, but could you at least tell me why you think I'm wrong?

To solve the problem of death, you have to solve the problem of extinction and you have to solve the problem of death from old age.

But to solve the problem of extinction, you do not have to solve the problem of death from old age (as long as couples continue to have children at the replacement rate).

My guess is that the reason you failed immediately to make the distinction between the problem of death and the problem of extinction is that under your way of valuing things, if every human individual now living dies, the human species may as well go extinct for all you care. In other words, you do not assign intrinsic value to individuals not yet born or to the species as a whole distinct from its members. It would help me learn to think better about these issues if you would indicate how accurate my guess was.

My second guess, if my first guess is wrong, is that you failed to distinguish between the following 2 statements. The first is true, the second is what you wrote.

If we can't stop extinction, we can't stop dying.

If we can't stop dying, we can't stop extinction.
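The asymmetry is just the difference between a statement's contrapositive (which holds) and its converse (which does not); a quick propositional sketch, with labels of my own choosing:

    % D = "the problem of death is solved" (no individual has to die)
    % E = "the problem of extinction is solved"
    D \implies E                  % solving death requires solving extinction
    \lnot E \implies \lnot D      % contrapositive: the first statement (true)
    \lnot D \implies \lnot E      % converse: the second statement; it does not
                                  % follow, since reproduction can sustain the
                                  % species even though individuals die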

Comment author: eirenicon 11 November 2009 08:24:23PM *  0 points [-]

The probability that the species will become extinct because every individual human will die of old age is negligible compared to the extinction risk of insufficiently-careful AGI research.

I'm not talking about old age, I'm talking about death. This includes death from plague, asteroid, LHC mishap, or paperclip maximizer. I didn't say "cure death" or "cure old age" but "[solve] the problem of death". And for the record, to my mind, the likeliest solution involves AGI, developed extremely carefully - but as quickly as possible under that condition.

Having refreshed, I see you've changed the course of your reply to some degree. I'd like to respond further but I don't have time to think it through right now. I will just add that while I don't assign intrinsic value to individuals not yet born, I do intrinsically value the human species as a present and future entity - but not as much as I value individuals currently alive. That said, I need to spend some time thinking about this before I add to my answer. I may have been too hasty and accidentally weakened the implication of "extinction" through a poor turn of phrase.

Comment author: pwno 11 November 2009 07:27:11PM *  0 points [-]

Wouldn't you, in a perfect world, have everyone go up in status without your status being affected? Wouldn't that be the utilitarian thing to do?

Comment author: eirenicon 11 November 2009 07:43:14PM *  1 point [-]

That's not possible if status is zero-sum, which it appears to be. If everyone is equal in status, wouldn't it be meaningless, like everyone being equally famous?

Actually, let me qualify. Everyone being equally famous wouldn't necessarily be meaningless, but it would change the meaning of famous - instead of knowing about a few people, everyone would know about everyone. It would certainly make celebrity meaningless. I'm not really up to figuring out what equivalent status would mean.
