I'm not sure I follow your objection here, but my best guess is something like "the upload can't be me, because I'm experiencing a thousand years of agony, and the upload isn't."
Is that even close to right?
I won't presume to speak for the LW consensus, but personally I would say that the upload is me, and the body is also me. When the body dies in the cataclysm, I have died, and I've also survived. This sounds paradoxical because I'm used to thinking of my identity as traveling along only one track, but in the case you're describing, Omega's device has made that no longer true, so I need to stop thinking as though it were.
I am not sure whether either of me, after pressing the button, considers the other me to be them... but I suspect probably not.
Does any of that help?
Oh, and, yes, I press the button. Shortly after pressing it, I both deeply regret having pressed it, and am enormously grateful to myself for having pressed it.
Why does every other hypothetical situation on this site involve torture or horrible pain? What is wrong with you people?
Edit: I realize I've been unduly inflammatory about this. I'll restrict myself in the future to offering non-torture alternative formulations of scenarios when appropriate.
Why does every other hypothetical situation on this site involve torture or horrible pain? What is wrong with you people?
We understand why edge cases and extremes are critical when testing a system - be that a program, a philosophy, a decision theory or even just a line of logic.
I'm hoping the above hypothetical illustrates why I'm having trouble accepting that.
I'm sorry, but I don't understand the illustration. My answer would be the same if my original mind/body was immediately and painlessly dissolved, and it was my uploaded (copied?) mind that experienced the thousand years of pain. Same answer in a more realistic scenario in which I remain physically embodied, but the pain and immortality are caused by ordinary vampire venom rather than some bogus cryonics scheme orchestrated by Omega. :)
I would probably request painles...
consider the weirdness of 'someone' remembering that his younger self didn't really care for him.
Well, that happens all the time in the actual world. It may be weird, but it's a weird we're accustomed to.
Damn. I laughed so hard at your comment that my dentures fell out. I should have flossed more.
Hmm, speaking as someone who sort of buys into the cryonics part but doesn't buy into the rest of what you label the "LW consensus": I think the degree of consensus on all of these issues is probably overestimated here. Note that the QM sequence is the sequence with by far the most subject-matter experts who would disagree.
As for the button, I'm not sure if I'd push it or not, I suspect no. But that may indicate irrationality on my part more than any coherent notion of what constitutes "me".
I would push the button. I'd also feel very grateful to myself for having pushed it and undergone that torment for my sake, probably similar to the gratitude Christians feel toward Jesus when they think of the crucifixion. The survivors would probably create a holiday to memorialize their own sacrifice for themselves, which sounds kind of self-serving, but hell... I'd think I deserve it.
I, for one, would not say that an upload is "me," or at least it doesn't satisfy everything I mean by "me." The most notable lack, since I think I do disagree with the LW consensus here, is continuity.
Do you push the button?
My understanding of the LessWrong consensus on this issue is that my uploaded consciousness is me, not just a copy of me. I'm hoping the above hypothetical illustrates why I'm having trouble accepting that.
I would consider both consciousnesses you. The problem seems to be one of preference. I would press the button but I can understand why people would not.
For a definition of "effectively" such that future lifespan >> 1000 years, yes. The uploading process as described will be that painful for everyone, so either:
a) Everyone will spend roughly the same amount of time getting over the pain, and I wouldn't miss much of significance or be specifically disadvantaged.
or
b) Being uploaded would afford us the capability to delete our memories of the pain; so, though it would be experienced, it wouldn't have to be remembered, reducing its overall effect.
This response assumes that the experience of the...
Well, I would NOT press the button. The average copy gets 500 years of being a creationist, plus half of an immortality. My values prefer "short but good".
I identify with my upload on an intellectual level. On the emotional level, I can't really say. Whether I push the button depends on whether I judge "1000 years of agony, then immortality with no memory of the pain" to be better or worse than dying tomorrow, and then on whether I had the guts in the moment to push the button. I want to say I'd go for it, but I don't think I know myself that well.
Oh, by the way: is it one branch of me dying tomorrow and the other being painfully uploaded, or is there only one me with a choice between the two? I in...
I'd like to add a short poll to the question, assuming Prismattic doesn't mind (in which case I will delete these posts).
Upvote this if you would press the button AND you would NOT be willing to attempt a quantum suicide (with a 'perfect' suicide method that will leave you either dead or unharmed), if you were offered a moderately high personal payoff for the version(s) of you that will survive.
It seems like moral problems get a negative phrasing more often than not in general, not just when Yudkowsky is writing them. I mean, you have the trolley problem, the violinist, pretty much all of these; the list goes on. Have you ever looked at the morality subsections of any philosophy forums? Everything is about rape, torture, murder, etc. I just assumed that fear is a bigger motivator than potential pleasantness and is a common aspect of rhetoric in general. I think that at least on some level it's just the name of the game: a moral dilemma means reasoning over hard decisions in very negative situations, not because ethicists are autistic, but because that is the hard part of morality for most humans. When I overhear people arguing over moral issues, I hear them debating whether torture is ever justified or whether murder is ever OK.
Arguing about whether the tradeoff of killing one fat man to save five people is justified is more meaningful to us as humans than debating whether, say, we should give children bigger lollipops if it means there can't be as much raw material for puppy chow (ergo, we will end up with fewer puppies, since we are all responsible and need to feed our puppies plenty, but we want as many puppies as possible because puppies are cute, but so are happy children).
This isn't to say that the way moral dialogue is currently carried on is necessarily the most rational way to do it, only that you seem to be committing a fundamental attribution error due to a lack of general exposure to moral dilemmas and the people arguing them.
Besides, it's not like I'm thinking about torture all the time just because I'm considering moral dilemmas in the abstract. I think most people can differentiate between an illustration meant to pose a certain sort of puzzle and reality. I don't get depressed or anxious after reading LessWrong; if anything, I'm happier, more excited, and revitalized. So I'm just not picking up on the neurosis angle at all; it seems like it might be a mind projection fallacy.
Considering this style of thinking has led LessWrong to redact whole sets of posts out of (arguably quite delusional) cosmic horror, I think there's plenty of neurosis to go around, and that it runs all the way to the top.
I can certainly believe not everybody here is part of it, but even then, it seems in poor taste. The moral problems you link to don't strike me as philosophically illuminating; they just seem like something to talk about at a bad party.
I have been trying to absorb the LessWrong near-consensus on cryonics/quantum mechanics/uploading, and I confess to being unpersuaded by it. I'm not hostile to cryonics, just indifferent, and I'm having a bit of trouble articulating why the insights on identity that I have been picking up from the quantum mechanics sequence aren't compelling to me. I offer the following thought experiment in hopes that others may be able to present the argument more effectively if they understand the objection here.
Suppose that Omega appears before you and says, “All life on Earth is going to be destroyed tomorrow by [insert cataclysmic event of your choice here]. I offer you the chance to push this button, which will upload your consciousness to a safe place out of reach of the cataclysmic event, preserving all of your memories, etc. up to the moment you pushed the button and optimizing you such that you will be effectively immortal. However, the uploading process is painful, and because it interferes with your normal perception of time, your original mind/body will subjectively experience the time after you pushed the button but before the process is complete as a thousand years of the most intense agony. Additionally, I can tell you that a sufficient number of other people will choose to push the button that your uploaded existence will not be lonely.”
Do you push the button?
My understanding of the LessWrong consensus on this issue is that my uploaded consciousness is me, not just a copy of me. I'm hoping the above hypothetical illustrates why I'm having trouble accepting that.