I have been trying to absorb the LessWrong near-consensus on cryonics/quantum mechanics/uploading, and I confess to being unpersuaded by it. I'm not hostile to cryonics, just indifferent, and I'm having a bit of trouble articulating why the insights on identity I have been picking up from the quantum mechanics sequence aren't compelling to me. I offer the following thought experiment in hopes that others may be able to present the argument more effectively if they understand the objection here.
Suppose that Omega appears before you and says, “All life on Earth is going to be destroyed tomorrow by [insert cataclysmic event of your choice here]. I offer you the chance to push this button, which will upload your consciousness to a safe place out of reach of the cataclysmic event, preserving all of your memories, etc. up to the moment you pushed the button and optimizing you such that you will be effectively immortal. However, the uploading process is painful, and because it interferes with your normal perception of time, your original mind/body will subjectively experience the time after you pushed the button but before the process is complete as a thousand years of the most intense agony. Additionally, I can tell you that a sufficient number of other people will choose to push the button that your uploaded existence will not be lonely.”
Do you push the button?
My understanding of the LessWrong consensus on this issue is that my uploaded consciousness is me, not just a copy of me. I'm hoping the above hypothetical illustrates why I'm having trouble accepting that.
Yeah, what I find to be the ugliest thing about LessWrong by far is its sense of self-importance, which contributed quite a bit to the post deletion as well.
Maybe it's the combination of these factors that's the problem. When I read mainstream philosophical discourse about pushing a fat man in front of a trolley, it just seems like a goofy hypothetical example.
But LessWrong seems to believe that it carries the world on its shoulders, and when people here talk about deciding between torture and dust specks, or torture and alien invasion, or torture and more torture, I get the impression they are treating this at least in part as though they actually expect to have to make this kind of decision.
If all the situations you think about involve horrible things, regardless of the reason for it, you will find your intuitions gradually drifting into paranoia. There's a certain logic to "hope for the best, prepare for the worst", but I get the impression that for a lot of people, thinking about horrible things is simply instinctual and the reasons they give for it are rationalizations.
Do you think that maybe it could also be tied up with this sort of thing? Most of the ethical content of this site seems to be heavily related to the sort of approach Eliezer takes to FAI. This isn't surprising.
Part of the mission of this site is to proselytize the idea that FAI is a dire issue that isn't getting anywhere near enough attention. I tend to agree with that idea.
Existential risk aversion is really the backbone of this site. The flow of conversation is driven by it, and you see its influence everywhere.