If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, or how you found us. Tell us how you came to identify as a rationalist, or describe what it is you value and work to achieve.
If you'd like to meet other LWers in real life, there's a meetup thread and a Facebook group. If you have your own blog or other online presence, please feel free to link to it. If you're confused about any of the terms used on this site, you might want to pay a visit to the LW Wiki, or simply ask a question in this thread. Some of us have been having this conversation for a few years now, and we've developed a fairly specialized way of talking about some things. Don't worry -- you'll pick it up pretty quickly.
You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores reflecting the total votes their comments and posts have received. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this; it happened to many of us. If you have any questions about karma or voting, please feel free to ask here.
If you've come to Less Wrong to teach us about a particular topic, this thread would be a great place to start the conversation, especially until you've worked up enough karma for a top-level post. By posting here and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.
A note for theists: you will find LW overtly atheist. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false. If you'd like to know how we came to this conclusion you may find these related posts a good starting point.
A couple of technical notes: when leaving comments, you may notice a 'help' link below and to the right of the text box. This will explain how to italicize, linkify, or quote bits of text. You'll also want to check your inbox, where you can always see whether people have left responses to your comments.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site.
(Note from MBlume: though my name is at the top of this page, the wording in various parts of the welcome message owes a debt to other LWers who've helped me considerably in working the kinks out)
I'm talking exactly about a process that is so flawless you can't tell the difference. My concern is that if you don't destroy the original, you now have two bodies: one is the original (even though you can't tell the difference between the copy and the original) and the other is the copy.
Now here's where I'm uncomfortable: if we then kill the original by letting Freddy Krueger or Jason do his evil thing, then even though the copy is still alive and is/was indistinguishable from the original, the alternative hypothesis, which I oppose, holds that the original is still alive -- and yet I can see the dead body right there.
Simply speeding the process up, perhaps by vaporizing the original, doesn't make the outcome any different: the original is still dead.
It gets murkier if the original is destructively scanned and then rebuilt from the same atoms, but I'd still be reluctant to do this myself.
That said, I'd be willing to become a hybrid organism gradually, by replacing parts of myself; although it wouldn't be the original me at the end of the total replacement process, it would still be the hybrid "me".
Interesting position on the killing of the NPCs; and in terms of usefulness, that's why it doesn't matter to me whether a being is sentient or not in order to meet my definition of AI.
If I make a perfect copy of myself, then at the instant of duplication there exists one person at two locations. A moment later, the entities at those two locations start having non-identical experiences and entering different mental states, and thereby become different people (who aren't one another, although both of them are me). If prior to duplication I program a device to kill me once and only once, then I die, and I have killed myself, and I continue to live.
I agree that this is a somewhat confusing way of talking, because we're not used to life and death and identity working that way, but we have a long history of technological innovations changing the way we talk about things.