Bakkot comments on Welcome to Less Wrong! (2012) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Problem: There's no particular reason to expect speciation to be as widespread or as clear-cut as it is in the case of Earth and humans in particular. Certainly not for machine intelligences.
It might so happen that there could be software written for the computer I'm typing this on which could give it intelligence and consciousness. (Unlikely, but not out of the realm of possibility.) Should this machine be considered a person?
The reason I'm being so nit-picky is that what I consider the natural definition (namely, "an agent capable of intelligence and consciousness", or something like that) doesn't have this problem at all. I think it's a problem your definition has only because you were forced to deviate from the natural definition to include something that doesn't really seem like it belongs in that group - namely, newborns.
For computers, hardware and software can be separated in a way that is not possible with humans (with current technology). When the separation is possible, I agree personhood should be attributed to the software rather than the hardware, so your machine should not be considered a person. If in the future it becomes routinely possible to scan, duplicate and emulate human minds, then killing a biological human will probably also be less of a crime than it is now, as long as his/her mind is preserved. (Maybe there would be a taboo instead about deleting minds with no backup, even when they are not "running" on hardware).
It is also possible that in such a future, where the concept of a person is commonly associated with a mind pattern, legalizing infanticide before brain development sets in would be acceptable. So perhaps we are not in disagreement after all, since on a different subthread you have said you do not really support legalization of infanticide in our current society.
I still think there is a bit of a meta disagreement: you seem to think that the laws and morality of this hypothetical future society would be better than our current ones, while I see it as a shift in the appropriate Schelling points for the law in response to technological change, without the end point being more "correct" in any absolute sense than our current law.
Well, yes. This seems obvious to me.
I think I must have been unclear - the machine I'm currently typing on should obviously be a person, just because it has the potential to become a person? That seems absurd to me.
Oh, of course. I took it that you were asking about a case where such software had actually been installed on the machine. The mere potential for personhood seems hardly worth anything to me on its own.