Bakkot comments on Welcome to Less Wrong! (2012) - Less Wrong
Posted this above as well.
Here's another case to consider:
I assume you've granted that sufficiently advanced AIs ought to be counted as people. Say that I have running on my computer a script which is compiling an AI's source, and which will launch the resultant executable as soon as compilation finishes with no intervention on my part.
Am I killing a person if I terminate this script before compilation completes? That is, does "software which will compile and run an AI" belong to the "people" or the "not people" group?
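The scenario can be made concrete with a minimal sketch. This is purely illustrative: the source string, the `build_and_launch` helper, and Python's built-in `compile()`/`exec` are hypothetical stand-ins for a real build-and-launch pipeline. The point is the two distinct stages: killing the script during step 1 destroys only a potential process; there is no running state yet.

```python
# Hypothetical stand-in for an AI's source code.
ai_source = 'greeting = "AI is now running"\nstate = {"memories": ["boot"]}'

def build_and_launch(source):
    # Step 1: compilation. Terminating the script here interrupts only
    # the source-to-executable transformation; nothing is running yet.
    code_object = compile(source, "<ai_source>", "exec")
    # Step 2: launch immediately, with no further intervention.
    namespace = {}
    exec(code_object, namespace)
    return namespace

running = build_and_launch(ai_source)
```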
I think babies are much closer to this than to any of the examples you've listed above.
In the interests of settling confusion, here's another example:
Suppose we let the above script finish and the AI go about its merry way for a few centuries. We shut down the computer it's running on - writing its current state to non-volatile memory - to transport it somewhere else. To me it seems that destroying that memory would constitute killing a person.
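The shutdown-and-transport step can also be sketched. Again this is only an analogy: `pickle` and a dictionary of state are stand-ins for whatever a real suspend-to-disk mechanism would write out, and all the names here are invented for illustration.

```python
import os
import pickle
import tempfile

# Hypothetical stand-in for a running AI's in-memory state.
ai_state = {"uptime_centuries": 2, "memories": ["first boot", "travel prep"]}

# "Shutting down": write the current state to non-volatile storage.
fd, snapshot_path = tempfile.mkstemp(suffix=".snapshot")
with os.fdopen(fd, "wb") as f:
    pickle.dump(ai_state, f)

# "Transport", then resume: the restored state is indistinguishable from
# the original. On the view above, destroying the snapshot while it is
# the only copy is what would constitute killing a person.
with open(snapshot_path, "rb") as f:
    restored = pickle.load(f)
os.remove(snapshot_path)
```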
From these examples, I think "will become a person" only matters for objects which were people in the past. That handles all of the examples you list (leaving aside 3-year-olds, which are too close to the issue at hand), and it explains why I don't think interrupting compilation as above kills a person, while destroying the state of a running-but-paused AI does.
Questions for you:
I've never seen a compiling AI, let alone an interrupted one, even in fiction, so your example isn't very available to me. I can imagine conditions that would make it OK or not OK to cancel the compilation process.
This is most interesting to me:
I know we're talking about intuitions, but this is one description that can't jump from the map into the territory. We know that the past is completely screened off by the present, so our decisions, including moral decisions, can't ultimately depend on it. Ultimately, there has to be something about the present or future states of these humans that makes it OK to kill the baby but not the guy in the coma. Could you take another shot at the distinction between them?
I'm having a hard time figuring out what you mean when you say that example isn't available to you. Are you familiar with the process by which source code becomes programs we can execute? (I imagine you are, but if not, I suggest you read up on it - this is something everyone can benefit from at least a basic familiarity with, I think.)
This seems by far the most appropriate example, so I'm not going to readily give it up. I'd be happy to give a more detailed example, if you'd like - or, possibly better, you could give an example of a case where it would be OK to cancel the compilation process.
Extremely strongly disagree. There is absolutely no reason to exclude the past from our morality.
Here's one trivial but hopefully obvious example of why the past is important: if everyone follows the rule I listed, no one has to worry about getting killed.
Of course, that said, I can still try to present the rule another way. For example, we might say that it's immoral to destroy anything which contains within it a complete and unique description (that is to say, the only copy) of a person. Thus the passed-out drunk is a person, the running AI is a person, but the compiling AI is not. (Nor is the baby, incidentally.)
Again, the compiling AI seems like an extremely useful example. In fact, I'm having difficulty coming up with any rule which includes the AI but not the baby. (Without referring to things that seem unrelated, like "biologically alive", of course.) As such, I'd really like to discuss it with you. Could you do me a favor and explain what would need to happen for you to be able to discuss it?