RichardKennaway comments on Harry Potter and the Methods of Rationality discussion thread, part 2 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (696)
Is it the author's opinion that the creation of house elves was a terribly evil deed? It would seem not: after their creation, they would want to do what they were designed to do, so creating them would be no more evil than creating an intelligence that wants to bowl and fish all day. Even if we accept that creating conscious entities which are forced by means of their preferences to do menial work is wrong, it would seem better to create them than to force those who don't enjoy such work to do it. Is Harry just confused by his intuitions about the evil of slavery, without sufficient reflection?
ETA: While this argument works in the abstract, is useful for countering human biases against "slavery", and applies in particular to the creation of Gammas, Deltas and Epsilons, house elves have additional features I wasn't considering which make their creation morally evil.
Is it wrong to make a pig that wants to be eaten?
I'm not sure, but I wouldn't make one and would work to prevent one's creation. On the one hand, death is an intrinsic evil, unlike mere drudgery. On the other hand, I support the right to self-terminate.
Have you ever closed an application on your computer? What distinguishes a person from any other computation, and why does that particular distinction carry so much moral weight?
A person is reflectively self-aware.
Evolution built me to care about humans, and upon reflection, the values I have include non-humans who have features like being reflectively self-aware.
Is that what you would want to want, given the option, or is that a lizard-brain instinct that gets in the way of your ability to evaluate what's really the right thing to do?
I can still interpret that either way. Do you mean that on reflection you realize that you emotionally desire that, or that on reflection you *decide* that that's what's important?
There are also Hayekian arguments: self-aware agents are apt to accumulate information about their own desires and activities, and systems which allow that information to have an effect seem to be more capable.