Comments on Hacking the CEV for Fun and Profit - Less Wrong
Simple solution: build an FAI to optimize the universe according to your own utility function instead of humanity's average utility function. The two will be nearly identical anyway (remember, you were tempted to have the FAI use the average human utility function instead, so clearly you sincerely care about other people's wishes). And in weird situations where the two differ radically (like this one), your own utility function more closely tracks the intended purpose of an FAI.
"I was tempted not to kill all those orphans, so clearly, I'm a compassionate and moral person."
That's not an accurate parallel. The fact that you thought it was a good idea to use the average human utility function shows that you expect it to produce almost the same result as an FAI using your own utility function. If the average human wants you not to kill the orphans, and you also want not to kill the orphans, it doesn't matter which algorithm you use to decide not to kill them.
I think you're reading too much into this; what I'm trying to say is that accepting excuses of the form "I was tempted to do ~x before doing x, so clearly I have the properties of someone who does ~x" is a slippery slope.
If you killed the orphans because otherwise Dr. Evil would have converted the orphans into clones of himself and taken over the world, then your destruction of the orphanage is more indicative of a desire to keep Dr. Evil from taking over the world than of any opinion on orphanages.
The fact that you were tempted not to destroy the orphanage (despite the issue of Dr. Evil) indicates that you don't want to kill orphans.