MugaSofer comments on Rationality Quotes January 2013 - Less Wrong
Well, sacred value is a technical term.
If you genuinely attached infinite utility to your family's lives, then we could remove the finite terms from your utility function without affecting its output. You are not valuing their lives above all else; you are refusing to trade them to gain anything else. There is a difference. Rejecting certain deals because the cost is emotionally charged is suboptimal. Human, but stupid. I (probably) wouldn't kill to save the sparrows, or for that matter to steal money for children dying in Africa, but that's not the right choice. That's just bias/akrasia/the sort of thing this site is supposed to fight. If I could press a button and turn into an FAI, then I would. Without question. The fact that I'm not perfectly Friendly is a bad thing.
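To put that concretely (a minimal sketch; the symbols $U$, $A$, and $f$ are mine, not the original commenter's): write the utility function as

$$U(x) = \infty \cdot A(x) + f(x),$$

where $A(x) \in \{0,1\}$ indicates whether your family survives under action $x$ and $f(x)$ collects every finite term. Since $\infty + f(x) = \infty$ for any finite $f(x)$, every family-preserving action scores $+\infty$ regardless of $f$, and every family-losing action scores some finite value, always dominated. Deleting $f$ therefore never changes which action the function picks whenever any family-preserving option exists.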
</rant>
Anyway.
Considering you're not typing from a bunker, and indeed probably drive a car, I'm guessing you're willing to accept small risks to your family. So my question for you is this: how small?
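(A quick sketch of why the question bites, reusing the notation above: if your family's death really carried utility $-\infty$, then any action with death probability $p > 0$ would have expected utility

$$\mathbb{E}[U] = p \cdot (-\infty) + (1-p) \cdot (\text{finite terms}) = -\infty,$$

so driving at any nonzero $p$ would be ruled out entirely. Accepting some small but positive $p$, as every driver does, reveals a finite exchange rate between family risk and everything else.)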
Incidentally, considering the quote this particular branch of the discussion sprouted from, you do realize that killing your son might be the only way to save the rest of your family? Now, if He were claiming that you terminally value killing your son, that would be another thing ...
You do have a point, but there is another explanation that resolves it; see this comment.
We still have a fundamental disagreement on whether rationality is in any way involved when reflecting on your terminal values. I claim that rationality helps the closet murderer who firmly values pain and suffering just as much as it helps the altruist, the paperclipper, or the FAI. It helps us pursue our goals, not set the axioms of our value systems (the terminal values).
There is no aspect of Bayes or any other reasoning mechanism that tells you whether to value happy humans or dead humans. Reasoning helps you better achieve your goals, nefarious or angelic as they may be.
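Here's a toy sketch of that orthogonality in Python (all the names, best_action, altruist, paperclipper, are my illustrative inventions, not anything from the thread): the same expected-utility optimizer runs unchanged for both agents, and only the utility function you plug in decides what comes out.

```python
def best_action(actions, outcomes, prob, utility):
    """Return the action with the highest expected utility.

    The reasoning machinery is fixed; only `utility` encodes terminal values.
    """
    return max(actions, key=lambda a: sum(prob(o, a) * utility(o) for o in outcomes))

# Toy world: each action deterministically produces one outcome.
outcomes = [
    {"happy_humans": 100, "paperclips": 0},    # result of building a hospital
    {"happy_humans": 0,   "paperclips": 1000}, # result of building a factory
]
actions = ["build_hospital", "build_factory"]

def prob(outcome, action):
    if action == "build_hospital":
        return 1.0 if outcome["happy_humans"] > 0 else 0.0
    return 1.0 if outcome["paperclips"] > 0 else 0.0

altruist     = lambda o: o["happy_humans"]   # terminally values happy humans
paperclipper = lambda o: o["paperclips"]     # terminally values paperclips

print(best_action(actions, outcomes, prob, altruist))      # -> build_hospital
print(best_action(actions, outcomes, prob, paperclipper))  # -> build_factory
```

Nothing inside best_action favors one set of terminal values over the other; any Bayes-style updating would live in prob, equally available to the altruist and the paperclipper.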