SilasBarta comments on Morality as Parfitian-filtered Decision Theory? - Less Wrong
Thanks for the reasoned reply. I guess I wasn't clear, because I actually agree with a lot of what you just said! To reply to your points as best I can:
Natural selection filtered us for at least one omniscience/desert situation: the decision to care for offspring (in one particular domain of attraction). Like Omega, it prevents us (though with only near-perfect rather than perfect probability) from being around in the nth generation if we don't care about the (n+1)th generation.
Also, why do you say that giving weight to SAMELs doesn't count as rational?
Difficulty of lying actually counts as another example of Parfitian filtering: from the present perspective, you would prefer to be able to lie (as you would prefer having slightly more money). However, by having previously sabotaged your ability to lie, people now treat you better. "Regarding it as suboptimal to lie" is one form this "sabotage" takes, and it is part of the reason you received previous benefits.
Ditto for keeping promises.
But I didn't make it that easy for you -- in my version of PH, there is no direct communication; Omega only goes by your conditional behavior. If you find this unrealistic, again, it's no different from what natural selection is capable of.
But my point was that the revealed preference does not reveal a unique utility function. If someone pays Omega, you can say this reveals that they like Omega, or that they don't like Omega but view paying it as a way to benefit themselves. But at the point where you start positing that each happens-to-win decision is made in order to satisfy yet another terminal value, your description of the situation becomes increasingly ad hoc, to the point where you have to claim that someone terminally values "keeping a promise that was never received".
I find it totally unrealistic. And therefore I will totally ignore it. The only realistic scenario, and the one that natural selection tries out enough times so that it matters, is the one with an explicit spoken promise. That is how the non-omniscient driver gets the information he needs in order to make his rational decision.
Sure it does ... Once you know whether or not an explicit promise was made to pay the driver, you can easily distinguish how much the driver gets because of the promise from what the driver gets because you like him.