Nick_Tarleton comments on Formalizing reflective inconsistency - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
In this case, for your future selves to care about each other is no worse than for them not to, so if the future might not be flat, their mutual concern increases your expected utility.
This scenario introduces a direct dependence of outcomes on your goal system, not just your actions; this does complicate things, and it's common to assume that it isn't the case.
I don't know how your (or my) morality answers these questions, but however it answers them is what it would want to bind future selves to use. The real underlying principle, of which EY's statement is a special case, is: "see to it that other agents share your utility function, or something as close to it as possible."
Would you argue that it is always better to assist one's xerox-sibs than not?
My intention in offering those two "pathological" scenarios was to argue that there is an aspect of scenario-dependence in the general injunction "assist your xerox-sibs".
You've disposed of my two counterexamples with two separate counterarguments. However, you haven't offered an argument for scenario-INDEPENDENCE of the injunction.
Your last sentence contains a very interesting guideline. I don't think it's really an analysis of the original statement, but that's a side question. I'll have to think about it some more.