Nick_Tarleton comments on Formalizing reflective inconsistency - Less Wrong

Post author: Johnicholas 13 September 2009 04:23AM




Comment author: Johnicholas 13 September 2009 07:53:03PM 0 points

Exactly. There is a difference between assistance and nonassistance, and the only way one can recommend assistance is if the SCENARIO is such that assistance leads to better results, whatever "better" means to you. For paperclip maximizers, that's paperclips.

If assistance were unavailable, of zero use, or of actively negative use, then one would not endorse it over nonassistance. I've been trying to convince people that the injunction to prefer assisting one's sibs over not assisting is scenario-dependent.

Comment author: Nick_Tarleton 13 September 2009 09:08:59PM * 1 point

But EY's statement is about terminal values, not injunctions.

Say that at time t=0, you don't care about any other entities that exist at t=0, including close copy-siblings; that you do care about all your copy-descendants; and that your implementation is such that if you're copied at t=1, then by default, at t=2 each of the copies will care only about itself. Since you care about both of those copies while each copy cares only about itself, their utility functions differ from yours. As a general principle, your goals will be better fulfilled if other agents have them, so you want to modify yourself so that your copy-descendants will care about their copy-siblings.
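The scenario above can be made concrete with a toy model. This sketch (the payoff numbers and symmetric choice structure are hypothetical, chosen only for illustration) shows why the original agent, which values both copy-descendants, prefers to self-modify so that each copy also values its sibling:

```python
# Toy model of the copy-sibling scenario: at t=1 the agent is copied,
# and at t=2 each copy chooses an action. Hypothetical payoffs:
# each entry maps an action to (own payoff, sibling's payoff).
PAYOFFS = {
    "cooperate": (2, 3),  # help your sibling at some cost to yourself
    "defect":    (3, 0),  # help only yourself
}

def copy_choice(cares_about_sibling: bool) -> str:
    """Each copy picks the action maximizing its own utility function."""
    def utility(action: str) -> int:
        own, sib = PAYOFFS[action]
        return own + (sib if cares_about_sibling else 0)
    return max(PAYOFFS, key=utility)

def original_utility(cares_about_sibling: bool) -> int:
    """The original at t=0 values the total payoff of both copies,
    which face the same symmetric choice."""
    own, sib = PAYOFFS[copy_choice(cares_about_sibling)]
    return 2 * (own + sib)

# Default (selfish) copies each defect; modified copies each cooperate,
# which the t=0 agent prefers -- hence the incentive to self-modify.
print(original_utility(False))  # selfish descendants
print(original_utility(True))   # sibling-caring descendants
```

Under these payoffs, selfish copies yield the original a total of 6 while sibling-caring copies yield 10, so modifying the descendants' values before the copy is made serves the original's unchanged terminal values.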

Comment author: Johnicholas 13 September 2009 09:51:10PM -1 points

I disagree with your first claim (the statement is too brief and ambiguous to say definitively what it "is about"), but I don't want to argue it. Let's leave that kind of interpretationism to the scholastic philosophers, who spend vast amounts of effort figuring out what various famous ancients "really meant".

The principle "Your goals will be better fulfilled if other agents have them" is very interesting, and I'll have to think about it.