Wei_Dai comments on indexical uncertainty and the Axiom of Independence - Less Wrong

Post author: Wei_Dai 07 June 2009 09:18AM


Comment author: Vladimir_Nesov 08 June 2009 09:15:10PM 3 points

Together with Eliezer's idea that agents who know each other's source code ought to play cooperate in one-shot PD, doesn't it imply that all sufficiently intelligent and reflective agents across all possible worlds should do a global trade and adopt a single set of preferences that represents a compromise between all of their individual preferences?

It does, and I discussed that here. An interesting implication I noticed a few weeks back is that a UFAI would want to cooperate with a counterfactual FAI, so we get a slice of the future even if we fail to build an FAI, in proportion to how probable it was that we could have built one. A paperclip maximizer might wipe out humanity, then catch up on its reflective consistency, look back, notice that there was a counterfactual future in which an FAI was built, allot some of the collective preference to humanity, and restore it from the information remaining after the initial destruction (effectively constructing an FAI in the process). (I really should make a post on this. Some of the credit is due to Rolf Nelson for the UFAI deterrence idea.)
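The source-code-cooperation idea referenced above can be illustrated with a toy "program equilibrium" sketch. This is not from the thread itself; the agent names, payoff matrix, and the cooperate-iff-identical rule (sometimes called a "CliqueBot") are illustrative assumptions, showing only how agents that can read each other's source might reach mutual cooperation in a one-shot Prisoner's Dilemma without being exploitable:

```python
# Toy one-shot Prisoner's Dilemma where each agent's strategy is a
# function of both agents' source code (represented here as strings).
# Hypothetical sketch: names and payoffs are illustrative, not canonical.

PD_PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

CLIQUE_BOT = "cooperate iff opponent source == my source"
DEFECT_BOT = "always defect"

def move(my_source, opponent_source):
    """Decide a move given access to both agents' source code."""
    if my_source == CLIQUE_BOT:
        # Cooperate only with an exact copy of myself; defect otherwise.
        return "C" if opponent_source == my_source else "D"
    return "D"  # DefectBot (and anything unrecognized) always defects

def play(src_a, src_b):
    """Run one round; return the (payoff_a, payoff_b) pair."""
    a = move(src_a, src_b)
    b = move(src_b, src_a)
    return PD_PAYOFFS[(a, b)], PD_PAYOFFS[(b, a)]

print(play(CLIQUE_BOT, CLIQUE_BOT))  # (3, 3): mutual cooperation
print(play(CLIQUE_BOT, DEFECT_BOT))  # (1, 1): no exploitation
```

Two CliqueBots get the mutual-cooperation payoff, while a CliqueBot facing DefectBot still defects, so the transparency of source code enables cooperation without making the agent exploitable. The global-trade conjecture in the comment is the far stronger claim that this generalizes across all sufficiently reflective agents, not just exact copies.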

Comment author: Wei_Dai 09 June 2009 07:38:26AM 2 points

I'd like to note a connection between Vladimir's idea, and Robin Hanson's moral philosophy, which also involves taking into account the wants of counterfactual agents.

I'm also reminded of Eliezer's Three Worlds Collide story. If Vladimir is right, many more worlds (in the sense of possible worlds) will be colliding, i.e., compromising and cooperating.

I look forward to seeing the technical details when they've been worked out.