Wei_Dai comments on indexical uncertainty and the Axiom of Independence - Less Wrong
It does, and I discussed that here. An interesting implication I noticed a few weeks back is that a UFAI would want to cooperate with a counterfactual FAI, so we get a slice of the future even if we fail to build an FAI, depending on how probable it was that we would have been able to do so. A paperclip maximizer might wipe out humanity, then catch up on its reflective consistency, look back, notice that there was a counterfactual future in which an FAI is built, allot some of the collective preference to humanity, and restore it from the info remaining after the initial destruction (effectively constructing an FAI in the process). (I really should make a post on this. Some of the credit is due to Rolf Nelson for the UFAI deterrence idea.)
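To make the allocation idea concrete, here is a toy model. It is only a sketch under assumptions added for illustration: the split rule (a share proportional to the counterfactual FAI's prior probability) and the numbers are hypothetical, not something the comment works out.

```python
# Toy model of counterfactual cooperation between a UFAI and a
# counterfactual FAI. The proportional split rule and all numbers below
# are illustrative assumptions, not a worked-out bargaining solution.

def humanity_share(p_fai: float, total_resources: float = 1.0) -> float:
    """Resources a reflectively consistent UFAI allots to humanity's
    preferences, assuming the simplest split: proportional to the prior
    probability p_fai that an FAI would have been built instead."""
    assert 0.0 <= p_fai <= 1.0
    return p_fai * total_resources

# E.g., if the FAI project had a 10% chance of succeeding, the UFAI
# devotes 10% of the future's resources to humanity's preferences.
print(humanity_share(0.10))  # -> 0.1
```

A real version would presumably replace the proportional rule with whatever bargaining solution the agents' decision theory endorses; the point of the sketch is only that the allotted share grows with the counterfactual's probability.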
I'd like to note a connection between Vladimir's idea and Robin Hanson's moral philosophy, which also involves taking into account the wants of counterfactual agents.
I'm also reminded of Eliezer's Three Worlds Collide story. If Vladimir's right, many more worlds (in the sense of possible worlds) will be colliding (i.e., compromising/cooperating).
I look forward to seeing the technical details when they've been worked out.