lackofcheese comments on Simulation argument meets decision theory - Less Wrong

Post author: pallas 24 September 2014 10:47AM


Comment author: lackofcheese 24 September 2014 08:45:24PM

So, in other words: if I am D and all I want is to be king of the universe, then before stepping into a copying machine I should self-modify so that my utility function says "+1000 if D is king of the universe" rather than "+1000 if I am king of the universe". My copy D2 will then inherit the utility function "+1000 if D is king of the universe", so instead of competing with me for the throne it will work towards making me king, and that maximises my chances of being king of the universe.
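(A minimal sketch of the distinction in Python, purely my own illustration rather than anything from the post: an "indexical" utility refers to whoever evaluates it, while the self-modified version refers to the fixed agent D, so a copy that inherits the latter stays aligned with D instead of becoming a rival. The function names and the dictionary-based setup are assumptions of this toy model.)

```python
# Toy model of the self-modification trick (illustrative only; the
# names here are my own, not from the original comment).

def indexical_utility(evaluator, outcome):
    # "+1000 if I am king of the universe" -- refers to whoever evaluates it.
    return 1000 if outcome["king"] == evaluator else 0

def make_de_dicto_utility(target):
    # "+1000 if <target> is king of the universe" -- refers to a fixed agent.
    def utility(evaluator, outcome):
        return 1000 if outcome["king"] == target else 0
    return utility

# D self-modifies before copying; D2 inherits the modified function verbatim.
d_utility = make_de_dicto_utility("D")
d2_utility = d_utility

outcome = {"king": "D"}
print(d_utility("D", outcome))           # 1000: D is satisfied
print(d2_utility("D2", outcome))         # 1000: D2 also endorses D being king
print(indexical_utility("D2", outcome))  # 0: an unmodified D2 would be a rival
```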

That is what you mean, right?

I guess the anthropic counter is this: what if, after stepping into the machine, I end up being D2 rather than D?! If I were to self-modify to care only about D, then I wouldn't end up being king of the universe; D would!
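(One hedged way to cash out that counter numerically, assuming the pre-copy agent assigns equal subjective probability to waking up as D or as D2; those 1/2 self-location probabilities are my assumption for this sketch, not the commenter's.)

```python
# Toy expected-value version of the anthropic counter (illustrative only).

p_wake = {"D": 0.5, "D2": 0.5}  # subjective chance of "waking up as" each agent

def expected_indexical_payoff(king):
    # Pre-copy indexical payoff: +1000 only in the branch where the
    # agent I wake up as is the one on the throne.
    return sum(p * (1000 if who == king else 0) for who, p in p_wake.items())

# Even if self-modifying guarantees that D ends up king, my pre-copy
# indexical expectation of being king myself is only half the prize:
print(expected_indexical_payoff("D"))  # 500.0, not 1000
```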