lackofcheese comments on Simulation argument meets decision theory - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
A simpler version of the original post is this. Let there be a single, consistent utility function shared by all copies of the agent (X and all Xi). It assigns these utility values:
Of course, the post's premise is that the only actually possible universe in category 1 is that where all 1000 Xi instances choose "sim" (because they can't tell if they're in the simulation or not), so the total utility is then 1 + 0.2*1000 = 201.
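That arithmetic can be sketched in Python (values taken from this comment; the payoffs for choices other than "sim" aren't given here, so only the all-sim total is computed):

```python
# Total utility when all copies choose "sim" (the only possible
# universe in category 1): the original X contributes 1, and each
# of the 1000 simulated copies Xi contributes 0.2.
u_x = 1.0       # utility from X choosing "sim"
u_xi = 0.2      # utility from each Xi choosing "sim"
n_copies = 1000

total = u_x + u_xi * n_copies
print(total)  # 201.0
```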
This is a simple demonstration of TDT giving the utility-maximizing answer ("sim") while CDT doesn't (I think?).
What didn't make sense to me was saying X and Xi somehow have "different" utility functions. Maybe this was just confusion generated by imprecise use of words, and not any real difference.
The post then says:
I'm not sure if this is intended to change the situation. Once you have a utility function that gives out actual numbers, you don't care how it works on the inside, or whether it takes into account another agent's utility or anything else.
The idea is that they have the same utility function, but the utility function takes values over anthropic states (values of "I").
U(I am X and X chooses sim) = 1
U(I am Xi and Xi chooses sim) = 0.2, etc.
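That anthropic reading can be sketched as a single function shared by all copies (a sketch only: the function name and the handling of other choices are my assumptions, since the comment gives values only for "sim"):

```python
def utility(i_am: str, choice: str) -> float:
    """One shared utility function over anthropic states: the same
    function applies whether "I" turn out to be the original X or
    a simulated copy Xi."""
    if choice == "sim":
        return 1.0 if i_am == "X" else 0.2  # "Xi" for any simulated copy
    raise ValueError("the comment only gives values for the 'sim' choice")

print(utility("X", "sim"))   # 1.0
print(utility("Xi", "sim"))  # 0.2
```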
I don't like it, but I also don't see an obvious way to reject the idea.