JGWeissman comments on Another attempt to explain UDT - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (50)
This is a good explanation, but I am wary of "I didn't understand it completely until two days ago." You might think that was kind of silly if you look back at it after your next insight.
One thing I would like to see from a "complete understanding" is a way that a computationally bounded agent could implement an approximation of uncomputable UDT.
In counterfactual mugging problems, we, trying to use UDT, assign equal weights to heads-universe and tails-universe, because we don't see any reason to expect one to have a higher Solomonoff prior than the other. So we are using our logical uncertainty about the Solomonoff prior rather than the Solomonoff prior directly, as ideal UDT would. Understanding how to handle and systematically reduce this logical uncertainty would be useful.
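To make the equal-weighting concrete, here is a minimal sketch of the updateless expected-value calculation for counterfactual mugging. It assumes the conventional payoffs (Omega pays $10,000 on heads iff the agent's policy is to pay $100 on tails), and the 50/50 weights stand in for our logical uncertainty about the Solomonoff prior; the function name is illustrative, not from any UDT formalism.

```python
def updateless_value(policy_pays, p_heads=0.5):
    """Expected utility of a fixed policy, evaluated before seeing the coin.

    policy_pays: whether the policy is to hand over $100 in tails-universe.
    p_heads: weight on heads-universe (0.5 here, reflecting our logical
             uncertainty rather than the actual Solomonoff prior).
    """
    heads_payoff = 10_000 if policy_pays else 0   # Omega rewards the paying policy
    tails_payoff = -100 if policy_pays else 0     # cost of actually paying on tails
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(updateless_value(policy_pays=True))   # 4950.0
print(updateless_value(policy_pays=False))  # 0.0
```

The paying policy wins ($4,950 vs. $0) precisely because the value is computed over both universes at once instead of updating on which one we observe; the open problem the comment raises is justifying those weights when the true prior is uncomputable.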
I object to "magically", but this is otherwise correct.