Eliezer_Yudkowsky comments on Timeless Decision Theory: Problems I Can't Solve - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Well, yeah, this is primarily a theory for AIs dealing with other AIs.
You could possibly talk about human applications if you knew that the N of you had the same training as rationalists, or if you assigned probabilities to the others having such training.
For X to be able to model the decisions of Y with 100% accuracy, wouldn't X require a model more sophisticated than Y itself?
If so, why would supposedly symmetrical models retain this symmetry?
Nope. http://arxiv.org/abs/1401.5577
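The linked paper resolves the apparent regress with provability logic: each agent proves things about the other's source code rather than out-simulating it. A toy sketch of the flavor of the result, in Python, with bounded mutual simulation standing in for the Löbian proof step (the function names and the recursion cap are my own illustration, not the paper's construction):

```python
def fairbot(opponent, depth=0, cap=3):
    """Cooperate iff the opponent is predicted to cooperate.

    The opponent's actual decision procedure is run directly, so neither
    agent needs a model "more sophisticated" than the other. At the
    recursion cap we assume cooperation, a crude stand-in for the
    Lobian shortcut the paper formalizes.
    """
    if depth >= cap:
        return 'C'
    return 'C' if opponent(fairbot, depth + 1, cap) == 'C' else 'D'

def defectbot(opponent, depth=0, cap=3):
    """Always defect, regardless of the opponent."""
    return 'D'

print(fairbot(fairbot))    # two identical agents cooperate
print(fairbot(defectbot))  # but a defector is not exploited
```

Two copies of the same bounded procedure cooperate with each other, so the symmetry is preserved rather than broken, while an unconditional defector still gets defected against.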