Wei_Dai comments on Metaphilosophical Mysteries - Less Wrong
That looks like the same position that Eliezer took, and I think I already refuted it. Let me know if you've read the one-logic thread and found my argument wrong or unconvincing.
The idea is that the universal prior is really about the observation-predicting algorithms that agents run, not about predicting what will happen in the world. So, for any agent that runs a given anticipation-defining algorithm and rewards/punishes the universal prior-based agent according to it, there is an anticipation-computing program that will obtain higher and higher probability in the universal prior-based agent's mixture.
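A minimal sketch of the mixture-updating dynamic this argument relies on, assuming a Bayesian mixture over a handful of hand-written observation-predicting "programs" (the real universal prior mixes over all programs with simplicity-weighted priors; the predictor names here are illustrative, not from the comment):

```python
# Toy illustration: a Bayesian mixture over observation-predicting programs.
# A program whose predictions keep matching the observed stream accumulates
# posterior weight, which is how an anticipation-computing program can come
# to dominate a universal prior-based predictor.

def predictor_all_ones(history):
    """Predicts the next bit is 1 with probability 0.9."""
    return 0.9

def predictor_alternating(history):
    """Predicts the stream alternates 0, 1, 0, 1, ..."""
    expected = len(history) % 2
    return 0.9 if expected == 1 else 0.1

def update_mixture(weights, predictors, observation, history):
    """One Bayesian update: weight *= P(observation | predictor), renormalized."""
    new = []
    for w, p in zip(weights, predictors):
        p_one = p(history)
        likelihood = p_one if observation == 1 else 1.0 - p_one
        new.append(w * likelihood)
    total = sum(new)
    return [w / total for w in new]

predictors = [predictor_all_ones, predictor_alternating]
weights = [0.5, 0.5]  # uniform prior stands in for a simplicity-weighted one
history = []
for bit in [1, 1, 1, 1, 1, 1]:  # a stream matching the all-ones predictor
    weights = update_mixture(weights, predictors, bit, history)
    history.append(bit)

print(weights)  # the all-ones predictor's weight approaches 1
```

The point of the sketch is only the dynamic: whichever program best predicts the incoming observation stream, whatever its relation to the actual world, ends up dominating the mixture.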
This, by the way, again highlights the distinction between what will actually happen and what a person anticipates: predictions are about capturing the concept of anticipation, an aspect of how people think, and not about what can in fact happen.