Squark comments on Will AGI surprise the world? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm not entirely sure what you're saying here. UDT suggests that subjective probabilities are meaningless (thus taking the third horn of the anthropic trilemma, although it can be argued that selfish utility functions are still possible). Consider the possible types of question:

- "What is the probability I am clone #n?" is not a meaningful question.
- "What is the updated (posterior) probability I am in a universe with property P?" is not meaningful in general, but has approximate meaning in contexts where anthropic considerations are irrelevant.
- "What is the a priori probability that the universe has property P?" might be meaningful, but is probably also only approximate, since there is a freedom to redefine the prior and the utility function simultaneously (see this).

The single fully meaningful type of question is "What expected utility should I assign to action A?", which is fine, since that is the only question you have to answer in practice.
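To make the expected-utility framing concrete, here is a toy sketch of the updateless core in Python (the `worlds`, `prior`, `utility`, and `policies` names are illustrative, not part of any UDT formalism): the agent scores whole policies by prior-weighted utility and never asks "which world am I in?".

```python
# Toy sketch of an updateless agent: it never updates on observations,
# it only scores entire policies by their prior-weighted utility.

worlds = ["heads", "tails"]           # illustrative world states
prior = {"heads": 0.5, "tails": 0.5}  # a priori measure over worlds

def utility(world: str, action: str) -> float:
    """Illustrative payoff: acting to match the world pays 1."""
    return 1.0 if action == world else 0.0

# A policy maps observations to actions; for simplicity the observation
# here is the world itself.
policies = [
    {"heads": "heads", "tails": "tails"},  # act on what you see
    {"heads": "tails", "tails": "heads"},  # act on the opposite
]

def expected_utility(policy) -> float:
    # The one "fully meaningful" question: prior-weighted utility of a policy.
    return sum(prior[w] * utility(w, policy[w]) for w in worlds)

best = max(policies, key=expected_utility)
print(best, expected_utility(best))  # -> the first policy, EU = 1.0
```

The point of the sketch is only that the agent ranks policies by a single number computed from the prior; no posterior or indexical probability ever appears.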
Boltzmann brains exist very far in the future relative to "normal" brains, so their contribution to utility is very small: the discount depends on absolute time.
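As a minimal sketch, assuming an exponential discount in absolute time (the functional form and the timescale \( \tau \) are illustrative assumptions, not part of the comment):

\[
U \;=\; \sum_i e^{-t_i/\tau}\, u_i ,
\]

where \( u_i \) is the value of experience \( i \) and \( t_i \) is the absolute time at which it occurs. Ordinary brains have \( t_i \) comparable to \( \tau \), while Boltzmann brains appear only at times \( T \) enormously larger than any timescale of ordinary history, so their weight \( e^{-T/\tau} \) is astronomically small.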
If "measure" here equals "probability wrt prior" (e.g. Solomonoff prior) then this is just another way to define a satisficing agent (utility equals either 0 or 1).
Good point. Surely we need to understand these baby universes better.