John_Maxwell_IV comments on Fundamentals of kicking anthropic butt - Less Wrong
The last time I had an anthropic-principle discussion on Less Wrong, I was pointed to the following paper: http://arxiv.org/abs/1110.6437 (see http://lesswrong.com/lw/9ma/selfindication_assumption_still_doomed/5sbv)
This struck me as interesting since it relates the Sleeping Beauty problem to a choice of utility function. Is Beauty a selfish utility maximizer with a very high discount rate, a selfish utility maximizer with a low discount rate, a total utility maximizer, or an average utility maximizer? The type of utility function affects which betting odds Beauty should accept.
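To make that dependence concrete, here is a toy calculation (my own illustration, not taken from the paper): a bet offered at every awakening pays +1 if the coin landed heads and costs some stake if tails, and we solve for the break-even stake under two of the utility functions above.

```python
from fractions import Fraction

# Standard Sleeping Beauty setup: fair coin; heads -> 1 awakening,
# tails -> 2 awakenings (with memory erasure between them).
P_HEADS = Fraction(1, 2)
AWAKENINGS = {"heads": 1, "tails": 2}

def expected_utility(stake, aggregate):
    """Expected utility of accepting, at every awakening, a bet that
    pays +1 on heads and -stake on tails. `aggregate` turns a
    per-awakening payoff and an awakening count into branch utility."""
    u_heads = aggregate(Fraction(1), AWAKENINGS["heads"])
    u_tails = aggregate(-stake, AWAKENINGS["tails"])
    return P_HEADS * u_heads + (1 - P_HEADS) * u_tails

total = lambda payoff, n: payoff * n  # total utility: payoffs add up over awakenings
average = lambda payoff, n: payoff    # average utility: per-awakening payoff counted once

def break_even(aggregate):
    # EU is linear in the stake, so solve EU(stake) = 0 from two points.
    eu0 = expected_utility(Fraction(0), aggregate)
    slope = expected_utility(Fraction(1), aggregate) - eu0
    return -eu0 / slope

print(break_even(total))    # 1/2 -> bets as if P(heads) = 1/3 ("thirder" odds)
print(break_even(average))  # 1   -> bets as if P(heads) = 1/2 ("halfer" odds)
```

The total utility maximizer tolerates a stake of only 1/2, i.e. she bets at thirder odds, because the tails loss is incurred twice; the average utility maximizer breaks even at a stake of 1, i.e. halfer odds. Same evidence, same setup, different betting behavior purely from the utility function.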
Incidentally, one thing that is not usually spelled out in the story (but really should be) is whether there are other sentient people in the universe apart from Beauty, and how many of them there are. Also, does Beauty have any (or many) experiences outside the context of the coin toss and awakenings? These things make a difference under SSA (the Self-Sampling Assumption) or Bostrom's SSSA (the Strong Self-Sampling Assumption).
I haven't read the paper, but it seems like one could just invent payoff schemes customized to her utility function and construct arbitrary dilemmas for her that way, right?