John_Maxwell_IV comments on Fundamentals of kicking anthropic butt - Less Wrong

18 Post author: Manfred 26 March 2012 06:43AM


Comment author: drnickbone 26 March 2012 08:11:59PM *  1 point [-]

The last time I had an anthropic principle discussion on Less Wrong I was pointed at the following paper: http://arxiv.org/abs/1110.6437 (See http://lesswrong.com/lw/9ma/selfindication_assumption_still_doomed/5sbv)

This struck me as interesting since it relates the Sleeping Beauty problem to a choice of utility function. Is Beauty a selfish utility maximizer with very high discount rate, or a selfish utility maximizer with low discount rate, or a total utility maximizer, or an average utility maximizer? The type of function affects what betting odds Beauty should accept.
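To make the dependence on the utility function concrete, here is a minimal sketch (my own toy model, not taken from the linked paper) of the standard betting setup: a fair coin gives one awakening on heads and two on tails, and at each awakening Beauty can stake 1 unit on tails for a payout of `payout` units. A total utility maximizer and an average utility maximizer break even at different odds:

```python
# Toy Sleeping Beauty betting model (assumptions mine, not from the paper):
# fair coin; heads -> 1 awakening, tails -> 2 awakenings. At each awakening
# Beauty stakes 1 unit on tails, receiving `payout` units if tails.

def total_utility_ev(payout):
    # Total utility maximizer: sum net winnings over all awakenings.
    # Heads (prob 1/2): one awakening, net -1.
    # Tails (prob 1/2): two awakenings, net (payout - 1) each.
    return 0.5 * (-1) + 0.5 * 2 * (payout - 1)

def average_utility_ev(payout):
    # Average utility maximizer: average the per-awakening payoff within
    # each branch, so the duplicated tails awakenings are not double-counted.
    return 0.5 * (-1) + 0.5 * (payout - 1)

# Break-even points: the total maximizer accepts any payout above 1.5
# (the "thirder" 2:1 odds), while the average maximizer needs a payout
# above 2 (the "halfer" even odds).
```

So the same agent facing the same bet accepts or rejects it depending only on how awakenings are aggregated, which is the sense in which the betting odds encode a choice of utility function.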

Incidentally, one thing that is not usually spelled out in the story (but really should be) is whether there are other sentient people in the universe apart from Beauty, and how many of them there are. Also, does Beauty have any/many experiences outside the context of the coin-toss and awakening? These things make a difference to SSA (or to Bostrom's SSSA).

Comment author: John_Maxwell_IV 27 March 2012 12:00:29AM -2 points [-]

I haven't read the paper, but it seems like one could just invent payoff schemes customized for her utility function and give her arbitrary dilemmas that way, right?