# Manfred comments on Fundamentals of kicking anthropic butt - Less Wrong

26 March 2012 06:43AM



Comment author: 27 March 2012 08:28:29AM *  0 points [-]

Not sure I follow that... what did you mean by an "ordinary utility maximizer"? Is it a selfish or a selfless utility function, and if selfish, what is the discount rate? The point about Armstrong's paper is that it really does matter.

So you have this utility function U, and it's a function of different outcomes, which we can label by a bunch of different numbers "x". And then you pick the option that maximizes the sum of U(x) * P(x | all your information).
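The decision rule described above can be sketched in a few lines. This is a minimal illustration, not anything from the thread; the candy-bar-style outcomes and all the numbers are made up for the example.

```python
def expected_utility(U, P):
    """Sum of U(x) * P(x) over outcomes x, with P conditioned on all your information."""
    return sum(U[x] * p for x, p in P.items())

def best_option(options, U):
    """Pick the option whose outcome distribution maximizes expected utility."""
    return max(options, key=lambda name: expected_utility(U, options[name]))

# Hypothetical utilities and outcome probabilities, just to exercise the rule.
U = {"win": 1.0, "lose": -1.0, "nothing": 0.0}
options = {
    "bet":  {"win": 0.6, "lose": 0.4},   # EU = 0.6*1 - 0.4*1 = 0.2
    "pass": {"nothing": 1.0},            # EU = 0
}
print(best_option(options, U))  # -> "bet"
```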

There are two ways this can fail and need to be extended - either there's an outcome you don't have a utility for, or there's an outcome you don't have a probability for. Stuart's paper is about what you can do if you don't have some probabilities. My post is about how to get those probabilities.

If something is unintuitive, ask why it is unintuitive. Eventually either you'll reach something wrong with the problem (does it neglect model uncertainty?), or you'll reach something wrong with human intuitions (what is going on in people's heads when they get the Monty Hall problem wrong?). In the meanwhile, I still think you should follow the math - unintuitiveness is a poor signal in situations that humans don't usually find themselves in.

Comment author: 27 March 2012 09:20:58AM 0 points [-]

This looks like what Armstrong calls a "selfless" utility function i.e. it has no explicit term for Beauty's welfare here/now or at any other point in time. The important point here is that if Beauty bets tails, and the coin fell Tails, then there are two increments to U, whereas if the coin fell Heads then there is only one decrement to U. This leads to a 2/3 betting probability.

In the trillion Beauty case, the betting probability may depend on the shape of U and whether it is bounded (e.g. whether winning 1 trillion bets really is a trillion times better than winning one).
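The two comments above can be checked numerically. In the sketch below, Beauty can buy a bet on Tails at price q per awakening that pays 1 if correct: Tails means two (or n) winning awakenings, Heads means one losing awakening. The capped utility function and all numbers are illustrative assumptions, not from the thread.

```python
def eu_tails_bet(q, utility=lambda w: w, n=2):
    # Fair coin: Tails -> n winning bets (total +n*(1-q)); Heads -> one loss (-q).
    return 0.5 * utility(n * (1 - q)) + 0.5 * utility(-q)

def breakeven(utility=lambda w: w, n=2):
    # Bisect for the price q at which the bet has zero expected utility.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        q = (lo + hi) / 2
        if eu_tails_bet(q, utility, n) > 0:
            lo = q
        else:
            hi = q
    return q

print(breakeven())                        # ~2/3 with an unbounded linear U
capped = lambda w: min(w, 0.5)            # a second win adds nothing
print(breakeven(capped))                  # ~1/2: back to the coin's frequency
print(breakeven(n=10**12))                # ~1 in the trillion-Beauty case
```

With a linear U the breakeven price is n/(n+1), so two awakenings give 2/3 and a trillion push it toward 1; a bounded U breaks that proportionality, which is the point about the shape of U.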

Comment author: 27 March 2012 09:45:49AM *  0 points [-]

> This looks like what Armstrong calls a "selfless" utility function i.e. it has no explicit term for Beauty's welfare here/now or at any other point in time.

Stuart's terms are a bit misleading because they're about decision-making by counting utilities, which is not the same as decision-making by maximizing expected utility. His terms like "selfish" and "selfless" and so on are only names for counting rules for utilities, and have no direct counterpart in expected utility maximizers.

So U can contain terms like "I eat a candy bar. +1 utility." Or it could only contain terms like "a sentient life-form eats a candy bar. +1 utility." It doesn't actually change what process Sleeping Beauty uses to make decisions in anthropic situations, because those ideas only applied to decision-making by counting utilities. Additionally, Sleeping Beauty makes identical decisions in anthropic and non-anthropic situations, if the utilities and the probabilities are the same.

Comment author: 27 March 2012 10:40:11AM 0 points [-]

OK, I think this is clearer. The main point is that whatever this "ordinary" U is scoring (and it could be more or less anything), winning the tails bet scores +2 whereas losing the tails bet scores -1. This leads to a 2/3 betting probability. If subjective probabilities are identical to betting probabilities (a common position for Bayesians) then the subjective probability of tails has to be 2/3.

The point about alternative utility functions, though, is that this property doesn't always hold, i.e. two Beauties winning doesn't have to be twice as good as one Beauty winning. And that's especially true for a trillion Beauties winning.

Finally, if you adopt a relative frequency interpretation (the coin-toss is repeated multiple times, and we take the limit to infinity) then there are obviously two relative frequencies of interest. Half the coins fall Tails, but two thirds of Beauty awakenings are after Tails. Either of these can be interpreted as a probability.
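The two frequencies above are easy to confirm with a quick Monte Carlo sketch (the seed and trial count are arbitrary choices for the illustration):

```python
import random

random.seed(0)
trials = 100_000
tails_tosses = 0
awakenings = 0
tails_awakenings = 0
for _ in range(trials):
    tails = random.random() < 0.5
    if tails:
        tails_tosses += 1
        awakenings += 2          # Beauty is woken Monday and Tuesday
        tails_awakenings += 2
    else:
        awakenings += 1          # Beauty is woken only Monday

print(tails_tosses / trials)           # ~1/2 of coins fall Tails
print(tails_awakenings / awakenings)   # ~2/3 of awakenings follow Tails
```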

Comment author: 27 March 2012 11:24:54AM *  0 points [-]

> If subjective probabilities are identical to betting probabilities (a common position for Bayesians)

If we start with an expected utility maximizer, what does it do when deciding whether to take a bet on, say, a coin flip? Expected utility is the utility times the probability, so it checks whether P(heads) * U(heads) > P(tails) * U(tails). So betting can only tell you the probability if you know the utilities. And changing the utility function around is enough to get really interesting behavior, but it doesn't mean you changed the probabilities.
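The point can be made concrete: hold P(heads) fixed at 1/2 and change only the utilities, and the betting decision flips even though no probability moved. The numbers here are illustrative.

```python
def take_bet(p_heads, u_heads, u_tails):
    # Accept the heads side iff P(heads)*U(heads) > P(tails)*U(tails).
    return p_heads * u_heads > (1 - p_heads) * u_tails

print(take_bet(0.5, u_heads=3.0, u_tails=1.0))  # True: takes the bet
print(take_bet(0.5, u_heads=1.0, u_tails=3.0))  # False: same P, different U
```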

> Half the coins fall Tails, but two thirds of Beauty awakenings are after Tails. Either of these can be interpreted as a probability.

What sort of questions, given what sorts of information, would give you these two probabilities? :D

Comment author: 27 March 2012 04:35:10PM *  0 points [-]

For the first question: if I observe multiple coin-tosses and count what fraction of them are tails, then what should I expect that fraction to be? (Answer one half). Clearly "I" here is anyone other than Beauty herself, who never observes the coin-toss.

For the second question: if I interview Beauty on multiple days (as the story is repeated) and then ask her courtiers (who did see the toss) whether it was heads or tails, then what fraction of the time will they tell me tails? (Answer two thirds.)

What information is needed for this? None except what is defined in the original problem, though with the stipulation that the story is repeated often enough to get convergence.

Incidentally, these questions and answers aren't framed as bets, though I could use them to decide whether to make side-bets.