A near-final version of my Anthropic Decision Theory paper is available on the arXiv. Since anthropics problems have been discussed quite a bit on this list, I'll be presenting its arguments and results in this, subsequent, and previous posts 1 2 3 4 5 6.
In the previous post, I looked at a decision problem when Sleeping Beauty was selfless or a (copy-)total utilitarian. Her behaviour was reminiscent of someone following SIA-type odds. Here I'll look at situations where her behaviour is SSA-like.
Altruistic average utilitarian Sleeping Beauty
In the incubator variant, consider the reasoning of an Outside/Total agent who is an average utilitarian (and there are no other agents in the universe apart from the Sleeping Beauties).
"If the various Sleeping Beauties decide to pay £x for the coupon, they will make -£x in the heads world. In the tails world, they will each make £(1-x) each, so an average of £(1-x). This give me an expected utility of £0.5(-x+(1-x))= £(0.5-x), so I would want them to buy the coupon for any price less than £0.5."
And this will then be the behaviour the agents will follow, by consistency. Thus they would be behaving as if they were following SSA odds, and putting equal probability on the heads versus tails world.
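As a quick check, here is a minimal Python sketch of that expected utility calculation, using the payoffs assumed above:

```python
# Minimal sketch: expected utility, to an outside average utilitarian,
# of the Sleeping Beauties buying the coupon at price x (payoffs as above).
def expected_utility(x):
    heads = -x       # one copy exists and pays x
    tails = 1 - x    # two copies each gain (1 - x); the average is still (1 - x)
    return 0.5 * heads + 0.5 * tails  # = 0.5 - x

print(expected_utility(0.4))  # ~0.1  -> worth buying
print(expected_utility(0.5))  # 0.0   -> break-even price
print(expected_utility(0.6))  # ~-0.1 -> not worth buying
```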
For a version of this that makes sense for the classical Sleeping Beauty problem, one could imagine that she will be awakened a week after the experiment. Further imagine that she takes her winnings and losses during the experiment in the form of chocolate, consumed immediately. Then, because of the amnesia drug, she would only remember one instance of this in the tails world. Hence if she valued memories of pleasure, she would want to be average utilitarian towards the pleasures of her different versions, and would follow SSA odds.
Standard SSA has a problem with reference classes. For instance, the larger the reference class becomes, the more the results of SSA in small situations come to resemble those of SIA. The above setup mimics this effect: if there is a very large population of outsider individuals that Sleeping Beauty is altruistic towards, then the gains to two extra copies will tend to add, rather than average: if Ω is large, then 2x/(2+Ω) (the averaged gain from two created agents each gaining x) is approximately twice x/(1+Ω) (the averaged gain from one created agent gaining x), so she will behave more closely to SIA odds.
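A small numerical illustration of this limit (the population size Ω, written `omega` below, is purely illustrative):

```python
# Illustrative numbers: a large outside population omega pushes the average
# utilitarian from averaging towards adding (i.e. from SSA-like to SIA-like).
x = 1.0
for omega in [0, 10, 1000, 10**6]:
    one_copy = x / (1 + omega)        # average gain if one copy is created and gains x
    two_copies = 2 * x / (2 + omega)  # average gain if two copies are created, each gaining x
    print(omega, round(two_copies / one_copy, 4))  # ratio tends to 2 as omega grows
```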
This issue is not present for a copy-altruistic average utilitarian Sleeping Beauty, as she doesn't care about any outsiders.
Selfish Sleeping Beauty
In all of the above examples, the goals of one Sleeping Beauty were always in accordance with the goals of her copies or the past and future versions of herself. But what happens when this fails? What happens when the different versions are entirely selfish towards each other? This is very easy to understand in the incubator variant (the different created copies feel no mutual loyalty), and it can also be understood in the standard Sleeping Beauty problem if she is a hedonist with a high discount rate.
Since the different copies do have different goals, the consistency axioms no longer apply, and it seems that we cannot decide what the correct decision is in this case. There is, however, a tantalising similarity between this case and the altruistic average utilitarian Sleeping Beauty. The setups (including probabilities) are the same. By 'setup' we mean the different worlds, their probabilities, the number of agents in each world, and the decisions faced by these agents. Similarly, the possible 'linked' decisions are the same. See future posts for a proper definition of linked decisions; here it just means that all copies, being identical, will have to make the same decision, so there is one universal 'buy coupon' or 'reject coupon'. And, given this linking, the utilities derived by the agents are the same for either outcome in the two cases.
To see this, consider the selfish situation. Each Sleeping Beauty will make a single decision, whether to buy the coupon at the price offered. Not buying the coupon nets her £0 in all worlds. Buying the coupon at price £x nets her -£x in the heads world, and £(1-x) in the tails world. The linking is present but has no impact on these selfish agents: they don't care what the other copies decide.
This is exactly the same for the altruistic average utilitarian Sleeping Beauty. In the heads world, buying the coupon at price £x nets her -£x worth of utility. In the tails world, it would net the current copy £(1-x) worth of individual utility. Since the copies are identical (linked decision), this would happen twice in the tails world, but since she only cares about the average, this grants only £(1-x) worth of utility across both copies. The linking is present, and has an impact, but that impact is dissolved by the average utilitarianism of the copies.
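To make the correspondence explicit, here is a small sketch comparing the two payoff tables under the universal 'buy' decision; the price x = 0.25 is just an illustrative value:

```python
# Sketch: per-copy payoff tables for buying the coupon at price x, under the
# universal linked decision "buy".
def selfish_payoffs(x):
    # what the deciding copy herself receives in each world
    return {"heads": -x, "tails": 1 - x}

def average_utilitarian_payoffs(x):
    # heads: a single copy pays x; tails: two copies each gain (1 - x),
    # but only the average matters, which is again (1 - x)
    return {"heads": -x, "tails": 2 * (1 - x) / 2}

x = 0.25
print(selfish_payoffs(x) == average_utilitarian_payoffs(x))  # True
```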
Thus the two situations have the same setup, the same possible linked decisions and the same utility outcomes for each possible linked decision. It would seem there is nothing relevant to decision theory that distinguishes these two cases. This gives us the last axiom:
- Isomorphic decisions: If two situations have the same setup, the same possible linked decisions and the same utility outcomes for each possible linked decision, and all agents are aware of these facts, then agents should make the same decisions in both situations.
This axiom immediately solves the selfish Sleeping Beauty problem, implying that agents there must behave as they do in the altruistic average utilitarian Sleeping Beauty problem, namely paying up to £0.50 for the coupon. In this way, the selfish agents also behave as if they were following SSA probabilities, believing that heads and tails were equally likely.
Summary of results
We have broadly four categories of agents, and they follow two different types of decisions (SIA-like and SSA-like). In the Sleeping Beauty problem (and in more general problems), the categories decompose as:
- Selfless agents who will follow SIA-type odds.
- (Copy-)Altruistic total utilitarians who will follow SIA-type odds.
- (Copy-)Altruistic average utilitarians who will follow SSA-type odds.
- Selfish agents who will follow SSA-type odds.
For the standard Sleeping Beauty problem, the first three decisions were derived from consistency. The same result can be established for the incubator variants using the Outside/Total agent axioms. The selfish result, however, needs to make use of the Isomorphic decisions axiom.
EDIT: A good question from Wei Dai illustrates the issue of precommitment for selfish agents.
Okay. So, going the UDT route, what are the prices people would pay? (also known as the "correct" route - done by choosing the optimal strategy, not just the best available action)
In the selfish non-anthropic problem, we evaluate the payoff of the strategies "always yea" and "always nay." If heads (0.5), and if picked as decider (0.1), yea gives 100 and nay 700. If tails (0.5) and picked as decider (0.9), yea gives 1000 and nay 700. Adding these expected utilities gives an expected payoff of 455 from "always yea" and an expected payoff of 350 from "always nay." (corrected)
However, in the "isomorphic" altruistic case, you don't actually care about your personal reward - you simply care about the global result. Thus if heads (0.5), "always yea" always gives 100 and "always nay" always gives 700. And if tails, "always yea" always gives 1000 and "always nay" always gives 700. So in that case the payoffs are "yea" 550 and "nay" 700.
So this "isomorphic" stuff doesn't look so isomorphic.
In the selfish case, you forgot the 0.5: the payoff is 455 for "always yea", 350 for "always nay".
And you seem to be comparing selfish with selfless, not with average utilitarian.
For an average utilitarian, under "always yea", 100 is given out once in the heads world, and 1000 is given out 9 times in the tails world. These must be shared among 10 people, so the average is 0.5(100 + 1000×9)/10 = 455. For "always nay", 700 is given out once in the heads world, and 9 times in the tails world, giving 0.5(700 + 9×700)/10 = 350, the same as for the selfish agent.
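For anyone who wants to check the arithmetic in this exchange, here is a short sketch reproducing all three calculations, using the payoffs and probabilities quoted above:

```python
# Sketch reproducing the three calculations above (payoffs as quoted).
p_heads, p_tails = 0.5, 0.5

# Selfish agent: weight each payoff by the chance of being picked as decider.
selfish_yea = p_heads * 0.1 * 100 + p_tails * 0.9 * 1000  # 455.0
selfish_nay = p_heads * 0.1 * 700 + p_tails * 0.9 * 700   # 350.0

# Selfless / total view: only the global outcome in each world matters.
selfless_yea = p_heads * 100 + p_tails * 1000             # 550.0
selfless_nay = p_heads * 700 + p_tails * 700              # 700.0

# Average utilitarian: the payouts are shared among the 10 people.
avg_yea = 0.5 * (100 + 9 * 1000) / 10                     # 455.0
avg_nay = 0.5 * (700 + 9 * 700) / 10                      # 350.0

print(selfish_yea, selfish_nay)    # the selfish case
print(selfless_yea, selfless_nay)  # the selfless case
print(avg_yea, avg_nay)            # the average utilitarian case
```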