Not sure I understand your question, but:
Thanks, I fixed it.
Thanks a lot for your comments, they were very insightful for me. Let me play the Advocatus Diaboli here and argue from the perspective of a selfish agent against your reasoning (and thus also against my own, less refined version of it).
"I object to the identification 'S = $B'. I do not care about the money owned by the person in cell B, I only do so if that person is me. I do not know whether the coin has come up heads or tails, but I do not care about how much money the other person that may have been in cell B had the coin come up differently would have paid or won. I only care about the money owned by the person in cell B in "this world", where that person is me. I reject identifying myself with the other person that may have been in cell B had the coin come up differently, solely because that person would exist in the same cell as I do. My utility function thus cannot be expressed as a linear combination of $B and $C.
I would pay a counterfactual mugger. In that case, there is a transfer, as it were, between two possible selfes of mine that increases "our" total fortune. We are both both possible descendants of the same past-self, to which each of us is connected identically. The situation is quite different in the incubator case. There is no connection over a mutual past self between me and the other person that may have existed in cell B after a different outcome of the coin flip. This connection between past and future selves of mine is exactly what specifies my selfish goals. Actually, I don't feel like the person that may have existed in cell B after a different outcome of the coin flip is "me" any more than the person in cell C is "me" (if that person exists). Since I will pay and win as much as the person in cell C (if they exist), I cannot win any money from them, and I don't care about whether they exist at all, I think I should decide as an average utilitarian would. I will not pay more than $0.50."
Is the egoist arguing this way mistaken? Or is our everyday notion of selfishness just not uniquely defined when it comes to the possibility of subjectively indistinguishable agents living in different "worlds", since it rests on the dubious concept of personal identity? Can one understand selfishness both as caring about everyone living in circumstances subjectively identical to one's own (and their future selves), and as caring only about everyone to whom one is directly connected? Do these two possibilities correspond to SIA-egoists and SSA-egoists, respectively, both of which are coherent possibilities?
The decision you describe is not stable under pre-commitments. Ahead of time, all agents would pre-commit to the $2/3. Yet they seem to change their mind when presented with the decision. You seem to be double counting, using both the Bayesian update and the fact that their own decision is responsible for the other agent's decision as well.
Yes, this is exactly the point I was trying to make -- I was pointing out a fallacy. I never intended "indexicality-dependent utilitarianism" to be a meaningful concept; it's only a name for thinking in this fallacious way.
I elaborated on this difference here. However, I don't think this difference is relevant for my parent comment. By indexical utility functions I simply mean selfishness or "selfishness plus hating the other person, if another person exists", while by indexicality-independent utility functions I mean total and average utilitarianism.
The broader question is "does bringing in gnomes in this way leave the initial situation invariant"? And I don't think it does. The gnomes follow their own anthropic setup (though not their own preferences), and their advice seems to reflect this fact (consider what happens when the heads world has 1, 2 or 50 gnomes, while the tails world has 2).
As I wrote (after your comment) here, I think it is prima facie very plausible for a selfish agent to follow the gnome's advice if a) conditional on the agent existing, the gnome's utility function agrees with the agent's and b) conditional on the agent not existing, the gnome's utility function is a constant. (I didn't have condition b) explicitly in mind, but your example showed that it's necessary.) Having the number of gnomes depend upon the coin flip invalidates their purpose. The very point of the gnomes is that from their perspective, the problem is not "anthropic", but a decision problem that can be solved using UDT.
I also don't see your indexical objection. The sleeping beauty could perfectly well have an indexical version of total utilitarianism ("I value my personal utility, plus that of the sleeping beauty in the other room, if they exist"). If you want to proceed further, you seem to have to argue that indexical total utilitarianism gives different decisions than standard total utilitarianism.
That's what I tried in the parent comment. To be clear, I did not mean "indexical total utilitarianism" to be a meaningful concept, but rather a wrong way of thinking, a trap one can fall into. Very roughly, it corresponds to thinking of total utilitarianism as "I care for myself plus any other people that might exist" instead of "I care for all people that exist". What's the difference, you ask?

A minimal non-anthropic example that illustrates the difference would be very much like the incubator, but without people being created. Imagine 1000 total utilitarians with identical decision algorithms waiting in separate rooms. After the coin flip, either one of them (after heads) or two of them (after tails) are offered to buy a ticket that pays $1 after tails. When asked, the agents can correctly perform a non-anthropic Bayesian update to conclude that the probability of tails is 2/3. An indexical total utilitarian reasons: "If the coin has shown tails, another agent will pay the same amount $x that I pay and win the same $1, while if the coin has shown heads, I'm the only one who pays $x. The expected utility of paying $x is thus 1/3 * (-x) + 2/3 * 2 * (1-x)." This leads to the incorrect conclusion that one should pay up to $4/5.

The correct (UDT-) way to think about the problem is that after tails, one's decision algorithm is called twice. There's only one factor of 2, not two of them. This is all very similar to this post.
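To make the two ways of counting explicit, here is a minimal sketch in Python (the function names and the bisection helper are mine, chosen for illustration, not part of the thought experiment):

```python
# Non-anthropic example above: a fair coin is flipped; the ticket costs x and
# pays $1 after tails. After heads one agent is offered the ticket, after
# tails two agents with identical decision algorithms are.

def indexical_eu(x):
    # Fallacious "indexical" reasoning: update to P(tails | offered) = 2/3,
    # then *also* add the other agent's winnings after tails.
    return (1/3) * (-x) + (2/3) * 2 * (1 - x)

def udt_policy_eu(x):
    # UDT-style reasoning: evaluate the policy "buy at price x" from the prior
    # perspective; after tails the same decision is executed twice, so the
    # factor 2 appears exactly once.
    return (1/2) * (-x) + (1/2) * 2 * (1 - x)

def break_even(eu):
    # Both expected utilities are linear and decreasing in x; find the root
    # on [0, 1] by bisection.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if eu(mid) >= 0 else (lo, mid)
    return lo

print(break_even(indexical_eu))   # ~0.8   -> pays up to $4/5 (double counting)
print(break_even(udt_policy_eu))  # ~0.667 -> pays up to $2/3
```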
To put this again into context: You argued that selfishness is a 50/50 mixture of hating the other person, if another person exists, and total utilitarianism. My reply was that this is only true if one understands total utilitarianism in the incorrect, indexical way. I formalized this as follows: Let the utility function of a hater be vh - h*vo (here, vh is the agent's own utility, vo the other person's utility, and h is 1 if the other person exists and 0 otherwise). Selfishness would be a 50/50 mixture of hating and total utilitarianism if the utility function of a total utilitarian were vh + h*vo. However, this is exactly the wrong way of formalizing total utilitarianism. It leads, again, to the conclusion that a total utilitarian should pay up to $4/5.
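Writing the claim out explicitly (this just restates the formalization above; the expectation uses the 2/3 posterior for tails that the fallacious reasoning in the parent comment relies on):

```latex
% Selfishness as the 50/50 mixture, with the "indexical" formalization above:
\[
\tfrac{1}{2}\left[(v_h + h\,v_o) + (v_h - h\,v_o)\right] = v_h
\]
% Expected utility of the "indexical" total utilitarian, with posterior 2/3 for tails:
\[
\mathbb{E}\left[v_h + h\,v_o\right]
  = \tfrac{1}{3}(-x) + \tfrac{2}{3}\cdot 2(1-x)
  = \tfrac{4-5x}{3} \;\ge\; 0
  \iff x \le \tfrac{4}{5}
\]
```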
First scenario: there is no such gnome. The number of gnomes is also determined by the coin flip, so every gnome will see a human. Then, if we apply the reasoning from http://lesswrong.com/r/discussion/lw/l58/anthropic_decision_theory_for_selfish_agents/bhj7 , this will result in a gnome with a selfish human agreeing to x<$1/2.
If the gnomes are created only after the coin flip, they are in exactly the same situation as the humans, and we cannot learn anything by considering them that we cannot learn from considering the humans alone.
Instead, let's now make the gnome in the heads world hate the other human, if they don't have one themselves. The result of this is that they will agree to any x<$1, as they are (initially) indifferent to what happens in the heads world (the potential loss, if they are the gnome with a human, is cancelled out by the potential gain, if they are the gnome without a human).
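Writing that expectation out (a sketch with my own names; from the gnome's pre-flip perspective, each heads case has probability 1/4 and tails has probability 1/2):

```python
# Heads world: one human; this gnome hosts them with probability 1/2 and
# otherwise hates the other human (utility = minus that human's winnings).
# Tails world: both gnomes host a human. The ticket costs x and pays $1
# only after tails.
def hating_gnome_eu(x):
    heads_with_human    = -x      # own human pays x, wins nothing
    heads_without_human = +x      # hating: minus the other human's -x
    tails               = 1 - x   # own human pays x, wins $1
    return 0.25 * heads_with_human + 0.25 * heads_without_human + 0.5 * tails
    # = (1 - x) / 2, non-negative for every x <= 1 -- hence "any x < $1"
```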
What this shows is that "Conditional on me existing, the gnome's utility function coincides with mine" is not a sufficient condition for "I should follow the advice that the gnome would have precommitted to give".
What I propose is instead: "If, conditional on me existing, the gnome's utility function coincides with mine, and, conditional on me not existing, the gnome's utility function is a constant, then I should follow the advice that the gnome would have precommitted to."
ETA: Speaking of indexicality-dependent utility functions here. For indexicality-independent utility functions, such as total or average utilitarianism, the principle simplifies to: "If the gnome's utility function coincides with mine, then I should follow the advice that the gnome would have precommitted to."
Thanks for your reply.
Ok, I don't like gnomes making current decisions based on their future values.
For the selfish case, we can easily get around this by defining the gnome's utility function to be the amount of $ in the cell. If we stipulate that this can only change through humans buying lottery tickets (and winning lotteries) and that humans cannot leave the cells, the gnome's utility function coincides with the human's. Similarly, in the total (average) utilitarian case, we can define the gnome's utility function to be the amount of $ in all cells (the average amount of $ in those cells inhabited by humans).
This seems to be a much neater way of using the gnome heuristic than the one I used in the original post, since the gnome's utility function is now unchanging and unconditional. The only issue seems to be that before the humans are created, the gnome's utility function is undefined in the average utilitarian case ("0/0"). However, this is more a problem of average utilitarianism than of the heuristic per se. We can get around it by defining the utility to be 0 if there aren't any humans around yet.
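As a quick sanity check, here is a sketch of the gnome's pre-flip expected utility under these three definitions (the function names are mine; I assume the incubator setup discussed in this thread, where the ticket costs x and pays $1 in the tails world, and after heads only one of the two cells is inhabited):

```python
def eu_selfish(x):   # gnome's utility: $ in its own cell
    # heads (prob 1/2): its cell is inhabited with prob 1/2; tails: always inhabited
    return 0.5 * 0.5 * (-x) + 0.5 * (1 - x)   # = 1/2 - 3x/4, zero at x = 2/3

def eu_total(x):     # gnome's utility: $ in all cells
    return 0.5 * (-x) + 0.5 * 2 * (1 - x)     # = 1 - 3x/2, zero at x = 2/3

def eu_average(x):   # gnome's utility: average $ over inhabited cells
    return 0.5 * (-x) + 0.5 * (1 - x)         # = 1/2 - x, zero at x = 1/2
```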
The incubator always creates two people, but in the heads world, the second person can never gain (nor lose) anything, no matter what they agree to: any deal is nullified. This seems to be a gnome setup without the gnomes. If everyone is an average utilitarian, then they will behave exactly as the total utilitarians would (since the population is equal anyway) and buy the ticket for x<$2/3. So this setup has changed the outcome for average utilitarians. If it's the same as the gnome setup (and it seems to be), then the gnome setup is interfering with the decisions in cases we know about. The fact that the number of gnomes is fixed is the likely cause.
I don't follow. As I should have written in the original post, total/average utilitarianism of course includes only the wellbeing and population of humans, not of gnomes. Otherwise, it would be trivial that the presence of gnomes affects the conclusions. That the presence of an additional human affects the conclusion for average utilitarians is not surprising, since, in contrast to the presence of gnomes, an additional human changes the relevant population.
Incidentally, one reason for the selfish = average utilitarian identification is that I sometimes model selfishness as the average of a total utilitarian incubator and an anti-incubator (where the two copies hate each other in the tails world). A 50%-50% mixture of total utilitarianism and hatred seems to be a good model of selfishness, and gives the x<$1/2 answer.
Hm, so basically one could argue as follows against my conclusion that both selfish and total utilitarians pay up to $2/3: A hater wouldn't pay anything for a ticket that pays $1 in the tails world. Since selfishness is a mixture of total utilitarianism and hating, a selfish person certainly cannot have the same maximal price as a total utilitarian.
However, I feel like "caring about the other person in the tails world in a total utilitarian sense" and "hating the other person in the tails world" are not exactly mirror images of each other. The difference is that total utilitarianism is indexicality-independent, while "hating the other person" isn't. My claim is: however you formalize "hating the person in the other room in the tails world" and "being a total utilitarian", the statements "a total utilitarian pays up to $2/3", "selfishness is a mixture of total utilitarianism and hating", and "a hater would not pay more than $0 for the ticket" are never simultaneously true.
Imagine that the human formally writes down their utility function in order to apply the "if there were a gnome in my room, what maximal price would it advise me to pay, after asking itself what advice it would have precommitted to?" heuristic. We introduce the variables 'vh' and 'vo' for the $-value in this room and in the other room, respectively. These are 0 if there's no human, -x after buying a ticket after heads, and 1-x after buying a ticket after tails. We also introduce a variable 't' which is 1 after tails and 0 after heads.
We can then write down the following utility functions with their respective expectation values (from the point of view of the gnome before the coin flip):
egoist: vh => 1/4 * (-x + 0 + (1-x) + (1-x))
total ut.: vh + t*vo => 1/4 * (-x + 0 + 2*(1-x) + 2*(1-x))
hate: vh - t*vo => 1/4 * (-x + 0 + 0 + 0)
Here, we see that egoism is indeed a mixture of total utilitarianism and hating, the egoist pays up to $2/3, and the hater pays nothing. However, according to this definition of total utilitarianism, a t.u. should pay up to $4/5. Its utility function is indexicality-dependent (the variable t enters only the utility coming from the other person), in contrast to true total utilitarianism.
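These three expectation values can be checked directly (a sketch; the function names are mine):

```python
# The four equally likely cases from the gnome's pre-flip perspective:
# (heads, human here), (heads, human in the other room), and the two tails cases.
def eu_egoist(x):   return 0.25 * (-x + 0 + (1 - x) + (1 - x))      # zero at x = 2/3
def eu_total_t(x):  return 0.25 * (-x + 0 + 2*(1 - x) + 2*(1 - x))  # zero at x = 4/5
def eu_hater_t(x):  return 0.25 * (-x + 0 + 0 + 0)                  # zero only at x = 0

# Egoism as the 50/50 mixture of (this version of) total utilitarianism and hating:
assert all(abs(eu_egoist(x) - 0.5 * (eu_total_t(x) + eu_hater_t(x))) < 1e-12
           for x in (0.0, 0.3, 2/3, 0.9))
```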
In order to write down an indexicality-independent utility function, we introduce new variables 'nh' and 'no', the number of people here and in the other room (0 or 1). Then, we could make the following definitions:
egoist: nh*vh
total ut.: nh*vh + no*vo
hate: nh*vh - no*vo
(The 'nh' and 'no' factors are actually redundant, since 'vh' ('vo') is defined to be zero if 'nh' ('no') is.)
With these definitions, both an egoist and a t.u. pay up to $2/3, and egoism is a mixture of t.u. and hating. However, the expected utility of a hater is now 0 independently of x, so that there is no longer a contradiction. The reason is that we now count the money of the single heads-world human once positively (if ze is in our room) and once negatively (if ze is in the other room). This isn't what we meant by hating, so we could modify the utility function of the hater as follows:
hate: nh*(vh - no*vo)
This again reproduces what we mean by hating (it is equivalent to the old definition 'vh - t*vo'), but now egoism is no longer a combination of hating and t.u.
In conclusion, it doesn't seem to be possible to derive a contradiction between "a hater wouldn't pay anything for a lottery ticket" and "both egoists and total utilitarians would pay up to $2/3".
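For completeness, the same kind of check for the indexicality-independent definitions above (again a sketch; the case encoding and names are mine):

```python
# Four equally likely cases as in the calculation above; each case is
# (nh, vh, no, vo): whether a human is in this / the other room and the
# $-values in those rooms.
def cases(x):
    return [(1, -x,    0, 0),       # heads, human in this room
            (0, 0,     1, -x),      # heads, human in the other room
            (1, 1 - x, 1, 1 - x),   # tails (counted twice, as above)
            (1, 1 - x, 1, 1 - x)]

def eu(utility, x):
    return sum(utility(*case) for case in cases(x)) / 4

egoist    = lambda nh, vh, no, vo: nh * vh              # break-even at x = 2/3
total_ut  = lambda nh, vh, no, vo: nh * vh + no * vo    # break-even at x = 2/3
hater     = lambda nh, vh, no, vo: nh * vh - no * vo    # expected utility 0 for every x
hater_mod = lambda nh, vh, no, vo: nh * (vh - no * vo)  # break-even at x = 0

for x in (0.0, 2/3, 0.9):
    print(x, eu(egoist, x), eu(total_ut, x), eu(hater, x), eu(hater_mod, x))
```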
Not sure how much sense it makes to take the arithmetic mean of probabilities when the odds vary over many orders of magnitude. If the average is, say, 30%, then it hardly matters whether someone answers 1% or .000001%. Also, it hardly matters whether someone answers 99% or 99.99999%.
I guess the natural way to deal with this would be to average (i.e., take the arithmetic mean of) the order of magnitude of the odds (i.e., log[p/(1-p)], where p is someone's answer). Using this method, it would make a difference whether someone is "pretty certain" or "extremely certain" that a given statement is true or false.
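A minimal sketch of the difference between the two ways of averaging (the three example answers are made up):

```python
import math

def log_odds(p):
    return math.log(p / (1 - p))

def from_log_odds(z):
    return 1 / (1 + math.exp(-z))

answers = [0.30, 0.01, 0.000001]  # made-up example answers

arithmetic_mean = sum(answers) / len(answers)
pooled_log_odds = from_log_odds(sum(log_odds(p) for p in answers) / len(answers))

print(arithmetic_mean)   # ~0.103 -- dominated by the 30% answer; 1% vs 0.0001% barely matters
print(pooled_log_odds)   # ~0.0016 -- the "extremely certain" answer actually moves the result
```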
Does anyone know what the standard way for dealing with this issue is?
Dear Americans,
While spending a holiday in the New Orleans and Mississippi region, I was baffled by the typical temperatures in air-conditioned rooms. The point of air conditioning is to make people feel comfortable, right? It is obviously very bad at achieving this. I saw shivering girls with blue lips waiting in the airport. I saw ladies carrying a jacket with them, which they put on as soon as they entered an air-conditioned room. The rooms were often so cold that I felt relieved the moment I left them and went back into the heat. Cooling down to somewhat above the optimally comfortable temperature would make some economic and ecological sense, and would make the transition between outside and inside less brutal. Cooling down below it seems patently absurd.
What is going on here? Some possible explanations that come to mind:
Still, the above points seem nowhere near sufficient to explain the phenomenon. The temperatures seem uncomfortably low even for people wearing a suit and tie. Places like cinemas clearly want their customers to feel comfortable, and their employees don't wear suits.
Thanks for clarifying.