Not sure I understand your question, but:
Thanks, I fixed it.
Thanks a lot for your comments, they were very insightful for me. Let me play the Advocatus Diaboli here and argue from the perspective of a selfish agent against your reasoning (and thus also against my own, less refined version of it).
"I object to the identification 'S = $B'. I do not care about the money owned by the person in cell B; I only care about it if that person is me. I do not know whether the coin has come up heads or tails, but I do not care how much money the other person, who may have been in cell B had the coin come up differently, would...
The decision you describe is not stable under pre-commitments. Ahead of time, all agents would pre-commit to the $2/3. Yet they seem to change their mind when presented with the decision. You seem to be double counting: you apply the Bayesian update once, and then additionally use the fact that their own decision is responsible for the other agent's decision as well.
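To make the double counting concrete, here is a sketch, assuming the standard incubator setup (tails creates two agents, heads one, and each agent can buy a ticket that pays $1 on tails at price $x; the figures below are my reconstruction, not part of the original problem statement):

Pre-commitment, counting both tails agents once: EU(buy) = (1/2)(-x) + (1/2)(2)(1 - x) = 1 - (3/2)x, so buy iff x < $2/3.

Updating to P(tails | I exist) = 2/3 and then also doubling the tails payoff because "my decision is responsible for the other agent's decision": EU(buy) = (1/3)(-x) + (2/3)(2)(1 - x) = 4/3 - (5/3)x, so buy iff x < $4/5, which is the instability relative to the pre-committed $2/3.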
Yes, this is exactly the point I was trying to make -- I was pointing out a fallacy. I never intended "indexicality-dependent utilitarianism" to be a meaningful concept; it's only a name for thinking in this fallacious way.
I elaborated on this difference here. However, I don't think this difference is relevant for my parent comment. With indexical utility functions I simply mean selfishness or "selfishness plus hating the other person if another person exists", while with indexicality-independent utility functions I mean total and average utilitarianism.
The broader question is "does bringing in gnomes in this way leave the initial situation invariant"? And I don't think it does. The gnomes follow their own anthropic setup (though not their own preferences), and their advice seems to reflect this fact (consider what happens when the heads world has 1, 2 or 50 gnomes, while the tails world has 2).
As I wrote (after your comment) here, I think it is prima facie very plausible for a selfish agent to follow the gnome's advice if a) conditional on the agent existing, the gnome's utility function agr...
First scenario: there is no such gnome. The number of gnomes is also determined by the coin flip, so every gnome will see a human. If we then apply the reasoning from http://lesswrong.com/r/discussion/lw/l58/anthropic_decision_theory_for_selfish_agents/bhj7 , a gnome paired with a selfish human will agree to x < $1/2.
If the gnomes are created only after the coin flip, they are in exactly the same situation as the humans, and we cannot learn anything by considering them that we cannot learn from considering the humans alone.
...Instead, let'
Thanks for your reply.
Ok, I don't like gnomes making current decisions based on their future values.
For the selfish case, we can easily get around this by defining the gnome's utility function to be the amount of $ in the cell. If we stipulate that this can only change through humans buying lottery tickets (and winning lotteries) and that humans cannot leave the cells, the gnome's utility function coincides with the human's. Similarly, we can define the gnome's utility function to be the amount of $ in all cells (the average amount of $ in those cells ...
Not sure how much sense it makes to take the arithmetic mean of probabilities when the odds vary over many orders of magnitude. If the average is, say, 30%, then it hardly matters whether someone answers 1% or .000001%. Also, it hardly matters whether someone answers 99% or 99.99999%.
I guess the natural way to deal with this would be to average (i.e., take the arithmetic mean of) the order of magnitude of the odds (i.e., log[p/(1-p)], where p is someone's answer). Using this method, it would make a difference whether someone is "pretty certain" or "extremely certain" that a certain statement is true or false.
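A minimal sketch of this log-odds averaging (the function names logit/inv_logit and the sample answers are just for illustration):

```python
import math

def logit(p):
    # Order of magnitude of the odds: log10(p / (1 - p))
    return math.log10(p / (1 - p))

def inv_logit(x):
    # Map a log10-odds value back to a probability
    return 10 ** x / (1 + 10 ** x)

# Hypothetical answers: 30%, 1%, and 0.0001%
answers = [0.30, 0.01, 0.000001]

# The arithmetic mean barely notices the difference between 1% and 0.0001%
arithmetic_mean = sum(answers) / len(answers)

# Averaging log-odds distinguishes "pretty certain" from "extremely certain"
log_odds_mean = inv_logit(sum(logit(p) for p in answers) / len(answers))

print(arithmetic_mean)  # about 0.103, dominated by the 30% answer
print(log_odds_mean)    # about 0.0016, much lower
```

Replacing the 0.0001% answer with 1% would leave the arithmetic mean essentially unchanged, but would move the log-odds mean by more than an order of magnitude.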
Does anyone know what the standard way for dealing with this issue is?
For the School Mark problem, the causal diagram I obtain from the description is one of these:
diagram
or
diagram
For the first of these, the teacher has waived the requirement of actually sitting the exam, and the student needn't bother. In the second, the pupil will not get the marks except by studying for and taking the exam. See also the decision problem I describe at the end of this comment.
I think it's clear that Pallas had the first diagram in mind, and his point was exactly that the rational thing to do is to study despite the fact that the ...
The results you quote are very interesting and answer questions I've been worrying about for some time. Apologies for bringing up two purely technical inquiries:
Could you provide a reference for the result you quote? You referred to Eq. (34) in Everett's original paper in another comment, but this doesn't seem to make the link to the VNM axioms and decision theory.
That seems wrong to me. There has to be a formulation of the form: if the two initially perfectly entangled particles get only slightly entangled with other particles, then quantum ...
Dear Americans,
While spending a holiday in the New Orleans and Mississippi region, I was baffled by the typical temperatures in air-conditioned rooms. The point of air conditioning is to make people feel comfortable, right? It is obviously very bad at achieving this. I saw shivering girls with blue lips waiting in the airport. I saw ladies wearing a jacket with them which they put on as soon as they entered an air-conditioned room. The rooms were often so cold that I felt relieved the moment I left them and went back into the heat. Cooling down less than t...