My confidence bounds were 75% and 98% for defect, so my estimate was diametrically opposed to yours. If the admittedly low sample size of these comments is any indication, we were both way off.
Why do you think most people would cooperate? I would expect this demographic to run the consequentialist calculation and find that an isolated cooperation has almost no effect on their expected value, whereas an isolated defection almost quadruples it.
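To make the numbers concrete, here's a rough sketch. The payoff structure and every figure below are made up purely for illustration; they are not the actual survey rules.

```python
# Hypothetical payoff structure (NOT the actual survey rules): one respondent
# is drawn uniformly at random, and the winner's prize scales with the overall
# cooperation rate, with a multiplier if the winner defected.

N = 1000     # hypothetical number of respondents
c = 700      # hypothetical number of *other* respondents who cooperate
B = 60.0     # hypothetical base prize in dollars
MULT = 4.0   # hypothetical multiplier applied to a defector's prize

def my_expected_prize(i_cooperate: bool) -> float:
    """My expected winnings under the hypothetical rules above."""
    frac = (c + (1 if i_cooperate else 0)) / N        # cooperation rate, including my choice
    prize_if_i_win = B * frac * (1.0 if i_cooperate else MULT)
    return prize_if_i_win / N                          # I win with probability 1/N

print(my_expected_prize(True))    # ~0.042: my cooperation barely moves the rate
print(my_expected_prize(False))   # ~0.168: defecting roughly quadruples my EV
```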
Nice job on the survey. I loved the cooperate/defect problem with its accompanying calibration questions.
I defected, since a quick expected value calculation makes it the overwhelmingly obvious choice (assuming no communication between players, which I am explicitly violating right now). Judging from the comments, it looks like my calibration lower bound is going to be way off.
I agree that the statement is not crystal clear. It makes it easy to confuse the "change in the average" with the "average of the change."
Mathematically speaking, we represent our beliefs as a probability distribution over the possible outcomes, and we update it upon seeing the result of a test (with a possibly different update for each possible result). The statement is that “if we average the possible posterior probability distributions, weighted by how likely each result is, we end up with our original probability distribution.”
If that were not the case, it would imply that we were failing to make use of all of the prior information we have in our original distribution.
A mistaken reading of the statement is that “the average of the absolute change in the probability distribution upon measurement is zero.” This is not the case, as you rightly point out: it would imply that we expect the test to yield no information.
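In symbols, with H the hypothesis and e ranging over the possible test results, the first statement is just Bayes' theorem plus the law of total probability:

$$\sum_{e} P(E=e)\,P(H \mid E=e) \;=\; \sum_{e} P(E=e)\,\frac{P(E=e \mid H)\,P(H)}{P(E=e)} \;=\; P(H)\sum_{e} P(E=e \mid H) \;=\; P(H),$$

whereas the expected absolute change, $\sum_{e} P(E=e)\,\lvert P(H \mid E=e) - P(H)\rvert$, is strictly positive for any informative test.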
For the moment, I'm going to strike the comment from the post. I don't want to ascribe a viewpoint to VincentYu that he doesn't actually hold.
I added a section called "Deciding how to decide" that (hopefully) deals with this issue appropriately. I also amended the conclusion and added you to the acknowledgements.
I'm not sure why it got moved: maybe not central to the thesis of LW, or maybe not high enough quality. I'm going to add some discussion of counter-arguments to the limit method. Maybe that will make a difference.
I noticed that the discussion picked up when it got moved, and I learned some useful stuff from it, so I'm not complaining.
Ok, I think I've got it. I'm not familiar with VNM utility, so I'll make sure to educate myself.
I'm going to edit the post to reflect this issue, but it may take me some time. It is clear (now that you point it out) that we can think of the ill-posedness as coming from our insistence that the solution conform to aggregative utilitarianism, and it may be possible to sidestep the paradox by choosing another paradigm of decision theory. Still, I think it's worth working through as an example because, as you say, AU is a good general standard, and many readers will be familiar with it. At a minimum, this would be an interesting finite AU decision problem.
Thanks for all the time you've put into this.
I would like to include this issue in the post, but I want to make sure I understand it first. Tell me if this is right:
It is possible, mathematically, to represent a countably infinite number of immortal people, as well as the process of moving them between spheres. Further, we should not expect a priori that a problem involving such infinities has a solution equivalent to the ones reached by taking infinite limits of an analogous finite problem. Some confusion arises when we introduce the concept of “utility” to determine which of the two choices is better, since utility only serves as a basis for making decisions in finite problems.
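(As a toy illustration of that last point, using my own example rather than the setup in your post: if option A gives each of the countably many people utility 1 and option B gives each of them utility 2, then

$$U_A \;=\; \sum_{i=1}^{\infty} 1 \;=\; \infty \qquad\text{and}\qquad U_B \;=\; \sum_{i=1}^{\infty} 2 \;=\; \infty,$$

so comparing totals cannot even recover the verdict that B is better for literally everyone. The finite comparison of N versus 2N is unambiguous, but it does not survive the limit as a usable total.)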
If that’s what you’re saying, I have a couple of questions.
Do you view the paradox as therefore unresolvable as stated, or would you claim that a different resolution is correct?
If I carefully restricted my claim about ill-posedness to the question of which choice is better in a utilitarian sense, would you agree with it?
The final section has been edited to reflect the concerns of some of the commenters.
It's true: if you're optimizing for altruism, cooperation is clearly better.
I guess it's not really a "dilemma" as such, since the optimal choice doesn't depend at all on what anyone else does. If you're trying to maximize your own EV, defect. If you're trying to maximize other people's EV, cooperate.
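To make that concrete with made-up numbers (the payoff structure below is purely illustrative, not the actual survey rules):

```python
# Hypothetical rules (NOT the actual survey's): one respondent is drawn at
# random; the winner's prize scales with the cooperation rate, with a
# multiplier if the winner defected. Whatever everyone else does, defecting
# maximizes my own expected prize and cooperating maximizes everyone else's.

N, B, MULT = 1000, 60.0, 4.0   # hypothetical respondent count, base prize, defector multiplier

def my_ev(me_coop: bool, others_coop: int) -> float:
    frac = (others_coop + (1 if me_coop else 0)) / N
    return B * frac * (1.0 if me_coop else MULT) / N

def others_total_ev(me_coop: bool, others_coop: int) -> float:
    frac = (others_coop + (1 if me_coop else 0)) / N
    # every other respondent's expected prize is proportional to frac,
    # so their combined EV rises whenever I cooperate
    return (others_coop * B * frac + (N - 1 - others_coop) * B * frac * MULT) / N

for others in (100, 500, 900):
    print(others,
          my_ev(False, others) > my_ev(True, others),                        # defecting is better for me
          others_total_ev(True, others) > others_total_ev(False, others))    # cooperating is better for them
```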