I don't understand how the given examples illustrate free-floating beliefs: they seem to have at least some predictive power, and thus shape anticipation (some comments by others below illustrate this better).
The phlogiston theory had predictive power (e.g. what kind of "air" could be expected to support combustion, and that substances would grow lighter when they burned), and it was falsifiable (and was eventually falsified). It had advantages over the theories it replaced and was in turn replaced by a theory that represented a better understanding. (I base this reading on Jim Loy's page on Phlogiston Theory.)
Literary genres don't have much predictive power if you don't know anything about them - if you do, then they do. Classifying a writer as producing "science fiction" or "fantasy" creates anticipations that are statistically meaningful. For another comparison, saying that some band plays "Death Metal" will shape our anticipations - somewhat differently for those who can distinguish Death Metal from Speed Metal than for those who merely know that "Metal" means "noise".
I can imagine beliefs leading to false anticipations, and they're obviously inferior to beliefs leading to more correct ones. That doesn't mean they're free-floating.
One example given of a free-floating belief is the tree falling in the forest: believing that it makes a sound does not anticipate any sensory experience, since the tree falls explicitly where nobody is around to hear it, and whether or not there is a sound will not change how the forest looks when we enter it later. However, letting go of the belief that the tree makes a sound does not seem very useful to me either. What am I missing?
I understand that many beliefs are held not because they have predictive power, but because they generalize experiences (or thoughts) we have had into a condensed form: a sort of "packing algorithm" for the mind when we detect something common. When we understand this commonality well enough, we reach the point where we can make predictions; until then, we can't, but we may later. There is no belief or thought we can hold that we couldn't trace back to experiences; beliefs are not anticipatory, but formed in hindsight. They organize past experience. Can you predict which of these beliefs is not going to be helpful in organizing future experiences? How?
This is my attempt at a pedagogical exposition of “the solution”. It’s overly long, and I've completely lost perspective on what is and isn't already understood by the group here. But since I've written up this solution for myself, I'll go ahead and share it.
The cases I'm describing below are altered from the OP so that they are completely non-metaphysical, in the sense that you could implement them in real life with real people. Thus there is an objective reality about whether money is collectively lost or won, so there is finally no ambiguity about what the correct calculation actually is.
Suppose that there are twenty different graduate students {Amy, Betty, Cindy, ..., Tony} and two hotels connected by a breezeway. Hotel Green has 18 green rooms and 2 red rooms; Hotel Red has 18 red rooms and 2 green rooms. Every night for many years, the students are assigned rooms in either Hotel Green or Hotel Red depending on a coin flip (heads --> Hotel Green for the night, tails --> Hotel Red for the night). Students don’t know which hotel they are in, but each can see their own room's color. If a student sees a green room, that student correctly deduces they are in Hotel Green with 90% probability.
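The 90% figure is just Bayes' rule applied to the room counts. A quick check (my own sketch, not part of the original comment):

```python
# P(Hotel Green | green room), with a fair coin choosing the hotel each night
p_green_room_given_green_hotel = 18 / 20   # 18 of Hotel Green's 20 rooms are green
p_green_room_given_red_hotel = 2 / 20      # only 2 of Hotel Red's rooms are green

posterior = (0.5 * p_green_room_given_green_hotel) / (
    0.5 * p_green_room_given_green_hotel + 0.5 * p_green_room_given_red_hotel
)
# posterior is 0.9, matching the 90% deduction above
```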
Case 1: Suppose that every morning, Tony is allowed to bet that he is in Hotel Green. If he bets ‘yes’ and is correct, he pockets $12. If he bets ‘yes’ and is wrong, he has to pay $52. (In other words, the payoff for a correct bet is $12, and the payoff for a wrong bet is -$52.) What is the expected value of his betting if he says ‘yes’ whenever he is in a green room?
For every 20 times that Tony says ‘yes’, he wins 18 times (18 x $12) and loses twice (2 x $52), consistent with his 90% posterior. On average he wins $5.60 per bet, or $2.80 per night. (He says “yes” to the bet 1 out of every 2 nights, because that is the frequency with which he finds himself in a green room.) This is a steady money pump in the student’s favor.
The correct calculation for Case 1 is:
average payoff per bet = (probability of being right) x (payoff if right) + (probability of being wrong) x (payoff if wrong) = 0.9 x 12 + 0.1 x (-52) = 5.6.
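As a sanity check on Case 1, here is a small Monte Carlo sketch of my own (the hotel/room probabilities are from the setup above; the simulation itself is not in the original comment):

```python
import random

def case1_average_payoff(nights=200_000, seed=0):
    """Simulate Case 1: Tony bets 'Hotel Green' only when he wakes in a green room."""
    rng = random.Random(seed)
    total, bets = 0, 0
    for _ in range(nights):
        in_green_hotel = rng.random() < 0.5                 # fair coin picks the hotel
        # 18 of 20 rooms match the hotel's color; 2 do not
        room_is_green = rng.random() < (0.9 if in_green_hotel else 0.1)
        if room_is_green:                                   # Tony only bets on green
            bets += 1
            total += 12 if in_green_hotel else -52
    return total / bets

# The simulated average payoff per bet comes out close to the analytic $5.60
```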
Case 2: Suppose that Tony doesn’t pocket the money, but instead the money is placed in a tip jar in the breezeway. Tony’s betting contributes $2.80 per night on average to the tip jar.
Case 3: Suppose there is nothing special about Tony, and all the students get to make bets. They will all bet when they wake in green rooms, each adding $2.80 per night to the tip jar on average. Collectively, the students add $56 per night to the tip jar on average. (If you think about it a minute, you will see that they add $216 to the tip jar on nights they are assigned to Hotel Green and lose $104 on nights they are assigned to Hotel Red.) If the money is distributed back to the students, each makes $2.80 per night, the same steady money pump in their favor that Tony enjoyed in Case 1.
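The Case 3 arithmetic can be checked directly with the numbers above (a quick sketch of my own):

```python
# On Hotel Green nights, 18 green-roomers each win $12; on Hotel Red nights,
# the 2 students in green rooms each lose $52. Each hotel occurs half the time.
green_night_total = 18 * 12            # +$216 to the tip jar
red_night_total = 2 * -52              # -$104 from the tip jar

per_night = 0.5 * green_night_total + 0.5 * red_night_total   # $56 per night
per_student = per_night / 20                                  # $2.80 per student
```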
Case 4: Now consider the case described in the OP. We already understand that the students will vote “yes” when they wake in green rooms and that they expect to make money doing so. Now the rules change, however: when all the green roomers unanimously vote “yes”, $12 is added to the tip jar if they are correct and $52 is subtracted if they are wrong. Since the students are assigned to Hotel Green half the time and to Hotel Red half the time, the tip jar loses $20 per night on average. Suddenly, each student is losing $1 a night!
Whenever a student votes correctly, it is because everyone is in Hotel Green, per the initial setup of the problem in the OP. So all 18 green-roomer votes are correct and collectively earn $12 for that night: the payoff is $12/18 per correct vote. Likewise, the payoff per wrong vote is -$52/2.
So the correct calculation for case 4 is as follows:
average payoff per bet = (probability of being right) x (payoff if right) + (probability of being wrong) x (payoff if wrong) = 0.9 x (12/18) + 0.1 x (-52/2) = -2.
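Spelled out in code (my own sketch of the per-voter calculation above):

```python
# Case 4: one group bet per night; per-bet view of a single green-roomer
p_right, p_wrong = 0.9, 0.1
payoff_right = 12 / 18     # the $12 win is shared among 18 correct voters
payoff_wrong = -52 / 2     # the $52 loss falls on only 2 wrong voters

per_bet = p_right * payoff_right + p_wrong * payoff_wrong
# per_bet is -2 dollars per bet (up to floating-point rounding)
```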
So in conclusion, in the OP problem, the green roomer must recognize that he is dealing with Case 4 and not Case 1: the payoff per voter is different, though the posterior is not.
I believe both of your computations are correct, and the fallacy lies in mixing up the payoff for the group with the payoff for the individual - which the framing of the problem as posed does suggest, with multiple identities that are actually the same person. More precisely, the probabilities for the individual are 90/10, but the probabilities for the group are 50/50, and if you compute payoffs for the group (+$12/-$52), you need to use the group probabilities. (It would be different if the narrator ("I") offered the guinea pig ("you") the $12/$52 odds individually.)
byrnema looked at the result from the group viewpoint; you get the same result when you approach it from the individual viewpoint, if done correctly, as follows:
For a single person, the correct payoff is not $12 vs. -$52, but rather $1 minus $6/18 to reimburse the reds (i.e. about $0.67) with 90% probability, and $1 minus $54/2 (i.e. -$26) with 10% probability. So each of the copies of the guinea pig is out of pocket by (2/3) x 0.9 + (-26) x 0.1 = 0.6 - 2.6 = -2 dollars per bet, on average.
The fallacy of Eliezer's guinea pigs is that each of them thinks he gets the whole $18 each time, which means that the 18 goes into his computation twice (squared) for the winnings (18 x 18/20). This is not a problem with anthropic reasoning, but with statistics.
A distrustful individual would ask themselves, "what is the narrator getting out of it?", and realize that it is the narrator who sees the -$12 / +$52 outcome, not the guinea pig - and that for the narrator, the 50/50 probability applies. Don't mix them up!