Several years ago, I posted about V.S. Ramachandran's 1996 theory explaining anosognosia through an "apologist" and a "revolutionary".
Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs after right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter's arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient's left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts to the bizarre excuses and confabulations.
Ramachandran suggested that the left brain is an "apologist", trying to justify existing theories, and the right brain is a "revolutionary" which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient's arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.
In the almost twenty years since Ramachandran's theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.
Abstract: Exactly what is fallacious about a claim like "ghosts exist because no one has proved that they do not"? And why does a claim with the same logical structure, such as "this drug is safe because we have no evidence that it is not", seem more plausible? Looking at various fallacies – the argument from ignorance, circular arguments, and the slippery slope argument – we find that they can be analyzed in Bayesian terms, and that people are generally more convinced by arguments which provide greater Bayesian evidence. Arguments which provide only weak evidence, though often evidence nonetheless, are considered fallacies.
As a Nefarious Scientist, Dr. Zany is often teleconferencing with other Nefarious Scientists. Negotiations about things such as "when we have taken over the world, who's the lucky bastard who gets to rule over Antarctica" will often turn tense and stressful. Dr. Zany knows that stress makes it harder to evaluate arguments logically. To make things easier, he would like to build a software tool that would monitor the conversations and automatically flag any fallacious claims as such. That way, if he's too stressed out to realize that an argument offered by one of his colleagues is actually wrong, the software will work as backup to warn him.
Unfortunately, it's not easy to define what counts as a fallacy. At first, Dr. Zany tried looking at the logical form of various claims. An early example that he considered was "ghosts exist because no one has proved that they do not", which felt clearly wrong, an instance of the argument from ignorance. But when he programmed his software to warn him about sentences like that, it ended up flagging the claim "this drug is safe, because we have no evidence that it is not". Hmm. That claim felt somewhat weak, but it didn't feel obviously wrong the way that the ghost argument did. Yet they shared the same structure. What was the difference?
The argument from ignorance
One kind of argument from ignorance is based on negative evidence. It assumes that if the hypothesis of interest were true, then experiments made to test it would show positive results. If a drug were toxic, tests of toxicity would reveal this. Whether or not this argument is valid depends on whether the tests would indeed show positive results, and with what probability.
With some thought and help from AS-01, Dr. Zany identified three intuitions about this kind of reasoning.
1. Prior beliefs influence whether or not the argument is accepted.
A) I've often drunk alcohol, and never gotten drunk. Therefore alcohol doesn't cause intoxication.
B) I've often taken Acme Flu Medicine, and never gotten any side effects. Therefore Acme Flu Medicine doesn't cause any side effects.
Both of these are examples of the argument from ignorance, and both seem fallacious. But B seems much more compelling than A, since we know that alcohol causes intoxication, while we also know that not all kinds of medicine have side effects.
2. The more evidence found that is compatible with the conclusions of these arguments, the more acceptable they seem to be.
C) Acme Flu Medicine is not toxic because no toxic effects were observed in 50 tests.
D) Acme Flu Medicine is not toxic because no toxic effects were observed in 1 test.
C seems more compelling than D.
3. Negative arguments are acceptable, but they are generally less acceptable than positive arguments.
E) Acme Flu Medicine is toxic because a toxic effect was observed (positive argument)
F) Acme Flu Medicine is not toxic because no toxic effect was observed (negative argument, the argument from ignorance)
Argument E seems more convincing than argument F, but F is somewhat convincing as well.
"Aha!" Dr. Zany exclaims. "These three intuitions share a common origin! They bear the signatures of Bayonet reasoning!"
"Bayesian reasoning", AS-01 politely corrects.
"Yes, Bayesian! But, hmm. Exactly how are they Bayesian?"
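The three intuitions can in fact be reproduced with a few lines of Bayes' theorem. The sketch below is my own illustration, not part of the original dialogue, and every number in it (the priors, the per-test sensitivity) is made up; it also assumes a test that never falsely flags a safe drug.

```python
def posterior_not_toxic(prior_toxic, sensitivity, n_negative_tests):
    """P(not toxic | n tests all came back negative).

    Assumes each test independently detects a truly toxic drug with
    probability `sensitivity`, and never fires on a safe drug.
    """
    p_evidence_if_toxic = (1 - sensitivity) ** n_negative_tests
    p_evidence_if_safe = 1.0
    p_evidence = (prior_toxic * p_evidence_if_toxic
                  + (1 - prior_toxic) * p_evidence_if_safe)
    return (1 - prior_toxic) * p_evidence_if_safe / p_evidence

# Intuition 1: the same negative evidence moves a sceptical prior less.
# "Alcohol doesn't intoxicate" starts with prior_toxic near 1; the flu
# medicine starts much lower (made-up numbers throughout).
weak = posterior_not_toxic(prior_toxic=0.99, sensitivity=0.5, n_negative_tests=3)
strong = posterior_not_toxic(prior_toxic=0.2, sensitivity=0.5, n_negative_tests=3)
assert weak < strong

# Intuition 2: 50 clean tests are more convincing than 1.
one_test = posterior_not_toxic(0.5, 0.5, 1)
fifty_tests = posterior_not_toxic(0.5, 0.5, 50)
assert one_test < fifty_tests

# Intuition 3: negative evidence is evidence, but never proof.
assert fifty_tests < 1.0
```

Under these assumptions a single positive observation would be conclusive (its likelihood ratio is infinite if safe drugs never test positive), while each negative test only shifts the odds by a finite factor, which is why argument E feels stronger than argument F.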
Followup to: The Savage theorem and the Ellsberg paradox
In the previous post, I presented a simple version of Savage's theorem, and I introduced the Ellsberg paradox. At the end of the post, I mentioned a strong Bayesian thesis, which can be summarised: "There is always a price to pay for leaving the Bayesian Way."1 But not always, it turns out. I claimed that there was a method that is Ellsberg-paradoxical, therefore non-Bayesian, but can't be money-pumped (or "Dutch booked"). I will present the method in this post.
I'm afraid this is another long post. There's a short summary of the method at the very end, if you want to skip the jibba jabba and get right to the specification. Before trying to money-pump it, I'd suggest reading at least the two highlighted dialogues.
To recap the Ellsberg paradox: there's an urn with 30 red balls and 60 other balls that are either green or blue, in unknown proportions. Most people, when asked to choose between betting on red or on green, choose red, but, when asked to choose between betting on red-or-blue or on green-or-blue, choose green-or-blue. For some people this behaviour persists even after due calculation and reflection. This behaviour is non-Bayesian, and is the prototypical example of ambiguity aversion.
There were some major themes that came out in the comments on that post. One theme was that I Fail Technical Writing Forever. I'll try to redeem myself.
Another theme was that the setup given may be a bit too symmetrical. The Bayesian answer would be indifference, and really, you can break ties however you want. However the paradoxical preferences are typically strict, rather than just tie-breaking behaviour. (And when it's not strict, we shouldn't call it ambiguity aversion.) One suggestion was to add or remove a couple of red balls. Speaking for myself, I would still make the paradoxical choices.
A third theme was that ambiguity aversion might be a good heuristic if betting against someone who may know something you don't. Now, no such opponent was specified, and speaking for myself, I'm not inferring one when I make the paradoxical choices. Still, let me admit that it's not contrived to infer a mischievous experimenter from the Ellsberg setup. One commentator puts it better than me:
Betting generally includes an adversary who wants you to lose money so that they can win it. Possibly in psychology experiments [this might not apply] ... But generally, ignoring the possibility of someone wanting to win money off you when they offer you a bet is a bad idea.
Now, betting is supposed to be a metaphor for options with possibly unknown results. Sometimes you still need to account for the possibility that the options were made available by an adversary who wants you to choose badly, though less often. You should also account for the possibility that they came from someone who wanted you to choose well, or that the options were not determined by any intelligent being or process trying to predict your choices, in which case you don't need to assume an anticorrelation between your choice and the best choice. Except for your own biases.
We can take betting on the Ellsberg urn as a stand-in for various decisions under ambiguous circumstances. Ambiguity aversion can be Bayesian if we assume the right sort of correlation between the options offered and the state of the world, or the right sort of correlation between the choice made and the state of the world. In that case just about anything can be Bayesian. But sometimes the opponent will not have extra information, nor extra power. There might not even be any opponent as such. If we assume there are no such correlations, then ambiguity aversion is non-Bayesian.
The final theme was: so what? Ambiguity aversion is just another cognitive bias. One commentator specifically complained that I spent too much time talking about various abstractions and not enough time talking about how ambiguity aversion could be money-pumped. I will fix that now: I claim that ambiguity aversion cannot be money-pumped, and the rest of this post is about my claim.
I'll start with a bit of name-dropping and some whig history, to make myself sound more credible than I really am2. In the last twenty years or so many models of ambiguity averse reasoning have been constructed. Choquet expected utility3 and maxmin expected utility4 were early proposed models of ambiguity aversion. Later multiplier preferences5 were the result of applying the ideas of robust control to macroeconomic models. This results in ambiguity aversion, though it was not explicitly motivated by the Ellsberg paradox. More recently, variational preferences6 generalises both multiplier preferences and maxmin expected utility. What I'm going to present is a finitary case of variational preferences, with some of my own amateur mathematical fiddling for rhetorical purposes.
The starting idea is simple enough, and may have already occurred to some LW readers. Instead of using a prior probability for events, can we not use an interval of probabilities? What should our betting behaviour be for an event with probability 50%, plus or minus 10%?
There are some different ways of filling in the details. So to be quite clear, I'm not proposing the following as the One True Probability Theory, and I am not claiming that the following is descriptive of many people's behaviour. What follows is just one way of making ambiguity aversion work, and perhaps the simplest way. This makes sense, given my aim: I should just describe a simple method that leaves the Bayesian Way, but does not pay.
Now, sometimes disjoint ambiguous events together make an event with known probability. Or even a certainty, as in an event and its negation. If we want probability intervals to be additive (and let's say that we do) then what we really want are oriented intervals. I'll use +- or -+ (pronounced: plus-or-minus, minus-or-plus) to indicate two opposite orientations. So, if P(X) = 1/2 +- 1/10, then P(not X) = 1/2 -+ 1/10, and these add up to 1 exactly.
Such oriented intervals are equivalent to ordered pairs of numbers. Sometimes it's more helpful to think of them as oriented intervals, but sometimes it's more helpful to think of them as pairs. So 1/2 +- 1/10 is the pair (3/5,2/5). And 1/2 -+ 1/10 is (2/5,3/5), the same numbers in the opposite order. The sum of these is (1,1), which is 1 exactly.
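The pair representation and its componentwise addition can be written out directly. This is just a sketch of the arithmetic described above; the class name and method set are my own invention.

```python
from fractions import Fraction as F

class OInterval:
    """Oriented probability interval, stored as an ordered pair (a, b).

    1/2 +- 1/10 is the pair (3/5, 2/5); the opposite orientation
    1/2 -+ 1/10 is (2/5, 3/5).
    """
    def __init__(self, a, b):
        self.a, self.b = F(a), F(b)

    def __add__(self, other):
        # Oriented intervals add componentwise.
        return OInterval(self.a + other.a, self.b + other.b)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

x = OInterval(F(3, 5), F(2, 5))      # P(X)     = 1/2 +- 1/10
not_x = OInterval(F(2, 5), F(3, 5))  # P(not X) = 1/2 -+ 1/10

# An event and its negation add up to 1 exactly: (1, 1).
assert x + not_x == OInterval(1, 1)
```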
You may wonder, if we can use ordered pairs, can we use triples, or longer lists? Yes, this method can be made to work with those too. And we can still think in terms of centre, length, and orientation. The orientation can go off in all sorts of directions, instead of just two. But for my purposes, I'll just stick with two.
You might also ask, can we set P(X) = 1/2 +- 1/2? No, this method just won't handle it. A restriction of this method is that neither of the pair can be 0 or 1, except when they're both 0 or both 1. The way we will be using these intervals, 1/2 +- 1/2 would be the extreme case of ambiguity aversion. 1/2 +- 1/10 represents a lesser amount of ambiguity aversion, a sort of compromise between worst-case and average-case behaviour.
To decide among bets (having the same two outcomes), compute their probability intervals. Sometimes, the intervals will not overlap. Then it's unambiguous which is more likely, so it's clear what to pick. In general, whether they overlap or not, pick the one with the largest minimum -- though we will see there are three caveats when they do overlap. If P(X) = 1/2 +- 1/10, we would be indifferent between a bet on X and on not X: the minimum is 2/5 in either case. If P(Y) = 1/2 exactly, then we would strictly prefer a bet on Y to a bet on X.
Which leads to the first caveat: sometimes, given two options, it's strictly better to randomise. Let's suppose Y represents a fair coin. So P(Y) = 1/2 exactly, as we said. But also, Y is independent of X. P(X and Y) = 1/4 +- 1/20, and so on. This means that P((X and not Y) or (Y and not X)) = 1/2 exactly also. So we're indifferent between a bet on X and a bet on not X, but we strictly prefer the randomised bet.
In general, randomisation will be strictly better if you have two choices with overlapping intervals of opposite orientations. The best randomisation ratio will be the one that gives a bet with zero-length interval.
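As a sketch of the two rules so far (pick the largest minimum; randomise when opposite orientations overlap), here is the X-and-coin example worked through with plain pairs. The helper names are mine.

```python
from fractions import Fraction as F

x = (F(3, 5), F(2, 5))      # P(X)     = 1/2 +- 1/10
not_x = (F(2, 5), F(3, 5))  # P(not X) = 1/2 -+ 1/10

# With an independent fair coin Y,
# P((X and not Y) or (Y and not X)) = 1/2 P(X) + 1/2 P(not X),
# computed componentwise:
randomised = tuple((xi + ni) / 2 for xi, ni in zip(x, not_x))

# Decision rule: compare minima.
assert min(x) == min(not_x) == F(2, 5)   # indifferent between X and not X
assert randomised == (F(1, 2), F(1, 2))  # zero-length interval: 1/2 exactly
assert min(randomised) > min(x)          # strictly prefer the randomised bet
```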
Now let us reconsider the Ellsberg urn. We did say the urn can be a metaphor for various situations. Generally these situations will not be symmetrical. But, even in symmetrical scenarios, we should still re-think how we apply the principle of indifference. I argue that the underlying idea is really this: if our information has a symmetry, then our decisions should have that same symmetry. If we switch green and blue, our information about the Ellsberg urn doesn't change. The situation is indistinguishable, so we should behave the same way. It follows that we should be indifferent between a bet on green and a bet on blue. Then, for the Bayesian, it follows that P(red) = P(green) = P(blue) = 1/3. Period.
But for us, there is a degree of freedom, even in this symmetrical situation. We know what the probability of red is, so of course P(red) = 1/3 exactly. But we can set, say7, P(green) = 1/3 +- 1/9, and P(blue) = 1/3 -+ 1/9. So we get P(red or green) = 2/3 +- 1/9, P(red or blue) = 2/3 -+ 1/9, P(green or blue) = 2/3 exactly, and of course P(red or green or blue) = 1 exactly.
So: red is 1/3 exactly, but the minimum of green is 2/9. (green or blue) is 2/3 exactly, but the minimum of (red or blue) is 5/9. So choose red over green, and (green or blue) over (red or blue). That's the paradoxical behaviour. Note that neither pair of choices offered in the Ellsberg paradox has the type of overlap that favours randomisation.
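The urn computation above is easy to check mechanically. The following is my own verification sketch, not part of the original argument, reusing the pair representation.

```python
from fractions import Fraction as F

red = (F(1, 3), F(1, 3))                           # 1/3 exactly
green = (F(1, 3) + F(1, 9), F(1, 3) - F(1, 9))     # 1/3 +- 1/9 = (4/9, 2/9)
blue = (F(1, 3) - F(1, 9), F(1, 3) + F(1, 9))      # 1/3 -+ 1/9 = (2/9, 4/9)

def add(p, q):
    """Componentwise addition of disjoint events."""
    return (p[0] + q[0], p[1] + q[1])

green_or_blue = add(green, blue)  # 2/3 exactly
red_or_blue = add(red, blue)      # 2/3 -+ 1/9 = (5/9, 7/9)

# Largest-minimum rule reproduces the Ellsberg choices:
assert min(red) > min(green)                  # red over green
assert min(green_or_blue) > min(red_or_blue)  # green-or-blue over red-or-blue
```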
Once we have a decision procedure for the two-outcome case, then we can tack on any utility function, as I explained in the previous post. The result here is what you would expect: we get oriented expected utility intervals, obtained by multiplying the oriented probability intervals by the utility. When deciding, we pick the one whose interval has the largest minimum. So for example, a bet which pays 15U on red (using U for "utils", the abstract units of measurement of the utility function) has expected utility 5U exactly. A bet which pays 18U on green has expected utility 6U +- 2U, the minimum is 4U. So pick the bet on red over that.
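The expected-utility step is just componentwise multiplication by the payoff, followed by the same largest-minimum rule. A minimal sketch of the 15U/18U example (function name mine):

```python
from fractions import Fraction as F

def expected_utility(prob_pair, payoff):
    """Multiply an oriented probability interval by a payoff, componentwise."""
    return tuple(p * payoff for p in prob_pair)

red = (F(1, 3), F(1, 3))    # 1/3 exactly
green = (F(4, 9), F(2, 9))  # 1/3 +- 1/9

eu_red = expected_utility(red, 15)      # 5U exactly
eu_green = expected_utility(green, 18)  # 6U +- 2U, i.e. (8U, 4U)

assert eu_red == (F(5), F(5))
assert eu_green == (F(8), F(4))
assert min(eu_red) > min(eu_green)      # pick the bet on red
```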
Operationally, probability is associated with the "fair price" at which we are willing to bet. A probability interval indicates that there is no fair price. Instead we have a spread: we buy bets at their low price and sell at their high price. At least, we do that if we have no outstanding bets, or more generally, if the expected utility interval on our outstanding bets has zero-length. The second caveat is that if this interval has length, then it affects our price: we also sell bets of the same orientation at their low price, and buy bets of the opposite orientation at their high price, until the length of this interval is used up. The midpoint of the expected utility interval on our outstanding bets will be irrelevant.
This can be confusing, so it's time for an analogy.
If you are Bayesian and risk-neutral (and if bets pay in "utils" rather than cash, you are risk-neutral by definition) then outstanding bets have no effect on further betting behaviour. However, if you are risk-averse, as is the most common case, then this is no longer true. The more money you've already got on the line, the less willing you will be to bet.
But besides risk attitude, there could also be interference effects from non-monetary payouts. For example, if you are dealing in boots, then you wouldn't buy a single boot for half the price of a pair, and neither would you sell one of your boots for half the price of a pair. Unless you happened to already have unmatched boots, then you would sell those at a lower price, or buy boots of the opposite orientation at a higher price, until you had no more unmatched boots. If you were otherwise risk-neutral with respect to boots, then your behaviour would not depend on the number of pairs you have, just on the number and orientation of your unmatched boots.
This closely resembles the non-Bayesian behaviour above. In fact, for the Ellsberg urn, we could just say that a bet on red is worth a pair of boots, a bet on green is worth two left boots, and a bet on blue is worth two right boots. Without saying anything further, it's clear that we would strictly prefer red (a pair) over green (two lefts), but we would also strictly prefer green-or-blue (two pairs) over red-or-blue (one left and three rights). That's the paradoxical behaviour, but you know you can't money-pump boots.
A: I'll buy that pair of boots for 30 zorkmids.
So much for the static case. But what do we do with new information? How do we handle conditional probabilities?
We still get P(A|B) by dividing P(A and B) by P(B). It will be easier to think in terms of pairs here. So for example P(red) = 1/3 exactly = (1/3,1/3) and P(red or green) = 2/3 +- 1/9 = (7/9,5/9), so P(red|red or green) = (3/7,3/5) = 18/35 -+ 3/35. And similarly P(green|red or green) = (1/3 +- 1/9)/(2/3 +- 1/9) = 17/35 +- 3/35.
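The componentwise division above can be checked the same way. Another verification sketch of my own:

```python
from fractions import Fraction as F

def condition(joint, given):
    """P(A|B) as the componentwise quotient P(A and B) / P(B)."""
    return (joint[0] / given[0], joint[1] / given[1])

red = (F(1, 3), F(1, 3))            # 1/3 exactly
green = (F(4, 9), F(2, 9))          # 1/3 +- 1/9
red_or_green = (F(7, 9), F(5, 9))   # 2/3 +- 1/9

p_red_given = condition(red, red_or_green)
p_green_given = condition(green, red_or_green)

assert p_red_given == (F(3, 7), F(3, 5))    # 18/35 -+ 3/35
assert p_green_given == (F(4, 7), F(2, 5))  # 17/35 +- 3/35

# The conditional pairs still add up to 1 exactly:
assert p_red_given[0] + p_green_given[0] == 1
assert p_red_given[1] + p_green_given[1] == 1
```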
This rule covers the dynamic passive case, where we update probabilities based on what we observe, before betting. The third and final caveat is in the active case, when information comes in between bets. Now, we saw that the length and orientation of the interval on expected utility of outstanding bets affects further betting behaviour. There is actually a separate update rule for this quantity. It is about as simple as it gets: do nothing. The interval can change when we make choices, and its midpoint can shift due to external events, but its length and orientation do not update.
You might expect the update rule for this quantity to follow from the way the expected utility updates, which follows from the way probability updates. But it has a mind of its own. So even if we are keeping track of our bets, we'd still need to keep track of this extra variable separately.
Sometimes it may be easier to think in terms of the total expected utility interval of our outstanding bets, but sometimes it may be easier to think of this in terms of having a "virtual" interval that cancels the change in the length and orientation of the "real" expected utility interval. The midpoint of this virtual interval is irrelevant and can be taken to always be zero. So, on update, compute the prior expected utility interval of outstanding bets, subtract the posterior expected utility interval from it, and add this difference to the virtual interval. Reset its midpoint to zero, keeping only the length and orientation.
That can also be confusing, so let's have another analogy.
Yo' mama's so illogical...
I recently came across this example by Mark Machina:
M: Children, I only have one treat, and I can only give it to one of you.
Instead of giving the treat to either child, she strictly prefers to toss a coin and give the treat to the winner. But after the coin is tossed, she strictly prefers to give the treat to the winner rather than toss again.
This cannot be explained in terms of maximising expected utility, in the typical sense of "utility". And of course only known probabilities are involved here, so there's no question as to whether her beliefs are probabilistically sophisticated or not. But it could be said that she is still maximising the expected value of an extended objective function. This extended objective function does not just consider who gets a treat, but also considers who "had a fair chance". She is unfair if she gives the treat to either child outright, but fair if she tosses a coin. That fairness doesn't go away when the result of the coin toss is known.
Or something like that. There are surely other ways of dissecting the mother's behaviour. But no matter what, it's going to have to take the coin toss into account, even though the coin, in and of itself, has no relevance to the situation.
Let's go back to the urn. Green and blue have the type of overlap that favours randomisation: P((green and heads) or (blue and tails)) = 1/3 exactly. A bet paying 9U on this event has expected utility of 3U exactly. Let's say we took this bet. Now say the coin comes up heads. We can update the probabilities as per above. The answer is that P(green) = 1/3 +- 1/9 as it was before. That makes sense because it's an independent event: knowing the result of the coin toss gives no information about the urn. The difference is that we now have an outstanding bet that pays 9U if the ball is green. The expected utility would therefore be 3U +- 1U. Except, the expected utility interval was zero-length before the coin was tossed, so it remains zero-length. Equivalently, the virtual interval becomes -+ 1U, so that the effective total is 3U exactly. (In this example, the midpoint of the expected utility interval didn't change either. That's not generally the case.) A bet randomised on a new coin toss would have expected utility 3U, plus the virtual interval of -+ 1U, for an effective total of 3U -+ 1U. So we would strictly prefer to keep the bet on green rather than re-randomise.
Let's compare this with a trivial example: let's say we took a bet that pays 9U if the ball drawn from the urn is green. The expected utility of this bet is 3U +- 1U. For some unrelated reason, a coin is tossed, and it comes up heads. The coin has also nothing to do with the urn or my bet. I still have a bet of 9U on green, and its expected utility is still 3U +- 1U.
But the difference between these two examples is just in the counterfactual: if the coin had come up tails, in the first example I would have had a bet of 9U on blue, and in the second example I would have had a bet of 9U on green. But the coin came up heads, and in both examples I end up with a bet of 9U on green. The virtual interval has some spooky dependency on what could have happened, just like "had a fair chance". It is the ghost of a departed bet.
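The virtual-interval bookkeeping in the coin-and-urn example can be sketched in a few lines. This is my own rendering of the update rule described above (helper names mine), under the stated assumption that the prior expected utility interval was zero-length.

```python
from fractions import Fraction as F

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def centred(pair):
    """Drop the midpoint, keeping only length and orientation."""
    mid = (pair[0] + pair[1]) / 2
    return (pair[0] - mid, pair[1] - mid)

# Before the toss: the bet pays 9U on (green and heads) or (blue and tails).
eu_before = (F(3), F(3))   # 3U exactly, zero-length interval

# After heads: the outstanding bet is 9U on green, EU = 3U +- 1U.
eu_after = (F(4), F(2))

# Update rule: virtual interval = prior minus posterior, midpoint reset to 0.
virtual = centred((eu_before[0] - eu_after[0], eu_before[1] - eu_after[1]))
assert virtual == (F(-1), F(1))   # -+ 1U: the ghost of the departed bet

# Effective total on the kept bet: back to 3U exactly.
assert add(eu_after, virtual) == (F(3), F(3))

# Re-randomising on a fresh coin gives 3U exactly, but the virtual
# interval still applies: effective 3U -+ 1U, minimum only 2U.
rerandomised = add((F(3), F(3)), virtual)
assert min(rerandomised) == F(2)   # so keep the bet on green
```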
I expect many on LW are wondering what happened. There was supposed to be a proof that anything that isn't Bayesian can be punished. Actually, this threat comes with some hidden assumptions, which I hope these analogies have helped to illustrate. A boot is an example of something which has no fair price, even if a pair of boots has one. A mother with two children and one treat is an example where some counterfactuals are not forgotten. The hidden assumptions fail in our case, just as they can fail in these other contexts where Bayesianism is not at issue. This can be stated more rigorously8, but that is basically how it's possible. Now We Know. And Knowing is Half the Battle.
Appendix A: method summary
Appendix B: obligatory image for LW posts on this topic
In 1961, Daniel Ellsberg, most famous for leaking the Pentagon Papers, published the decision-theoretic paradox which is now named after him 1. It is a cousin to the Allais paradox. They both involve violations of an independence or separability principle. But they go off in different directions: one is a violation of expected utility, while the other is a violation of subjective probability. The Allais paradox has been discussed on LW before, but when I do a search it seems that the first discussion of the Ellsberg paradox on LW was in my comments on the previous post 2. It seems to me that from a Bayesian point of view, the Ellsberg paradox is the greater evil.
But I should first explain what I mean by a violation of expected utility versus subjective probability, and for that matter, what I mean by Bayesian. I will explain a special case of Savage's representation theorem, which focuses on the subjective probability side only. Then I will describe Ellsberg's paradox. In the next episode, I will give an example of how not to be Bayesian. If I don't get voted off the island at the end of this episode.
Rationality and Bayesianism
Bayesianism is often taken to involve the maximisation of expected utility with respect to a subjective probability distribution. I would argue this label only sticks to the subjective probability side. But mainly, I wish to make a clear division between the two sides, so I can focus on one.
Subjective probability and expected utility are certainly related, but they're still independent. You could be perfectly willing and able to assign belief numbers to all possible events as if they were probabilities. That is, your belief assignment obeys all the laws of probability, including Bayes' rule, which is, after all, what the -ism is named for. You could do all that, but still maximise something other than expected utility. In particular, you could combine subjective probabilities with prospect theory, which has also been discussed on LW before. In that case you may display Allais-paradoxical behaviour but, as we will see, not Ellsberg-paradoxical behaviour. The rationalists might excommunicate you, but it seems to me you should keep your Bayesianist card.
On the other hand your behaviour could be incompatible with any subjective probability distribution. But you could still maximise utility with respect to something other than subjective probability. In particular, when faced with known probabilities, you would be maximising expected utility in the normal sense. So you can not exhibit any Allais-paradoxical behaviour, because the Allais paradox involves only objective lotteries. But you may exhibit, as we will see, Ellsberg-paradoxical behaviour. I would say you are not Bayesian.
So a non-Bayesian, even the strictest frequentist, can still be an expected utility maximiser, and a perfect Bayesian need not be an expected utility maximiser. What I'm calling Bayesianist is just the idea that we should reason with our subjective beliefs the same way that we reason with objective probabilities. This also has been called having "probabilistically sophisticated" beliefs, if you prefer to avoid the B-word, or don't like the way I'm using it.
In a lot of what follows, I will bypass utility by only considering two outcomes. Utility functions are only unique up to a constant offset and a positive scale factor. With two outcomes, they evaporate entirely. The question of maximising expected utility with respect to a subjective probability distribution reduces to the question of maximising the probability, according to that distribution, of getting the better of the two outcomes. (And if the two outcomes are equal, there is nothing to maximise.)
And on the flip side, if we have a decision method for the two-outcome case, Bayesian or otherwise, then we can always tack on a utility function. The idea of utility is just that any intermediate outcome is equivalent to an objective lottery between better and worse outcomes. So if we want, we can use a utility function to reduce a decision problem with any (finite) number of outcomes to a decision problem over the best and worst outcomes in question.
Savage's representation theorem
Let me recap some of the previous post on Savage's theorem. How might we defend Bayesianism? We could invoke Cox's theorem. This starts by assuming possible events can be assigned real numbers corresponding to some sort of belief level on someone's part, and that there are certain functions over these numbers corresponding to logical operations. It can be proven that, if someone's belief functions obey some simple rules, then that person acts as if they were reasoning with subjective probability. Now, while the rules for belief functions are intuitive, the background assumptions are pretty sketchy. It is not at all clear why these mathematical constructs are requirements of rationality.
One way to justify those constructs is to argue in terms of choices a rational person must make. We imagine someone is presented with choices among various bets on uncertain events. Their level of belief in these events can be gauged by which bets they choose. But if we're going to do that anyway, then, as it turns out, we can just give some simple rules directly about these choices, and bypass the belief functions entirely. This was Leonard Savage's approach 3. To quote a comment on the previous post: "This is important because agents in general don't have to use beliefs or goals, but they do all have to choose actions."
Savage's approach actually covers both subjective probability and expected utility. The previous post discusses both, whereas I am focusing on the former. This lets me give a shorter exposition, and I think a clearer one.
We start by assuming some abstract collection of possible bets. We suppose that when you are offered two bets from this collection, you will choose one over the other, or express indifference.
As discussed, we will only consider two outcomes. So all bets have the same payout, the difference among them is just their winning conditions. It is not specified what it is that you win. But it is assumed that, given the choice between winning unconditionally and losing unconditionally, you would choose to win.
It is assumed that the collection of bets form what is called a boolean algebra. This just means we can consider combinations of bets under boolean operators like "and", "or", or "not". Here I will use brackets to indicate these combinations. (A or B) is a bet that wins under the conditions that make either A win, or B win, or both win. (A but not B) wins whenever A wins but B doesn't. And so on.
If you are rational, your choices must, it is claimed, obey some simple rules. If so, it can be proven that you are choosing as if you had assigned subjective probabilities to bets. Savage's axioms for choosing among bets are 4:
- If you choose A over B, you shall not choose B over A; and, if you do not choose A over B, and do not choose B over C, you shall not choose A over C.
- If you choose A over B, you shall also choose (A but not B) over (B but not A); and conversely, if you choose (A but not B) over (B but not A), you shall also choose A over B.
- You shall not choose A over (A or B).
- If you choose A over B, then you shall be able to specify a finite sequence of bets C1, C2, ..., Cn, such that it is guaranteed that one and only one of the C's will win, and such that, for any one of the C's, you shall still choose (A but not C) over (B or C).
Rule 1 is a coherence requirement on rational choice. It requires your preferences to be a total pre-order. One objection to Cox's theorem is that levels of belief could be incomparable. This objection does not apply to rule 1 in this context because, as we discussed above, we're talking about choices of bets, not beliefs. Faced with choices, we choose. A rational person's choices must be non-circular.
Rule 2 is an independence requirement. It demands that when you compare two bets, you ignore the possibility that they could both win. In those circumstances you would be indifferent between the two anyway. The only possibilities that are relevant to the comparison are the ones where one bet wins and the other doesn't. So, you ought to compare A to B the same way you compare (A but not B) to (B but not A). Savage called this rule the Sure-thing principle.
Rule 3 is a dominance requirement on rational choice. It demands that you not choose something that cannot do better under any circumstance: whenever A would win, so would (A or B). Note that you might judge (B but not A) to be impossible a priori. So, you might legitimately express indifference between A and (A or B). We can only say it is never legitimate to choose A over (A or B).
Rule 4 is the most complicated. Luckily it's not going to be relevant to the Ellsberg paradox. Call it Mostly Harmless and forget this bit if you want.
What rule 4 says is that if you choose A over B, you must be willing to pay a premium for your choice. Now, we said there are only two outcomes in this context. Here, the premium is paid in terms of other bets. Rule 4 demands that you give a finite list of mutually exclusive and exhaustive events, and still be willing to choose A over B if we take any event on your list, cut it from A, and paste it to B. You can list as many events as you need to, but it must be a finite list.
For example, if you thought A was much more likely than B, you might pull out a die, and list the 6 possible outcomes of one roll. You would also be willing to choose (A but not a roll of 1) over (B or a roll of 1), (A but not a roll of 2) over (B or a roll of 2), and so on. If not, you might list the 36 possible outcomes of two consecutive rolls, and be willing to choose (A but not two rolls of 1) over (B or two rolls of 1), and so on. You could go to any finite number of rolls.
In fact, rule 4 is pretty liberal: it doesn't even demand that every event on your list be equiprobable, or even independent of the A and B in question. It just demands that the events be mutually exclusive and exhaustive. If you are not willing to specify some such list of events, then you ought to express indifference between A and B.
If you obey rules 1-3, then that is sufficient for us to construct a sort of qualitative subjective probability out of your choices. It might not be quantitative: for one thing, there could be infinitesimally likely beliefs. For another, there might be more than one way to assign numbers to beliefs. Rule 4 takes care of these things. If you obey rule 4 also, then we can assign a subjective probability to every possible bet, prove that you choose among bets as if you were using those probabilities, and also prove that it is the only probability assignment that matches your choices. And, on the flip side, if you are choosing among bets based on a subjective probability assignment, then it is easy to prove you obey rules 1-3, as well as rule 4 if the collection of bets is suitably infinite, for instance if a fair die is available to bet on.
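That flip side is easy to check mechanically. Here is a small sketch (the outcome space and its probabilities are invented for illustration, not Savage's construction) of a chooser that compares bets by subjective probability, brute-force verified against rules 1-3:

```python
from itertools import combinations

# Toy outcome space with assumed, illustrative probabilities (powers of two,
# so all sums are exact in floating point).
atoms = {"x": 0.5, "y": 0.25, "z": 0.25}

def p(bet):
    """Probability that a bet (a set of atomic outcomes) wins."""
    return sum(atoms[a] for a in bet)

def prefers(a, b):
    """Choose A over B exactly when P(A) > P(B)."""
    return p(a) > p(b)

# All bets: every subset of the atoms, as a boolean algebra under -, |.
bets = [frozenset(s) for r in range(4) for s in combinations(atoms, r)]

for a in bets:
    for b in bets:
        # Rule 1: asymmetry and transitivity of the choices.
        if prefers(a, b):
            assert not prefers(b, a)
        for c in bets:
            if not prefers(a, b) and not prefers(b, c):
                assert not prefers(a, c)
        # Rule 2: the Sure-thing principle.
        assert prefers(a, b) == prefers(a - b, b - a)
        # Rule 3: never choose A over (A or B).
        assert not prefers(a, a | b)
```

With only three atoms the space is too coarse for rule 4 in general; the "suitably infinite" collection (e.g. a fair die to bet on) is what supplies it.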
Savage's theorem is impressive. The background assumptions involve just the concept of choice, and no numbers at all. There are only a few simple rules. Even rule 4 isn't really all that hard to understand and accept. A subjective probability distribution appears seemingly out of nowhere. In the full version, a utility function appears out of nowhere too. This theorem has been called the crowning glory of decision theory.
The Ellsberg paradox
Let's imagine there is an urn containing 90 balls. 30 of them are red, and the other 60 are either green or blue, in unknown proportion. We will draw a ball from the urn at random. Let us bet on the colour of this ball. As above, all bets have the same payout. To be specific, let's say you get pie if you win, and a boot to the head if you lose. The first question is: do you prefer to bet that the colour will be red, or that it will be green? The second question is: do you prefer to bet that it will be (red or blue), or that it will be (green or blue)?
The most common response5 is to choose red over green, and (green or blue) over (red or blue). And that's all there is to it. Paradox! 6
|   | red | green | blue |   |
|---|-----|-------|------|---|
| A | pie | BOOT | BOOT | A is preferred to B |
| B | BOOT | pie | BOOT |   |
| C | pie | BOOT | pie | D is preferred to C |
| D | BOOT | pie | pie |   |
If choices were based solely on an assignment of subjective probability, then because the three colours are mutually exclusive, P(red or blue) = P(red) + P(blue), and P(green or blue) = P(green) + P(blue). So, since P(red) > P(green), we should have P(red or blue) > P(green or blue); but instead we have P(red or blue) < P(green or blue).
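The contradiction can also be confirmed by brute force. This sketch (not from the original post) scans a grid of candidate probability assignments to the three colours and finds none that reproduces both choices:

```python
# Search a grid of assignments (p_red, p_green, p_blue) summing to 1,
# looking for one consistent with the common Ellsberg choices.
found = False
steps = 200
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        p_red, p_green = i / steps, j / steps
        p_blue = 1 - p_red - p_green
        # Choosing red over green, and (green or blue) over (red or blue):
        if p_red > p_green and p_green + p_blue > p_red + p_blue:
            found = True
print(found)
```

Unsurprisingly, it prints `False`: after cancelling p_blue, the two strict preferences demand both P(red) > P(green) and P(green) > P(red).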
Knowing Savage's representation theorem, we expect to get a formal contradiction from the 4 rules above plus the 2 expressed choices. Something has to give, so we'd like to know which rules are really involved. You can see that we are talking only about rule 2, the Sure-thing principle. It says we shall compare (red or blue) to (green or blue) the same way as we compare red to green.
This behaviour has been called ambiguity aversion. Now, perhaps this is just a cognitive bias. It wouldn't be the first time that people behave a certain way, but the analysis of their decisions shows a clear error. And indeed, when explained, some people do repent of their sins against Bayes. They change their choices to obey rule 2. But others don't. To quote Ellsberg:
...after rethinking all their 'offending' decisions in light of [Savage's] axioms, a number of people who are not only sophisticated but reasonable decide that they wish to persist in their choices. This includes people who previously felt a 'first order commitment' to the axioms, many of them surprised and some dismayed to find that they wished, in these situations, to violate the Sure-thing Principle. Since this group included L.J. Savage, when last tested by me (I have been reluctant to try him again), it seems to deserve respectful consideration.
I include myself in the group that thinks rule 2 is what should be dropped. But I don't have any dramatic (de-)conversion story to tell. I was somewhat surprised, but not at all dismayed, and I can't say I felt much if any prior commitment to the rules. And as to whether I'm sophisticated or reasonable, well never mind! Even if there are a number of other people who are all of the above, and even if Savage himself may have been one of them for a while, I do realise that smart people can be Just Plain Wrong. So I'd better have something more to say for myself.
Well, red obviously has a probability of 1/3. Our best guess is to apply the principle of indifference and also assign probability 1/3 each to green and to blue. But our best guess is not necessarily a good guess. The probabilities we assign to red, and to (green or blue), are objective. We're guessing the probability of green, and of (red or blue). It seems wise to take this difference into account when choosing what to bet on, doesn't it? And surely it will be all the more wise when dealing with real-life, non-symmetrical situations where we can't even appeal to the principle of indifference.
Or maybe I'm just some fool talking jibba jabba. Against this sort of talk, the LW post on the Allais paradox presents a version of Howard Raiffa's dynamic inconsistency argument. This makes no reference to internal thought processes; it is a purely external argument about the decisions themselves. As stated in that post, "There is always a price to pay for leaving the Bayesian Way." 7 This is expanded upon in an earlier post:
Sometimes you must seek an approximation; often, indeed. This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.
Bayesianism's coherence and uniqueness proofs cut both ways ... anything that is not Bayesian must fail one of the coherency tests. This, in turn, opens you to punishments like Dutch-booking (accepting combinations of bets that are sure losses, or rejecting combinations of bets that are sure gains).
Now even if you believe this about the Allais paradox, I've argued that this doesn't really have much to do with Bayesianism one way or the other. The Ellsberg paradox is what actually strays from the Path. So, does God also punish ambiguity aversion?
Tune in next time8, when I present a two-outcome decision method that obeys rules 1, 3, and 4, and even a weaker form of rule 2. But it exhibits ambiguity aversion, in gross violation of the original rule 2, so that it's not even approximately Bayesian. I will try to present it in a way that advocates for its internal cognitive merit. But the main thing 9 is that, externally, it is dynamically consistent. We do not get booked, by the Dutch or any other nationality.
- Ellsberg's original paper is: Risk, ambiguity, and the Savage axioms, Quarterly Journal of Economics 75 (1961) pp 643-669
- Some discussion followed, in which I did rather poorly. Actually I had to admit defeat. Twice. But, as they say: fool me once, shame on me; fool me twice, won't get fooled again!
- Savage presents his theorem in his book: The Foundations of Statistics, Wiley, New York, 1954.
- To compare to Savage's setup: for the two outcome case, we deal directly with "actions" or equivalently "events", here called "bets". We can dispense with "states"; in particular we don't have to demand that the collection of bets be countably complete, or even a power-set algebra of states, just that it be some boolean algebra. Savage's axioms of course have a descriptive interpretation, but it is their normativity that is at issue here, so I state them as "you shall". Rules 1-3 are his P1-P3, and 4 is P6. P4 and P7 are irrelevant in the two-outcome case. P5 is included in the background assumption that you would choose to win. I do not call this normative, because the payoff wasn't specified.
- Ellsberg originally proposed this just as a thought experiment, and canvassed various victims for their thoughts under what he called "absolutely non-experimental conditions". He used $100 and $0 instead of pie and a boot to the head. Which is dull of course, but it shouldn't make a difference10. The experiment has since been repeated under more experimental conditions. The experimenters also invariably opt for the more boring cash payouts.
- Some people will say this isn't "really" a paradox. Meh.
- Actually, I inserted "to pay". It wasn't in the original post. But it should have been.
- Sneak preview
- As a great decision theorist once said, "Stupid is as stupid does."
- ...or should it? Savage's rule P4 demands that it shall not. And the method I have in mind obeys this rule. But it turns out this is another rule that God won't enforce. And that's yet another post, if I get to it at all.
In J. Michael Straczynski's science fiction TV show Babylon 5, there's a character named Lennier. He's pretty Spock-like: he's a long-lived alien who avoids displaying emotion and feels superior to humans in intellect and wisdom. He's sworn to always speak the truth. In one episode, he and another character, the corrupt and rakish Ambassador Mollari, are chatting. Mollari is bored. But then Lennier mentions that he's spent decades studying probability. Mollari perks up, and offers to introduce him to this game the humans call poker.
Continuing my interest in tracking real-world predictions, I notice that the recent acquittal of Knox & Sollecito offers an interesting opportunity - specifically, many LessWrongers gave probabilities for guilt back in 2009 in komponisto’s 2 articles:
- “You Be the Jury: Survey on a Current Event”
- “The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom”
Both were interesting exercises, and it’s time to do a followup. Specifically, there are at least 3 new pieces of evidence to consider:
- the failure of any damning or especially relevant evidence to surface in the ~2 years since (see also: the hope function)
- the independent experts’ report on the DNA evidence
- the freeing of Knox & Sollecito, and continued imprisonment of Rudy Guede (with reduced sentence)
Point 2 particularly struck me (the press attributes much of the acquittal to the expert report, an acquittal I had not expected to succeed), but other people may find the other 2 points or unmentioned news more weighty.
Before I read Probability is in the Mind and Probability is Subjectively Objective I was a realist about probabilities; I was a frequentist. After I read them, I was just confused. I couldn't understand how a mind could accurately say the probability of drawing a heart from a standard deck of playing cards was not 25%. It wasn't until I tried to explain the contrast between my view and the subjective view in a comment on Probability is Subjectively Objective that I realized I was a subjective Bayesian all along. So, if you've read Probability is in the Mind and Probability is Subjectively Objective but still feel a little confused, hopefully this will help.
I should mention that I'm not sure that EY would agree with my view of probability, but the view to be presented agrees with EY's view on at least these propositions:
- Probability is always in a mind, not in the world.
- The probability that an agent should ascribe to a proposition is directly related to that agent's knowledge of the world.
- There is only one correct probability to assign to a proposition given your partial knowledge of the world.
- If there is no uncertainty, there is no probability.
And any position that holds these propositions is a non-realist-subjective view of probability.
Imagine a pre-shuffled deck of playing cards and two agents (they don't have to be humans), named "Johnny" and "Sally", who are betting 1 dollar each on the suit of the top card. As everyone knows, 1/4 of the cards in a playing card deck are hearts. We will name this belief F1; F1 stands for "1/4 of the cards in the deck are hearts." Johnny and Sally both believe F1. F1 is all that Johnny knows about the deck of cards, but Sally knows a little bit more about this deck. Sally also knows that 8 of the top 10 cards are hearts. Let F2 stand for "8 out of the 10 top cards are hearts." Sally believes F2. Johnny doesn't know whether or not F2 is true. F1 and F2 are beliefs about the deck of cards, and they are either true or false.
So, Sally bets that the top card is a heart and Johnny bets against her, i.e., she puts her money on "The top card is a heart." being true; he puts his money on "~The top card is a heart." being true. After they make their bets, one could imagine Johnny making fun of Sally; he might say something like: "Are you nuts? You know, I have a 75% chance of winning. 1/4 of the cards are hearts; you can't argue with that!" Sally might reply: "Don't forget that the probability you assign to '~The top card is a heart.' depends on what you know about the deck. I think you would agree with me that there is an 80% chance that 'The top card is a heart' if you knew just a bit more about the state of the deck."
To be undecided about a proposition is to not know which possible world you are in: am I in the possible world where that proposition is true, or in the one where it is false? Both Johnny and Sally are undecided about "The top card is a heart."; their models of the world split at that point of representation. Their knowledge is consistent with being in a possible world where the top card is a heart, or in a possible world where the top card is not a heart. The more statements they decide on, the smaller the configuration space of possible worlds they think they might find themselves in; deciding on a proposition takes a chunk off of that configuration space, and the content of that proposition determines the shape of the eliminated chunk. Sally's and Johnny's beliefs constrain their respective expected experiences, but not all the way to a point. The trick when constraining one's space of viable worlds is to make sure that the real world is among the possible worlds that satisfy your beliefs. Sally still has the upper hand, because her space of viable worlds is smaller than Johnny's. There are many more ways to arrange a standard deck of playing cards that satisfy F1 than there are ways that satisfy both F1 and F2. To be clear, we don't need to believe that possible worlds actually exist to accept this view of belief; we just need to believe that any agent capable of being undecided about a proposition is also capable of imagining alternative ways the world could consistently turn out to be, i.e., capable of imagining possible worlds.
For convenience, we will say that a possible world W, is viable for an agent A, if and only if, W satisfies A's background knowledge of decided propositions, i.e., A thinks that W might be the world it finds itself in.
Of the possible worlds that satisfy F1, i.e., of the possible worlds where "1/4 of the cards are hearts" is true, 3/4 of them also satisfy "~The top card is a heart." Since Johnny holds that F1, and since he has no further information that might put stronger restrictions on his space of viable worlds, he ascribes a 75% probability to "~The top card is a heart." Sally, however, holds that F2 as well as F1. She knows that of the possible worlds that satisfy F1, only 1/4 of them satisfy "The top card is a heart." But she holds a proposition that constrains her space of viable worlds even further, namely F2. Most of the possible worlds that satisfy F1 are eliminated as viable worlds if we hold F2 as well, because most of the possible worlds that satisfy F1 don't satisfy F2. Of the possible worlds that satisfy F2, exactly 80% of them satisfy "The top card is a heart." So, duh, Sally assigns an 80% probability to "The top card is a heart." They give that proposition different probabilities, and they are both right in assigning their respective probabilities; they don't disagree about how to assign probabilities, they just have different resources for doing so in this case. P(~The top card is a heart|F1) really is 75% and P(The top card is a heart|F2) really is 80%.
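Sally's 80% can be checked by counting worlds directly. Grouping the F2-worlds by which of the top ten positions hold the eight hearts gives C(10, 8) equally likely cases, and a heart sits on top in exactly the cases that use the first slot (a quick sketch, not from the original post):

```python
from math import comb

worlds_f2 = comb(10, 8)         # 45 placements of 8 hearts among the top 10
worlds_f2_and_top = comb(9, 7)  # 36 of them put a heart in the top slot
print(worlds_f2_and_top / worlds_f2)  # 0.8, Sally's probability

# Johnny's number works the same way: 39 of the 52 equally likely
# candidates for the top card are non-hearts.
print(39 / 52)  # 0.75, Johnny's probability for "~The top card is a heart."
```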
This setup makes it clear (to me at least) that the right probability to assign to a proposition depends on what you know. The more you know, i.e., the more you constrain the space of worlds you think you might be in, the more useful the probability you assign. The probability that an agent should ascribe to a proposition is directly related to that agent's knowledge of the world.
This setup also makes it easy to see how an agent can be wrong about the probability it assigns to a proposition given its background knowledge. Imagine a third agent, named "Billy", who has the same information as Sally, but says that there's a 99% chance of "The top card is a heart." Billy doesn't have any information that further constrains the possible worlds he thinks he might find himself in; he's just wrong about the fraction of possible worlds satisfying F2 that also satisfy "The top card is a heart." Of all the possible worlds that satisfy F2, exactly 80% of them satisfy "The top card is a heart.", no more, no less. There is only one correct probability to assign to a proposition given your partial knowledge.
The last benefit of this way of talking I'll mention is that it makes probability's dependence on ignorance clear. We can imagine another agent that knows the truth value of every proposition; let's call him "FSM". There is only one possible world that satisfies all of FSM's background knowledge; the only viable world for FSM is the real world. Of the possible worlds that satisfy FSM's background knowledge, either all of them satisfy "The top card is a heart." or none of them do, since there is only one viable world for FSM. So the only probabilities FSM can assign to "The top card is a heart." are 1 or 0. In fact, those are the only probabilities FSM can assign to any proposition. If there is no uncertainty, there is no probability.
The world knows whether or not any given proposition is true (assuming determinism). The world itself is never uncertain, only the parts of the world that we call agents can be uncertain. Hence, Probability is always in a mind, not in the world. The probabilities that the universe assigns to a proposition are always 1 or 0, for the same reasons FSM only assigns a 1 or 0, and 1 and 0 aren't really probabilities.
In conclusion, I'll risk the hypothesis that: where 0≤x≤1, "P(a|b)=x" is true if and only if, of the possible worlds that satisfy "b", a fraction x of them also satisfy "a". Probabilities are propositional attitudes, and the probability value (or range of values) you assign to a proposition represents the fraction of the worlds you find viable that satisfy that proposition. You may be wrong about the value of that fraction, and as a result you may be wrong about the probability you assign.
We may call the position summarized by the hypothesis above "Modal Satisfaction Frequency theory", or "MSF theory".
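As a sanity check on MSF theory, here is a toy finite model (three coin flips, invented for illustration) where the satisfaction fraction can be enumerated exactly:

```python
from itertools import product

# Worlds: the 8 equally viable outcomes of three coin flips.
worlds = list(product("HT", repeat=3))

def msf(a, b):
    """P(a|b): the fraction of worlds satisfying b that also satisfy a."""
    b_worlds = [w for w in worlds if b(w)]
    return sum(1 for w in b_worlds if a(w)) / len(b_worlds)

first_is_heads = lambda w: w[0] == "H"
at_least_two_heads = lambda w: w.count("H") >= 2

print(msf(first_is_heads, at_least_two_heads))  # 0.75: 3 of the 4 worlds
```

Of the four worlds with at least two heads (HHH, HHT, HTH, THH), three start with heads, matching the printed 0.75.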
Edit: I think the P2c I wrote originally may have been a bit too weak; fixed that. Nevermind, rechecking, that wasn't needed.
More edits (now consolidated): Edited nontriviality note. Edited totality note. Added in the definition of numerical probability in terms of qualitative probability (though not the proof that it works). Also slight clarifications on implications of P6' and P6''' on partitions into equivalent and almost-equivalent parts, respectively.
One very late edit, June 2: Even though we don't get countable additivity, we still want a σ-algebra rather than just an algebra (this is needed for some of the proofs in the "partition conditions" section that I don't go into here). Also noted nonemptiness of gambles.
The idea that rational agents act in a manner isomorphic to expected-utility maximizers is often used here, typically justified with the Von Neumann-Morgenstern theorem. (The last of Von Neumann and Morgenstern's axioms, the independence axiom, can be grounded in a Dutch book argument.) But the Von Neumann-Morgenstern theorem assumes that the agent already measures his beliefs with (finitely additive) probabilities. This in turn is often justified with Cox's theorem (valid so long as we assume a "large world", which is implied by e.g. the existence of a fair coin). But Cox's theorem assumes as an axiom that the plausibility of a statement is taken to be a real number, a very large assumption! I have also seen this justified here with Dutch book arguments, but these all seem to assume that we are already using some notion of expected utility maximization (which is not only somewhat circular, but also a considerably stronger assumption than that plausibilities are measured with real numbers).
There is a way of grounding both (finitely additive) probability and utility simultaneously, however, as detailed by Leonard Savage in his Foundations of Statistics (1954). In this article I will state the axioms and definitions he gives, give a summary of their logical structure, and suggest a slight modification (which is equivalent mathematically but slightly more philosophically satisfying). I would also like to ask the question: To what extent can these axioms be grounded in Dutch book arguments or other more basic principles? I warn the reader that I have not worked through all the proofs myself and I suggest simply finding a copy of the book if you want more detail.
Peter Fishburn later showed in Utility Theory for Decision Making (1970) that the axioms set forth here actually imply that utility is bounded.
(Note: The versions of the axioms and definitions in the end papers are formulated slightly differently from the ones in the text of the book, and in the 1954 version have an error. I'll be using the ones from the text, though in some cases I'll reformulate them slightly.)
Part 1 was a tutorial for programming a simulation for the emergence and development of intelligent species in a universe 'similar to ours.' In part 2, we will use the model developed in part 1 to evaluate different explanations of the Fermi paradox. However, keep in mind that the purpose of this two-part series is for showcasing useful methods, not for obtaining serious answers.
We summarize the model given in part 1:
SIMPLE MODEL FOR THE UNIVERSE
- The universe is represented by the set of all points in Cartesian 4-space which are of Euclidean distance 1 from the origin (that is, the 3-sphere). The distance between two points is taken to be the Euclidean distance (an approximation to the spherical distance which is accurate at small scales)
- The lifespan of the universe consists of 1000 time steps.
- A photon travels s=0.0004 units in a time step.
- At the end of each time step, there is a chance that a Type 0 civilization will spontaneously emerge in an uninhabited region of space. The base rate for civilization birth is controlled by the parameter a. But this base rate is multiplied by the proportion of the universe which remains uncolonized by Type III civilizations.
- In each time step, a Type 0 civilization has a probability b of self-destructing, a probability c of transitioning to a non-expansionist Type IIa civilization, and a probability d of transitioning to a Type IIb civilization.
- Observers can detect all Type II and Type III civilizations within their past light cones.
- In each time step, a Type IIb civilization has a probability e of transitioning to an expansionist Type III civilization.
- In each time step, all Type III civilizations colonize space in all directions, expanding their sphere of colonization by k * s units per time step.
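A single time step of these dynamics might be sketched as follows. This is a simplified illustration only: the 3-sphere geometry, light cones, and collision handling are omitted, and all names and parameter values are placeholders rather than the part 1 code.

```python
import random

# Placeholder parameter values (the model treats these as free parameters).
a, b, c, d, e = 0.01, 0.3, 0.1, 0.1, 0.2
k, s = 1.0, 0.0004

def step(civs, colonized_fraction):
    """Advance every civilization's state by one time step."""
    for civ in civs:
        r = random.random()
        if civ["type"] == "0":
            if r < b:
                civ["type"] = "dead"            # self-destruction
            elif r < b + c:
                civ["type"] = "IIa"             # non-expansionist Type II
            elif r < b + c + d:
                civ["type"] = "IIb"
        elif civ["type"] == "IIb" and r < e:
            civ["type"] = "III"                 # turns expansionist
        elif civ["type"] == "III":
            civ["radius"] += k * s              # colonization front grows
    # Spontaneous emergence, damped by the colonized proportion of space.
    if random.random() < a * (1 - colonized_fraction):
        civs.append({"type": "0", "radius": 0.0})
    return civs

civs = step([{"type": "0", "radius": 0.0}], colonized_fraction=0.0)
```

Running `step` 1000 times, with `colonized_fraction` recomputed from the Type III radii each step, would yield one simulated history of the universe.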
Section III. Inferential Methodology
In this section, no apologies are made for assuming that the reader has a solid grasp of the principles of Bayesian reasoning. Those currently following the tutorial from Part 1 may find it a good idea to skip to Section IV first.
To dodge the philosophical controversies surrounding anthropic reasoning, we will employ an impartial observer model. Like Jaynes, we introduce a robot which is capable of Bayesian reasoning, but here we imagine a model in which such a robot is instantaneously created and randomly injected into the universe at a random point in space, and at a random time point chosen uniformly from 1 to 1000 (and the robot is aware that it is created via this mechanism). We limit ourselves to asking what kind of inferences this robot would make in a given situation. Interestingly, the inferences made by this robot will turn out to be quite similar to the inferences that would be made under the self-indication assumption.
Are we alone in the universe? How likely is our species to survive the transition from a Type 0 to a Type II civilization? The answers to these questions would be of immense interest to our race; however, we have few tools to reason about these questions. This does not stop us from wanting to find answers to these questions, often by employing controversial principles of inference such as 'anthropic reasoning.' The reader can find a wealth of stimulating discussion about anthropic reasoning at Katja Grace's blog, the site from which this post takes its inspiration. The purpose of this post is to give a quantitatively oriented approach to anthropic reasoning, demonstrating how computer simulations and Bayesian inference can be used as tools for exploration.
The central mystery we want to examine is the Fermi paradox: the fact that
- we are an intelligent civilization
- we cannot observe any signs that other intelligent civilizations ever existed in the universe
One explanation for the Fermi paradox is that we are the only intelligent civilization in the universe. A far more chilling explanation is that intelligent civilizations emerge quite frequently, but that all other intelligent civilizations that have come before us ended up destroying themselves before they could manage to make their mark on their universe.
We can reason about which of the above two explanations is more likely if we have the audacity to assume a model for the emergence and development of civilizations in a universe 'similar to ours.' In such a model, it is usually useful to distinguish different 'types' of civilizations. Type 0 civilizations are civilizations with similar levels of technology to our own. If a Type 0 civilization survives long enough and accumulates enough scientific knowledge, it can make a transition to a Type I civilization--a civilization which has attained mastery of its home planet. A Type I civilization, over time, can transition to a Type II civilization if it colonizes its solar system. We would suppose that a nearby civilization would have to have reached Type II in order for its activities to be prominent enough for us to be able to detect them. In the original terminology, a Type III civilization is one which has mastery of its galaxy, but in this post we take it to mean something else.
The simplest model for the emergence and development of civilizations would have to specify the following:
- the rate at which intelligent life appears in universes similar to ours;
- the rates at which these intelligent species transition from Type 0 to Type II and Type III civilizations--or self-destruct in the process;
- the visibility of Type II and Type III civilizations to Type 0 civilizations elsewhere
- the proportion of advanced civilizations which ultimately adopt expansionist policies;
- the speed at which those Type III civilizations can expand and colonize the universe.
In the model we propose in this post, the above parameters are held constant throughout the entire history of the universe. The value of the model is that, given a particular specification of the parameters, we can apply Bayesian inference to see how well the model explains the Fermi paradox. The idea is to simulate many different histories of universes for a given set of parameters, so as to find the expected number of observers who observe the Fermi paradox under that specification. More details about Bayesian inference are given in Part 2 of this tutorial.
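The inference loop just described might be sketched like this. Note that `simulate_history` below is a trivial stand-in whose behavior is invented purely to exercise the machinery; the real model is the simulation built in part 1.

```python
import random

def simulate_history(params, rng):
    """Stand-in for the part 1 simulation: return the number of observers
    in one simulated history who observe the Fermi paradox (invented)."""
    rate = 1.0 if params["a"] < 0.01 else 0.05
    return sum(rng.random() < rate for _ in range(3))

def expected_fermi_observers(params, n=2000, seed=0):
    """Monte Carlo estimate of the expected number of Fermi-observing
    observers per history, used as the likelihood of our observation."""
    rng = random.Random(seed)
    return sum(simulate_history(params, rng) for _ in range(n)) / n

lonely = {"a": 0.001}   # civilizations rarely emerge
crowded = {"a": 0.1}    # civilizations emerge often
like_lonely = expected_fermi_observers(lonely)
like_crowded = expected_fermi_observers(crowded)

# With a uniform prior over the two parameter settings, the posterior odds
# are just the ratio of the two expected observer counts.
posterior_lonely = like_lonely / (like_lonely + like_crowded)
print(round(posterior_lonely, 3))
```

The same ratio-of-expected-observers comparison extends to a grid or a continuous prior over the parameters.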
This post is targeted at readers who are interested in simulating the emergence and expansion of intelligent civilizations in 'universes similar to ours' but who lack the programming knowledge to code these simulations. In this post we will guide the reader through the design and production of a relatively simple universe model and the methodology for doing 'anthropic' Bayesian inference using the model.