
Dark Arts 101: Be rigorous, on average

15 PhilGoetz 31 December 2014 12:37AM

I'm reading George Steiner's 1989 book on literary theory, Real Presences. Steiner is a literary theorist who achieved the trifecta of having appointments at Oxford, Cambridge, and Harvard. His book demonstrates an important Dark Arts method of argument.

So far, Steiner's argument appears to be:

  1. Human language is an undecidable symbol-system.
  2. Every sentence therefore carries with it an infinite amount of meaning, the accumulation of all connotations, contexts, and historical associations invoked, and invoked by those invocations, etc. Alternatively, every sentence contains no meaning at all, since none of those words can refer to things in the world.
  3. The meaning of a sentence, therefore, is not finite or analyzable, but transcendent.
  4. The transcendent is the search for God.
  5. Therefore, all good literature is a search for God.

The critics quoted on the back of the book, and its reviews on Amazon, praise Steiner's rigor and learning. It is impressive. Within a single paragraph he may show the relationship between Homer, 12th-century theological works, Racine, Shakespeare, and Schoenberg. And his care and precision with words is exemplary; I have the impression, even when he speaks of meaning in music or other qualia-laden subjects, that I know exactly what he means.

He was intelligent enough to trace the problems he was grappling with out past the edges of his domain of expertise. The key points of his argument lie not in literary theory, but in information theory, physics, artificial intelligence, computability theory, linguistics, and transfinite math.

Unfortunately, he knows almost nothing about any of those fields, and his language is precise enough to be wrong, which he is when he speaks on any of those subjects. How did he get away with it?

Answer: He took a two-page argument about things he knew little about, spread it across 200 pages, and filled the gaps with tangential statements of impressive rigor and thoroughness on things he was expert in.


Productivity as a function of ability in theoretical fields

14 Stefan_Schubert 26 January 2014 01:16PM

I argued in this post that the differences in capability between different researchers are vast (Kaj Sotala provided me with some interesting empirical evidence that backs up this claim). Einstein's contributions to physics or John von Neumann's contributions to mathematics (and a number of other disciplines) are arguably at least hundreds of times greater than those of an average physicist or mathematician.

At the same time, Yudkowsky argues that "in the space of brain designs" the difference between the village idiot and Einstein is tiny. Their brains are extremely similar, with the exception of some "minor genetic tweaks". Hence we get the following picture:


The picture I am painting is rather something like this:

It would seem that these pictures are incompatible - something that would be a problem for my picture, since I think that Yudkowsky's picture is right. So how can they both be true? The answer is, obviously, that they are measuring different things. The first is measuring something like the difference in brain design that is relevant for intelligence. The second is rather measuring the difference in capability to come up with physical theories that are of use to mankind. Here the village idiot is on par with the chimp and the mouse - all of whom have no such capability whatsoever. The average physicist has some such capability, but it's just a fraction of Einstein's.

Why is this? Well, it is not because the village idiot has no capability at all to come up with physical theories. In fact, a primitive physical theory that is quite useful is hard-wired into our brains. Rather, the reason is that the village idiot has no capability to come up with a physical theory that is not already well known.

Problems in theoretical physics and mathematics are typically so complex that they are hard to solve even for some of the world's smartest people. This means that unless you're quite smart, your chances of contributing anything at all to these disciplines are very slim. But if you are just a tiny bit smarter than everyone else, you'll be able to spot solutions to problem after problem that others have struggled with - these problems being problems precisely because they were hard to solve for people with a certain level of intelligence. Thus we get something like the following relationship between cognitive ability, in Yudkowsky's sense, and ability to come up with useful physical theories, i.e. productivity - what I'm talking about:



It is for this reason that people like von Neumann and Einstein are so vastly more productive than the average mathematician/physicist. The difference in intelligence is tiny on Yudkowsky's scale - obviously much smaller than that between Einstein and the village idiot - but this tiny difference allowed von Neumann and Einstein to solve lots of problems that were just too hard for other mathematicians/physicists. (It follows that an artificial intelligence just a tiny bit smarter than Einstein and von Neumann would be as much more productive than them as they are in relation to other mathematicians/physicists.)

(Obviously other characteristics besides intelligence are very important in these fields - e.g.  work ethic. I put that complication aside here, though.)

The same pattern holds in many other fields - e.g. sports. In a sense, the difference in ability between Rafael Nadal and the No. 300 player in the ATP rankings is very small - e.g. they hit the ball roughly as hard, and are roughly as good at, say, hitting the ball within half a metre of the baseline when not under pressure - but this small difference in ability makes for a huge difference in productivity (in the sense that lots of people want to watch Nadal - which means that his games generate a lot of utility - but few people want to watch the No. 300 player). 

But there are also fields where you have an entirely different pattern. The difference in productivity between the world's best cleaner and the average cleaner is, I'd guess, tiny. Similarly, if Peter is twice as strong as Paul, he will be able to fetch as much water as needed in half the time Paul needs - neither more, nor less. In other words, the relationship between ability and productivity in these fields is linear:
You get approximately this linear pattern in many physical jobs, but also in some intellectual jobs. Assume, for instance, that there is an intellectual field where the only thing that determines your productivity is your ability to acquire and memorize factual information. Say also that this field is neatly separated into small problems, so that your knowledge of one problem doesn't affect your ability to solve other problems. In this case, a twice as good capacity to acquire and memorize factual information will mean that you'll be able to solve twice as many of these problems - neither more nor less. Now there is obviously no intellectual field where you have exactly this pattern, but there are fields - the more "descriptive", as opposed to theoretical, social sciences come to mind - which at least approach it, and where the differences in productivity hence are much smaller than they are in theoretical physics or mathematics. (Of course, there are other patterns besides these; for instance, in some jobs, what's important is that you meet some minimum level of ability, beyond which more ability translates in very little additional productivity.)
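As a toy sketch of the two shapes (my own parameterization, nothing in the post specifies these curves): a roughly threshold-shaped relationship for theoretical fields, where output is negligible below the current research frontier and climbs steeply just above it, versus a linear relationship for water-fetching-like work.

```python
import math

# Toy illustration of the two patterns; the curve shapes and parameters are
# assumptions for illustration, not anything claimed in the post.

def theoretical_productivity(ability: float, frontier: float = 0.95, steepness: float = 200.0) -> float:
    # Logistic curve centred near the research frontier: essentially zero below it,
    # rising very steeply for those just above it.
    return 1.0 / (1.0 + math.exp(-steepness * (ability - frontier)))

def linear_productivity(ability: float) -> float:
    # Water-fetching pattern: twice the ability, twice the output - neither more nor less.
    return ability

for a in (0.50, 0.90, 0.94, 0.96, 1.00):
    print(f"ability={a:.2f}  theoretical={theoretical_productivity(a):.3f}  linear={linear_productivity(a):.2f}")
```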

Because different academic disciplines have more or less the same pay structure and are governed by similar rules and social institutions, these large differences between them are seldom noted. This contributes to our inability to see how huge the differences in productivity between scientists are in some disciplines.

The difference between these two patterns is due to the fact that the first kind of job is more "social" than the latter in a particular way. The usefulness of your work in theoretical physics depends on how good others are at theoretical physics in a way the usefulness of your water fetching doesn't. Even if you're weak, you'll still contribute something by carrying a small amount of water to put out a fire, but if you're not above a certain level of cognitive ability, your work in theoretical physics will have no value whatsoever.

I suppose that economists must have written on this phenomenon - what I term other-dependent productivity. If so, I'd be interested in that and in adopting their terminology.

I think one reason why people have trouble accepting Yudkowsky's picture is that they note how vastly more productive Einstein was than an average physicist (let alone the village idiot...) and then infer that this difference must be due to a vast difference in intelligence. Hence, pointing out that the difference in productivity can be vast even though the difference in intelligence is not - because productivity in theoretical physics is strongly other-dependent - should make people more disposed to accept Yudkowsky's picture.

It would be interesting to discuss what the relationship between ability and productivity is in different jobs and intellectual fields. I leave that for later, though. Obviously, the question of how ability is to be defined is relevant here. This question was extensively discussed in the comments to Yudkowsky's post, but I have avoided discussing it for two reasons: firstly, because I think it is possible to get an intuitive grasp of the phenomena I'm discussing without a precise definition of ability, and, secondly, because an extensive discussion of this notion would have made the post far too long and complicated.

Edit: Here is a relevant article I just found on Marginal Revolution on "winner-take-all economies" where "small differences in skills can mean large differences in returns". It also has some useful tips for further reading. 

Chocolate Ice Cream After All?

3 pallas 09 December 2013 09:09PM

I have collected some thoughts on decision theory and am wondering whether they are any good, or whether I'm just thinking nonsense. I would really appreciate some critical feedback. Please be charitable in terms of language and writing style, as I am not a native English speaker and this is the first time I have written such an essay.

Overview

  • The classical notion of free will messes up our minds, especially in decision-theoretic problems. Once we come to see it as confused and reject it, we realize that our choices in some sense not only determine the future but also the past.
  • If determining the past conflicts with our intuitions of how time behaves, then we need to adapt our intuitions.
  • The A,B-Game shows us that, as far as the rejection of free will allows for it, it is in principle possible to choose our genes.
  • Screening off only applies if we consider our action to be independent of the variable of interest – at least in expectation.
  • When dealing with Newcomblike problems, we have to be clear about which forecasting powers are at work. Likewise, it turns out to be crucial to precisely point out which agent knows how much about the setting of the game.
  • In the standard version of Newcomb's Soda, one should choose chocolate ice cream – unless the game is specified such that previous subjects (unlike us) did not know of any interdependence between soda and ice cream.
  • Variations of Newcomb’s Soda suggest that the evidential approach makes us better off.
  • The analysis of Newcomb’s Soda shows that its formulation fundamentally differs from the formulation of Solomon’s Problem.  
  • Given all study-subjects make persistent precommitments, a proper use of evidential reasoning suggests precommitting to take chocolate ice cream. This is why Newcomb’s Soda does not show that the evidential approach is dynamically inconsistent.
  • The tickle defense does not apply to the standard medical version of Solomon’s Problem. In versions where it applies, it does not tell us anything non-trivial.
  • Evidential reasoning seems to be a winning approach not only in Newcomb’s Problem, but also in Newcomb’s Soda and in the medical version of Solomon’s Problem. Therefore, we should consider a proper use of evidential reasoning as a potentially promising component when building the ultimate decision algorithm.

In the standard formulation of Newcomb's Soda, the evidential approach suggests picking chocolate ice cream, since this makes it more probable that we will have been awarded the million dollars. Hence, it denies us the thousand dollars we could actually win if we took vanilla ice cream. Admittedly, this may be counterintuitive. Common sense tells us that, considering the thousand dollars, one could change the outcome, whereas one cannot change which type of soda one has drunk; therefore we have to make a decision that actually affects our outcome. Maybe the flaw in this kind of reasoning doesn't pose a problem to our intuitions as long as we deal with a "causal-intuition-friendly" setting of numbers. So let's consider various versions of this problem in order to thoroughly compare the two competing algorithmic traits. Let's find out which one actually wins and therefore should be implemented by rational agents.

In this post, I will discuss Newcomblike problems and conclude that the arguments presented support an evidential approach. Various decision problems have shown that plain evidential decision theory is not a winning strategy. I instead propose to include evidential reasoning in more elaborate decision theories, such as timeless decision theory or updateless decision theory, since they also need to come up with an answer in Newcomblike problems.
Looking at the strategies proposed for those problems, the outputs of currently discussed decision theories can be grouped into evidential-like and causal-like. I am going to outline which of these two traits a winning decision theory must possess.
Let’s consider the following excerpt by Yudkowsky (2010) about the medical version of Solomon’s Problem:

“In the chewing-gum throat-abscess variant of Solomon’s Problem, the dominant action is chewing gum, which leaves you better off whether or not you have the CGTA gene; but choosing to chew gum is evidence for possessing the CGTA gene, although it cannot affect the presence or absence of CGTA in any way.”

In what follows, I am going to elaborate on why I believe this point (in the otherwise brilliant paper) needs to be reconsidered. Furthermore, I will explore possible objections and have a look at other decision problems that might be of interest to the discussion.
But before we discuss classical Newcomblike problems, let’s first have a look at the following thought experiment:

 

The school mark is already settled

Imagine you are going to school; it is the first day of the semester. Suppose you only care about getting the best marks. Now your math teacher tells you that he knows you very well, and that this is why he has already written down the mark you will receive for the upcoming exam. To keep things simple, let's cut down your options to "study as usual" and "don't study at all". What are you going to do? Should you study as if you didn't know about the settled mark? Or should you not study at all, since the mark has already been written down?

This is a tricky question because the answer to it depends on your credence in the teacher’s forecasting power. Therefore let's consider the following two cases:

  1. Let's assume that the teacher is correct in 100% of the cases. Now we find ourselves in a problem that resembles Newcomb's Problem, since our decision exactly determines the output of his prediction. Just as an agent who really wishes to win the most money should take only one box in Newcomb's Problem, you should study for the exam as if you didn't know that the mark is already settled. (EDIT: For the record, one can point out a structural (but not relevant) difference between the two problems: Here, the logical equivalences "studying" <--> "good mark" and "not studying" <--> "bad mark" are part of the game's assumptions, while the teacher predicts which of these two worlds we live in. In Newcomb's Problem, Omega predicts the logical equivalences of taking boxes and payoffs.)
  2. Now let's consider a situation where we assume a teacher with no forecasting power at all. In such a scenario, the student's future effort is independent of the settled mark; that is, no matter what input the student provides, the teacher's output will have been random. Therefore, if we find ourselves in such a situation, we shouldn't study for the exam and should instead enjoy the spare time gained.

(Of course, we can also think of a case 3) where the teacher's prediction is wrong in 100% of all cases. Since marks usually aren't binary, let's take "wrong" to mean the complementary mark: the best mark corresponds to the worst, the second best to the second worst, and so on. In such a case, not studying at all and returning an empty exam sheet would determine receiving the best mark. However, this scenario won't be of much interest to us.)
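A quick sketch of the expected outcomes in these cases (my own toy payoffs: a good mark is worth 1, a bad mark 0, and the teacher's prediction matches the student's actual behaviour with accuracy q):

```python
# Toy calculation: the teacher writes down the good mark iff he predicted that
# the student studies, and his prediction matches actual behaviour with accuracy q.

def expected_mark(studies: bool, q: float) -> float:
    # If the student studies, the prediction "studies" (hence the good mark) was
    # made with probability q; if the student slacks, with probability 1 - q.
    return q if studies else 1 - q

for q in (1.0, 0.9, 0.5, 0.0):
    print(f"q={q}:  study -> {expected_mark(True, q)},  slack -> {expected_mark(False, q)}")
# q=1.0 is case 1, q=0.5 is case 2 (slacking costs nothing), q=0.0 is case 3
# (hand in an empty sheet); anywhere above chance, the fatalist loses in
# expectation by not studying.
```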
This thought experiment suggests that a deterministic world does not necessarily imply fatalism, since in expectation the fatalist (who wouldn't feel obligated to study because the mark is "already written down") would lose in every case where the teacher predicts other than randomly. Generally, we can say that - apart from case 2) - the student's studying behaviour is relevant for receiving a good mark.
This thought experiment not only makes it clear that determinism does not imply fatalism, but also shows that fatalists tend to lose once they stop investing resources in desirable outcomes. This will be important in subsequent sections. Now let us get to the actual topic of this article, which has already been mentioned as an aside: Newcomblike problems.

 

Newcomb’s Problem

The standard version of Newcomb's Problem has been thoroughly discussed on Less Wrong. Many would agree that one-boxing is the correct solution, for one-boxing agents obtain a million dollars, while two-boxers only take home a thousand dollars. To clarify the structure of the problem: an agent chooses between two options, "AB" and "B". Relatively speaking, option B "costs" a thousand dollars, because one abandons transparent box A containing that amount of money. As we play with the predictor Omega, who has almost 100% forecasting power, our decision determines what past occurred; that is, we determine whether Omega put a million into box B or not. By "determining" I mean something like "being compatible with". Hence, choosing box B is compatible only with a past where Omega put a million into it.

 

Newcomb’s Problem’s Problem of Free Will

To many, Newcomb's Problem seems counterintuitive. People tend to think: "We cannot change the past, as past events have already happened! So there's nothing we can do about it. Still, somehow the agents that only choose B become rich. How is this possible?"
This uneasy feeling can be resolved by clarifying the notion of "free will", i.e. by acknowledging that a world state X either logically implies (hard determinism) or probabilistically suggests (hard incompatibilism, which holds that free will is impossible and complete determinism is false) another world state Y or a set of possible world states (Y1, Y2, Y3, ..., Yn) – no matter whether X precedes Y or vice versa. (Paul Almond has shown in his paper on decision theory – unfortunately his page has been down lately – that upholding this distinction does not affect the clarification of free will in decision-theoretic problems. Therefore, I chose to go with hard determinism.)

The fog will lift once we accept the above. Since our action is a subset of a particular world state, the action itself is also implied by preceding world states; that is, once we know all the facts about a preceding world state, we can derive facts about subsequent world states.
If we look more closely, we cannot really choose in the way people commonly think we do. Common sense tells us that we confront a "real choice" if our decision is not just determined by external factors and also not picked at random, but governed by our free will. But what could this third case even mean? Despite its intuitive usefulness, the classical notion of choice seems to be ill-defined, since it requires a problematic notion of free will – one that is supposed to be neither random nor determined.
This is why I want to suggest a new definition of choice: choosing is the way agents execute what they were determined to do by other world states. Choosing has nothing to do with "changing" what did or is going to happen. The only thing that actually changes is our perception of what did or is going to happen, since executions produce new data points that call for updates.
So unless we could use a “true” random generator (which would only be possible if we did not assume complete determinism to be true) in order to make decisions, what we are going to do is “planned” and determined by preceding and subsequent world states.
If I take box B, then this determines a past world state where Omega has put a million dollars into this box. If I take both box A and box B, then this determines a past world state where Omega has left box B empty. Therefore, when it comes to deciding, the actions that determine (or are compatible with) not only desirable future worlds but also desirable past worlds are the ones that make us win.
One may now object that we aren't "really" determining the past, but only our perception of it. That's an interesting point; we will have a closer look at it in the next section. For now, I'd like to call the underlying perception of time into question. Once I choose only box B, it seems that the million dollars I receive is not just an illusion of my map but is really out there. Admittedly, the past seems unswayable, but this example shows that maybe our conventional perception of time is misleading, as it conflicts with the notion of us choosing what happened in the past.
How come self-proclaimed deterministic non-fatalists are in fact fatalists when they deal with the past? I'd suggest perceiving time not as divided into separate categories like "stuff that has passed" and "stuff that is about to happen", but rather as one dimension where every dot is just as real as any other, and where the manifestation of one particular dot restrictively determines the set of possible manifestations other dots could embody. It is crucial to note that such a dot would describe the whole world in three spatial dimensions, while subsets of world states could still behave independently.

Perceiving time without an inherent "arrow" is not new to science and philosophy, but still, readers of this post will probably need a compelling reason why this view would be more goal-tracking. Considering Newcomb's Problem, a reason can be given: Intuitively, the past seems much more "settled" to us than the future. But it seems to me that this notion is confounded, as we often know more about the past than we know about the future. This could tempt us to project this imbalance of knowledge onto the universe, such that we perceive the past as settled and unswayable in contrast to a shapeable future. However, such a conventional set of intuitions conflicts strongly with us picking only one box. These intuitions would tell us that we cannot affect the content of the box; it is already filled or empty, since it has been prepared in the now inaccessible past.

Changing the notion of time into one block would lead to "better" intuitions, because they would directly suggest choosing one box, as this action is compatible only with a more desirable past. Therefore we might need to adapt our intuitions, so that the universe looks normal again. To illustrate the ideas discussed above and to put them into practice, I have constructed the following game:

 

The A,B-Game

You are confronted with Omega, a 100% correct predictor. In front of you, there are two buttons, A and B. You know that there are two kinds of agents. Agents with the gene G_A and agents with the gene G_B. Carriers of G_A are blessed with a life expectancy of 100 years whereas carriers of G_B die of cancer at the age of 40 on average. Suppose you are much younger than 40. Now Omega predicts that every agent who presses A is a carrier of G_A and every agent that presses B is a carrier of G_B. You can only press one button, which one should it be if you want to live for as long as possible?
People who prefer to live for a hundred years over forty years would press A. They would even pay a lot of money in order to be able to do so. One might object, though, that one cannot change or choose one's genes. Here we need to be clear about which definition of choice we are using. Assuming the conventional one, I would agree that one cannot choose one's genes; but then, when getting dressed, one cannot choose one's jeans either, as the conventional understanding of choice requires a vacuous notion of free will – non-random yet not determined – that is not applicable. Once we use the definition I introduced above, we can say that we choose our jeans. Likewise, we can choose our genes in the A,B-Game. If we one-box in Newcomb's Problem, we should also press A here, because the two problems are structurally identical (except for the labels "box" versus "gene").
The notion of objective ambiguity of genes only stands if we believe in some sort of objective ambiguity about which choices will be made. When facing a correct predictor, those of us who believe in indeterministic objective ambiguity of choices have to bite the bullet that their genes would be objectively ambiguous. Such a model seems counterintuitive, but not contradictory. However, I don't feel forced to adopt this indeterministic view.

Let us focus on the deterministic scenario again: In this case, our past already determined our choice, so there is only one way we will go and only one way we can go.
We don't know whether we are determined to do A or B. By "choosing" the one action that is compatible only with the more desirable past, we are better off. Just as in Newcomb's Problem, where we don't know whether B is empty or not, we have to behave in such a way that it must already have been filled. From our perspective, with little knowledge about the past, our choice determines the manifestation of our map of the past. Apparently, this is exactly what we do when making choices about the future. Taking actions determines the manifestation of our map of the future. Although the future is already settled, we don't yet know its exact manifestation. Therefore, from our perspective, it makes sense to act in ways that determine the most desirable futures. This does not automatically imply that some mysterious "change" is going to happen.
In both directions it feels like one would change the manifestation of other world states, but when we look more closely we cannot even spell out what that would mean. The word “change” only starts to become meaningful once we hypothetically compare our world with counterfactual ones (where we were not determined to do what we do in our world). In such a framework we could consistently claim that the content of box B “changes” depending on whether or not we choose only box B.

 

Screening off

Following this approach of determining one's perception of the world, the question arises whether every change in perception is actually goal-tracking. We can ask ourselves whether an agent should avoid new information if she knew that the new information had negative news value. For instance, if an agent who is suspected of having lung cancer and is awaiting the results of her lung biopsy seeks actions that make more desirable past world states more likely, then she should figure out a way not to receive any mail, for instance by declaring an incorrect postal address. This naive approach obviously fails because of a lack of proper Bayesian updating. The action "avoiding receiving mail" screens off the desirable outcome, so that once we know about this action, we don't learn anything about the biopsy in the (very probable) case that we don't receive any mail.

In the A,B-Game, this doesn't apply, since we believe Omega's prediction to be true when it says that A necessarily belongs to G_A and B to G_B. Generally, we can distinguish the cases by clarifying the existing independencies: In the lung cancer case, where we simply don't know better, we can assume that P(prevention|positive lab result)=P(prevention|negative lab result)=P(prevention). Hence, screening off applies. In the A,B-Game, we should believe that P(Press A|G_A)>P(Press A)=P(Press A|G_A or G_B). We obtain this relevant piece of information thanks to Omega's forecasting power. Here, screening off does not apply.
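A small numerical sketch of the contrast (the numbers are my own, chosen only for illustration): in the biopsy case, conditioning on the mail-avoiding action leaves the probability of the bad news unchanged, whereas in the A,B-Game pressing A really does shift the posterior on the gene.

```python
# Biopsy case: whether the agent avoids mail does not depend on the (unknown)
# biopsy result, so the posterior equals the prior -- screening off applies.
p_cancer = 0.3
p_avoid_given_cancer = p_avoid_given_healthy = 0.5  # action independent of result
p_avoid = p_cancer * p_avoid_given_cancer + (1 - p_cancer) * p_avoid_given_healthy
print(p_cancer * p_avoid_given_cancer / p_avoid)  # 0.3 -- no evidential gain

# A,B-Game with a reliable Omega: P(press A | G_A) > P(press A), so pressing A
# carries information about the gene -- no screening off.
p_GA = 0.5
p_pressA_given_GA, p_pressA_given_GB = 0.99, 0.01
p_pressA = p_GA * p_pressA_given_GA + (1 - p_GA) * p_pressA_given_GB
print(p_GA * p_pressA_given_GA / p_pressA)  # 0.99 -- strong evidence for G_A
```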

Subsequently, one might object that the statement P(Press A|G_A)>P(Press A) leads to a conditional independence as well, at least in cases where not all the players who press A necessarily belong to G_A. Then you might be pressing A because of your reasoning R_1, which would screen off pressing A from G_A. A further objection could be that even if one could show a dependency between G_A and R_1, you might be choosing R_1 because of some meta-reasoning R_2 that again provides a reason not to press A. However, considering these objections more thoroughly, we realize that R_1 has to be congruent with, or at least evenly associated (in G_A as well as in G_B) with, pressing A. The same goes for R_2. If this weren't the case, then we would be talking about another game – a game where we knew, for instance, that 90% of the G_A carriers choose button A (without thinking) because of the gene and 10% of the G_B carriers choose button A because of some sort of evidential reasoning. Knowing this, choosing A out of evidential reasoning would be foolish, since we already know that only G_B carriers could do that. Once we know this, evidential reasoners would suggest not pressing A (unless B offers an even worse outcome). So these further objections fail as well, as they implicitly change the structure of the discussed problem. We can conclude that no screening off applies as long as an instance with forecasting power tells us that a particular action makes the desirable outcome likelier.
Now let’s have a look at an alteration of the A,B-Game in order to figure out whether screening-off might apply here.

 

A Weak Omega in The A,B-Game

Thinking about the A,B-Game, what happens if we decrease Omega's forecasting power? Let's assume now that Omega's prediction is correct in only 90% of all cases. Should this fundamentally change our choice of whether to press A or B, given that we only pressed A as a consequence of our reasoning?
To answer that, we need to be clear about why agents believe in Omega's predictions. They believe in them because they have been correct so many times. This constitutes Omega's strong forecasting power. As we saw above, screening off only applies if the predicting instance (Omega, or us reading a study) has no forecasting power at all.
In the A,B-Game, as well as in the original Newcomb’s Problem, we also have to take the predictions of a weaker Omega (with less forecasting power) into account, unless we face an Omega that happens to be right by chance (i.e. in 50% of the cases when considering a binary decision situation).

If, in the standard A,B-Game, we consider pressing A to be important, and if we were willing to spend a large amount of money in order to be able to press A (suppose button A would send a signal causing a withdrawal from our bank account), then this amount should only gradually shrink once we decrease Omega's forecasting power. The question now arises whether we would also have to "choose" the better genes in the medical version of Solomon's Problem, and whether there might be a fundamental difference between it and the original Newcomb's Problem.
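As a back-of-the-envelope sketch of how that willingness to pay scales (my own numbers, measuring the payoff in expected years of life rather than money):

```python
# With probability q Omega's prediction matches our button, so pressing A is
# evidence of G_A (long life) and pressing B is evidence of G_B (short life).
LONG_LIFE, SHORT_LIFE = 100.0, 40.0

def expected_years(press_a: bool, q: float) -> float:
    good, bad = (LONG_LIFE, SHORT_LIFE) if press_a else (SHORT_LIFE, LONG_LIFE)
    return q * good + (1 - q) * bad

for q in (1.0, 0.9, 0.75, 0.5):
    premium = expected_years(True, q) - expected_years(False, q)
    print(f"Omega accuracy {q}: pressing A is worth {premium:.0f} extra expected years")
# 60, 48, 30, 0 -- the premium falls off gradually and only vanishes when Omega
# is no better than chance.
```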

 

Newcomb’s versus Solomon’s Problem

In order to uphold this convenient distinction, people tell me that “you cannot change your genes” though that’s a bad argument since one could reply “according to your definition of change, you cannot change the content of box B either, still you choose one-boxing”. Further on, I quite often hear something like “in Newcomb’s Problem, we have to deal with Omega and that’s something completely different than just reading a study”. This – in contrast to the first – is a good point.

In order to accept the forecasting power of a 100% correct Omega, we already have to presume induction to be legitimate. Or else one could say: "Well, I see that Omega has been correct in 3^^^3 cases already, but why should I believe that it will be correct the next time?" As sophisticated as this may sound, such an agent would lose terribly. So how do we deal with studies, then? Do they have any forecasting power at all? It seems that this again depends on the setting of the game. Just as Omega's forecasting power can be set, the forecasting power of a study can be properly defined as well. It can be described by assigning values to two variables: its descriptive power and its inductive power. To settle them, we have to answer two questions: 1. How correct is the study's description of the population? 2. How representative is the study's population of the future population of agents acting in knowledge of the study? Or in other words, to what degree can one consider the study subjects to be in one's reference class in order to make true predictions about one's behaviour and the outcome of the game? Once this is clear, we can infer the forecasting power: how much forecasting power does the study have? Let's assume that the study we deal with is correct in what it describes. Those who wish can use a discounting factor; however, this is not important for the subsequent arguments and would only complicate them.

Considering the inductive power, it gets trickier. Omega's predictions are defined to be correct; in contrast, the study's predictions have not been tested. Therefore we are quite uncertain about the study's forecasting power. It would be 100% if and only if every factor involved were specified such that, taken together, they compel identical outcomes in the study and in our game. Due to induction, we do have reason to assume a positive value of forecasting power. To identify its specific value (which discounts the forecasting power according to the specified conditions), we would need to settle every single factor that might be involved. So let's keep it simple by assuming 100% forecasting power. As long as there is a positive value of forecasting power, the basic point of the subsequent arguments (which presume 100% forecasting power) will also hold when discounted.
Thinking about the inductive power of the study, there still is one thing that we need to specify: It is not clear what exactly previous subjects of the study knew.

For instance, in a case A), the subjects of the study knew nothing about the tendency of CGTA-carriers to chew gum. First, their genome was analyzed; then they had to decide whether or not to chew gum. In such a case, the subjects' knowledge is quite different from that of those who play the medical version of Solomon's Problem. Therefore screening off applies. But does it apply to the same extent as in the avoiding-bad-news example mentioned above? That seems to be the case. In the avoiding-bad-news example, we assumed that there is no connection between the variables "lung cancer" and "avoiding mail". In Solomon's Problem such an independence can be settled as well. Then the variables "having the gene CGTA" and "not chewing gum because of evidential reasoning" are also assumed to be independent. Total screening off applies. For an evidential reasoner who knows that much, choosing not to chew gum would then be as irrational as declaring an incorrect postal address when awaiting biopsy results.

Now let us consider a case B) where the subjects were introduced to the game just as we were. Then they would know about the tendency of CGTA-carriers to chew gum, and they themselves might have used evidential reasoning. In this scenario, screening off does not apply. This is why not chewing gum would be the winning strategy.
One might say that of course the study-subjects did not know of anything, and that we should assume case A) a priori. I only partially agree with that. The screening off can already be weakened if, for instance, the subjects knew why the study was conducted. Maybe there was anecdotal evidence about the heredity of a tendency to chew gum, which was about to be confirmed properly.
Without further clarification, one can plausibly assume a probability distribution over various intermediate cases between A and B, where screening off becomes gradually fainter the closer we get to B. Of course, there might also be cases where anecdotal evidence leads astray, but in order to cancel out the argument above, anecdotal evidence would have to be equivalent, in expectation, to knowing nothing at all. Since it seems to be (slightly) better than knowing nothing, it is not a priori clear that we have to assume case A right away.
So when compiling a medical version of Solomon’s Problem, it is important to be very clear about what the subjects of the study were aware of.

 

What about Newcomb’s Soda?

After exploring screening off and possible differences between Newcomb’s Problem and Solomon’s Problem (or rather between Omega and a study), let’s investigate those questions in another game. My favourite of all Newcomblike problems is called Newcomb’s Soda and was introduced in Yudkowsky (2010). Comparing Newcomb’s Soda with Solomon’s Problem, Yudkowsky writes:
“Newcomb’s Soda has the same structure as Solomon’s Problem, except that instead of the outcome stemming from genes you possessed since birth, the outcome stems from a soda you will drink shortly. Both factors are in no way affected by your action nor by your decision, but your action provides evidence about which genetic allele you inherited or which soda you drank.”

Is there any relevant difference in structure between the two games?
In the previous section, we saw that once we settle that the study-subjects in Solomon’s Problem don’t know of any connection between the gene and chewing gum, screening off applies and one has good reasons to chew gum. Likewise, the screening off only applies in Newcomb’s Soda if the subjects of the clinical test are completely unaware of any connection between the sodas and the ice creams. But is this really the case? Yudkowsky introduces the game as one big clinical test in which you are participating as a subject:

“You know that you will shortly be administered one of two sodas in a double-blind clinical test. After drinking your assigned soda, you will enter a room in which you find a chocolate ice cream and a vanilla ice cream. The first soda produces a strong but entirely subconscious desire for chocolate ice cream, and the second soda produces a strong subconscious desire for vanilla ice cream.”

This does not sound like previous subjects had no information about a connection between the sodas and the ice creams. Maybe you, and you alone, received those specific insights. If this were the case, it would clearly have to be mentioned in the game's definition, since this factor is crucial when it comes to decision-making. In a game where the agent herself is a study-subject, she wouldn't, without further specification, expect by default that other subjects knew less about the game than she did. Therefore, let's assume in the following that all the subjects in the clinical test knew that the sodas cause a subconscious desire for a specific flavor of ice cream.

 

Newcomb’s Soda in four variations

Let “C” be the causal approach which states that one has to choose vanilla ice cream in Newcomb’s Soda. C only takes the $1,000 of the vanilla ice cream into account since one still can change the variable “ice cream”, whereas the variable “soda” is already settled. Let “E” be the evidential approach which suggests that one has to choose chocolate or vanilla ice cream in Newcomb’s Soda – depending on the probabilities specified. E takes both the $1,000 of the vanilla ice cream and the $1,000,000 of the chocolate soda into account. In that case, one argument can outweigh the other.  

Let's compile a series of examples. We denote "Ch" for chocolate, "V" for vanilla, "S" for soda and "I" for ice cream. In all versions, Ch-S will receive $1,000,000, V-I will receive $1,000, and P(Ch-S)=P(V-S)=0.5. Furthermore, we settle that P(Ch-I|Ch-S)=P(V-I|V-S) and call this term "p" in every version, so that we don't vary unnecessarily many parameters. As we are going to deal with large numbers, let's assume that utility is linear in money.

Version 1: Let us assume a case where the sodas are dosed homeopathically, so that no effect on the choice of ice creams can be observed. Ch-S and V-S choose from Ch-I and V-I randomly so that p=P(V-I|Ch-S)=P(Ch-I|V-S)=0.5. Both C and E choose V-I and win 0.5*$1,001,000 + 0.5*$1,000 = $501,000 in expectation. C only considers the ice cream whereas E considers the soda as well, though in this case the soda doesn't change anything as the Ch-S are equally distributed over Ch-I and V-I.

Version 2: Here p=0.999999. Since P(Ch-S)=P(V-S)=0.5, one Ch-I in a million will have originated from V-S, whereas one V-I in a million will have originated from Ch-S. The other 999,999 Ch-I will have determined the desired past, Ch-S, through their choice of Ch-I. So if we participated in this game a million times and followed E, which suggests choosing Ch-I each time, we could expect to win 999,999*$1,000,000 = $999,999,000,000 overall. This is different from following C's advice. As C tells us that we cannot affect which soda we have drunk, we would choose V-I each time and could expect to win 1,000,000*$1,000 + $1,000,000 = $1,001,000,000 in total. The second outcome, which C is responsible for, is 999 times worse than the first (which was suggested by E). In this version, E clearly outperforms C in helping us to make the most money.

Version 3: Now we have p=1. This version is equivalent to the standard version of the A,B-Game. What would C do? It seems that C ought to maintain its view that we cannot affect the soda. Therefore, only considering the ice cream-part of the outcome, C will suggest choosing V-I. This seems to be absurd: C leaves us disappointed with $1,000, whereas E makes us millionaires every single time.
A C-defender might say: "Wait! Now you have changed the game. Now we are dealing with a probability of 1!" The response would be: "Interesting, I can make p get as close to 1 as I want as long as it isn't 1, and the rules of the game and my conclusions would still remain. For instance, we can think of a number like 0.999…(100^^^^^100 nines in a row). So tell me why exactly the probability change of 0.000…(100^^^^^100 -1 zeros in a row)1 should make you switch to Ch-I? But wait, why would you – as a defender of C – even consider Ch-I, since it cannot affect your soda while it definitely prevents you from winning the $1,000 of the ice cream?"

The previous versions tried to exemplify why taking both arguments (the $1,000 and the $1,000,000) into account makes you better off at one edge of the probability range, whereas at the other edge, C and E produce the same outcomes. With a simple equation we can figure out for which p E would be indifferent between choosing Ch-I and V-I: p*$1,000,000 = (1-p)*$1,000,000 + $1,000, which gives us p = 0.5005. So for 0.5005 < p <= 1, E does better than C, and for 0 <= p <= 0.5005, E and C behave alike. Finally, let us consider the original version:

Version 4: Here we deal with p=0.9. From the above we could already deduce that deciding according to E makes us better off, but let's have a closer look for the sake of completeness: In expectation, choosing V-I makes us win 0.1*$1,000,000 + $1,000 = $101,000, whereas Ch-I leaves us with 0.9*$1,000,000 = $900,000, almost 9 times as much. After the insights above, it shouldn't surprise us too much that E clearly does better than C in the original version of Newcomb's Soda as well.
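The bookkeeping in the four versions can be checked in a few lines. A minimal sketch, using the post's evidential accounting (conditioning on one's own flavour choice) and the parameters defined above:

```python
# P(Ch-S) = P(V-S) = 0.5 and p = P(Ch-I|Ch-S) = P(V-I|V-S), as defined above.
MILLION, THOUSAND = 1_000_000, 1_000

def expected_winnings(p: float, choose_chocolate: bool) -> float:
    if choose_chocolate:
        # By Bayes, P(Ch-S | Ch-I) = p in this symmetric setup.
        return p * MILLION
    # P(Ch-S | V-I) = 1 - p, plus the guaranteed $1,000 for vanilla ice cream.
    return (1 - p) * MILLION + THOUSAND

for p in (0.5, 0.999999, 1.0, 0.9):  # versions 1, 2, 3 and 4 respectively
    print(f"p={p}: Ch-I -> ${expected_winnings(p, True):,.0f}, "
          f"V-I -> ${expected_winnings(p, False):,.0f}")
# Matches the per-game figures above: $500,000/$501,000, $999,999/$1,001,
# $1,000,000/$1,000 and $900,000/$101,000; the indifference point is p = 0.5005.
```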

The variations above illustrate that C has to eat V-I even if 99.9999% of Ch-S choose Ch-I and 99.9999% of V-S eat V-I. If you played it a million times, in expectation Ch-I would win the million 999,999 times and V-I just once. Can we really be indifferent about that? Wasn't it all about winning and losing? And who is winning here, and who is losing?

 

Newcomb-Soda and Precommitments

Another excerpt from Yudkowsky (2010):
“An evidential agent would rather precommit to eating vanilla ice cream than precommit to eating chocolate, because such a precommitment made in advance of drinking the soda is not evidence about which soda will be assigned.”
At first sight this seems intuitive. But if we look at the probabilities more closely, a problem arises: Let's consider an agent who precommits to a decision (assume a 100% persistent mechanism) before a standard game (p=0.9) starts. Let's assume that he precommits – as suggested above – to choose V-I. What credence should he assign to P(Ch-S|V-I)? Is it 0.5, as if he didn't precommit at all, or does something change? Basically, adding precommitments to the equation inhibits the effect of the sodas on the agent's decision. Again, we have to be clear about which agents are affected by this newly introduced variable. If we were the only ones who could precommit 100% persistently, then our game would fundamentally differ from the previous subjects' one. If they didn't precommit, we couldn't presuppose any forecasting power anymore, because the previous subjects decided according to the soda's effect, whereas we now decide independently of it. In this case, E would suggest precommitting to V-I. However, this would constitute an entirely new game without any forecasting power. If all the agents of the study make persistent precommitments, then the forecasting power holds; the game doesn't change fundamentally. Hence, the way previous subjects behaved remains crucial to our decision-making. Let's now imagine that we were playing this game a million times, each time irrevocably precommitting to V-I. In this case, if we consider ourselves to be sampled randomly among V-I, we can expect to originate from V-S 900,000 times. As p approaches 1, we see that it gets exceedingly unlikely to originate from Ch-S once we precommit to V-I. So a rational agent following E should precommit to Ch-I in advance of drinking the soda. Since E suggests Ch-I both during and before the game, this example doesn't show that E is dynamically inconsistent.

In the other game, where only we precommit persistently and the previous subjects don't, picking V-I doesn't make E dynamically inconsistent, as we would face another decision situation where no forecasting power applies. Of course, we can also imagine intermediate cases. For instance, one where we make precommitments and the previous subjects were able to make them as well, but we don't know whether they did. The more uncertain we get about their precommitments, the closer we approach the case where only we precommit, while the forecasting power gradually weakens. Those cases are more complicated, but they do not show a dynamic inconsistency in E either.

 

The tickle defense in Newcomblike problems

In the last section I want to have a brief look at the tickle defense, which is sometimes used to defend evidential reasoning by offering a less controversial output. For instance, it states that in the medical version of Solomon’s Problem an evidential reasoner should chew gum, since she can rule out having the gene as long as she doesn’t feel an urge to chew gum. So chewing gum doesn’t make it likelier to have the gene since she already has ruled it out.

I believe that this argument fails since it changes the game. Suddenly, the gene doesn't cause you to "choose chewing gum" anymore, but to "feel an urge to choose chewing gum". I admit, though, that in such a game a conditional independence would screen off the action "not chewing gum" from "not having the gene" – no matter what the previous subjects of the study knew. This is why it would be more attractive to chew gum. However, I don't see why this case should matter to us. In the original medical version of Solomon's Problem we are dealing with another game, where this particular kind of screening off does not apply. As the gene causes one to "choose chewing gum", we can only rule it out by not doing so. However, this conclusion has to be treated with caution. For one thing, depending on the numbers, one can only diminish the probability of the undesirable event of having the gene – not rule it out completely; for another, the diminishment only works if the previous subjects were not ignorant of a dependence between the gene and chewing gum – at least in expectation. Therefore the tickle defense only trivially applies to a special version of the medical Solomon's Problem and fails to persuade proper evidential reasoners to do anything differently in the standard version. Depending on the specification of the previous subjects' knowledge, an evidential reasoner would still chew or not chew gum.

Robust Cooperation in the Prisoner's Dilemma

69 orthonormal 07 June 2013 08:30AM

I'm proud to announce the preprint of Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic, a joint paper with Mihaly Barasz, Paul Christiano, Benja Fallenstein, Marcello Herreshoff, Patrick LaVictoire (me), and Eliezer Yudkowsky.

This paper was one of three projects to come out of the 2nd MIRI Workshop on Probability and Reflection in April 2013, and had its genesis in ideas about formalizations of decision theory that have appeared on LessWrong. (At the end of this post, I'll include links for further reading.)

Below, I'll briefly outline the problem we considered, the results we proved, and the (many) open questions that remain. Thanks in advance for your thoughts and suggestions!

Background: Writing programs to play the PD with source code swap

(If you're not familiar with the Prisoner's Dilemma, see here.)

The paper concerns the following setup, which has come up in academic research on game theory: say that you have the chance to write a computer program X, which takes in one input and returns either Cooperate or Defect. This program will face off against some other computer program Y, but with a twist: X will receive the source code of Y as input, and Y will receive the source code of X as input. And you will be given your program's winnings, so you should think carefully about what sort of program you'd write!

Of course, you could simply write a program that defects regardless of its input; we call this program DefectBot, and call the program that cooperates on all inputs CooperateBot. But with the wealth of information afforded by the setup, you might wonder if there's some program that might be able to achieve mutual cooperation in situations where DefectBot achieves mutual defection, without thereby risking a sucker's payoff. (Douglas Hofstadter would call this a perfect opportunity for superrationality...)

Previously known: CliqueBot and FairBot

And indeed, there's a way to do this that's been known since at least the 1980s. You can write a computer program that knows its own source code, compares it to the input, and returns C if and only if the two are identical (and D otherwise). Thus it achieves mutual cooperation in one important case where it intuitively ought to: when playing against itself! We call this program CliqueBot, since it cooperates only with the "clique" of agents identical to itself.
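For concreteness, here's a toy sketch of the setup in Python (my own illustration, not the paper's formalism): a program is a function that takes the opponent's source text and returns "C" (Cooperate) or "D" (Defect).

```python
import inspect

def defect_bot(opponent_source: str) -> str:
    # Defects no matter what it is shown.
    return "D"

def cooperate_bot(opponent_source: str) -> str:
    # Cooperates no matter what it is shown.
    return "C"

def clique_bot(opponent_source: str) -> str:
    # Cooperates iff the opponent's source is syntactically identical to its own.
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def play(bot_x, bot_y):
    # Each program receives the other's source code as its input.
    return bot_x(inspect.getsource(bot_y)), bot_y(inspect.getsource(bot_x))

if __name__ == "__main__":
    print(play(clique_bot, clique_bot))     # ('C', 'C'): mutual cooperation with itself
    print(play(clique_bot, defect_bot))     # ('D', 'D'): no sucker's payoff
    print(play(clique_bot, cooperate_bot))  # ('D', 'C')
```

Note how literal the source comparison is: a copy that differs by so much as a variable name gets defected against, which is exactly the fragility discussed next.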

There's one particularly irksome issue with CliqueBot, and that's the fragility of its cooperation. If two people write functionally analogous but syntactically different versions of it, those programs will defect against one another! This problem can be patched somewhat, but not fully fixed. Moreover, mutual cooperation might be the best strategy against some agents that are not even functionally identical, and extending this approach requires you to explicitly delineate the list of programs that you're willing to cooperate with. Is there a more flexible and robust kind of program you could write instead?

As it turns out, there is: in a 2010 post on LessWrong, cousin_it introduced an algorithm that we now call FairBot. Given the source code of Y, FairBot searches for a proof (of less than some large fixed length) that Y returns C when given the source code of FairBot, and then returns C if and only if it discovers such a proof (otherwise it returns D). Clearly, if our proof system is consistent, FairBot only cooperates when that cooperation will be mutual. But the really fascinating thing is what happens when you play two versions of FairBot against each other. Intuitively, it seems that either mutual cooperation or mutual defection would be stable outcomes, but it turns out that if their limits on proof lengths are sufficiently high, they will achieve mutual cooperation!

The proof that they mutually cooperate follows from a bounded version of Löb's Theorem from mathematical logic. (If you're not familiar with this result, you might enjoy Eliezer's Cartoon Guide to Löb's Theorem, which is a correct formal proof written in much more intuitive notation.) Essentially, the asymmetry comes from the fact that both programs are searching for the same outcome, so that a short proof that one of them cooperates leads to a short proof that the other cooperates, and vice versa. (The opposite is not true, because the formal system can't know it won't find a contradiction. This is a subtle but essential feature of mathematical logic!)

Generalization: Modal Agents

Unfortunately, FairBot isn't what I'd consider an ideal program to write: it happily cooperates with CooperateBot, when it could do better by defecting. This is problematic because in real life, the world isn't separated into agents and non-agents, and any natural phenomenon that doesn't predict your actions can be thought of as a CooperateBot (or a DefectBot). You don't want your agent to be making concessions to rocks that happened not to fall on them. (There's an important caveat: some things have utility functions that you care about, but don't have sufficient ability to predicate their actions on yours. In that case, though, it wouldn't be a true Prisoner's Dilemma if your values actually prefer the outcome (C,C) to (D,C).)

However, FairBot belongs to a promising class of algorithms: those that decide on their action by looking for short proofs of logical statements that concern their opponent's actions. In fact, there's a really convenient mathematical structure that's analogous to the class of such algorithms: the modal logic of provability (known as GL, for Gödel-Löb).

So that's the subject of this preprint: what can we achieve in decision theory by considering agents defined by formulas of provability logic?


Meta Decision Theory and Newcomb's Problem

5 wdmacaskill 05 March 2013 01:29AM

Hi all,

As part of my PhD I've written a paper developing a new approach to decision theory that I call Meta Decision Theory. The idea is that decision theory should take into account decision-theoretic uncertainty as well as empirical uncertainty, and that, once we acknowledge this, we can explain some puzzles to do with Newcomb problems, and can come up with new arguments to adjudicate the causal vs evidential debate. Nozick raised this idea of taking decision-theoretic uncertainty into account, but he did not defend the idea at length, and did not discuss implications of the idea.

I'm not yet happy to post this paper publicly, so I'll just write a short abstract of the paper below. However, I would appreciate written comments on the paper. If you'd like to read it and/or comment on it, please e-mail me at will dot crouch at 80000hours.org. And, of course, comments in the thread on the idea sketched below are also welcome.

 

Abstract

First, I show that our judgments concerning Newcomb problems are stakes-sensitive. By altering the relative amounts of value in the transparent box and the opaque box, one can construct situations in which one should clearly one-box, and situations in which one should clearly two-box. A plausible explanation of this phenomenon is that our intuitive judgments are sensitive to decision-theoretic uncertainty as well as empirical uncertainty: if the stakes are very high for evidential decision theory (EDT) but not for causal decision theory (CDT), then we go with EDT's recommendation, and vice versa for CDT over EDT.

Second, I show that, if we 'go meta' and take decision-theoretic uncertainty into account, we can get the right answer in both the Smoking Lesion case and the Psychopath Button case.

Third, I distinguish Causal MDT (CMDT) and Evidential MDT (EMDT). I look at what I consider to be the two strongest arguments in favour of EDT, and show that these arguments do not work at the meta level. First, I consider the argument that EDT gets the right answer in certain cases. In response to this, I show that one only needs to have small credence in EDT in order to get the right answer in such cases. The second is the "Why Ain'cha Rich?" argument. In response to this, I give a case where EMDT recommends two-boxing, even though two-boxing has a lower average return than one-boxing.

Fourth, I respond to objections. First, I consider and reject alternative explanations of the stakes-sensitivity of our judgments about particular cases, including Nozick's explanation. Second, I consider the worry that 'going meta' leads one into a vicious regress. I accept that there is a regress, but argue that the regress is non-vicious.

In an appendix, I give an axiomatisation of CMDT.

The Ellsberg paradox and money pumps

10 fool 28 January 2012 05:34PM

Followup to: The Savage theorem and the Ellsberg paradox

In the previous post, I presented a simple version of Savage's theorem, and I introduced the Ellsberg paradox. At the end of the post, I mentioned a strong Bayesian thesis, which can be summarised: "There is always a price to pay for leaving the Bayesian Way."1 But not always, it turns out. I claimed that there was a method that is Ellsberg-paradoxical, therefore non-Bayesian, but can't be money-pumped (or "Dutch booked"). I will present the method in this post.

I'm afraid this is another long post. There's a short summary of the method at the very end, if you want to skip the jibba jabba and get right to the specification. Before trying to money-pump it, I'd suggest reading at least the two highlighted dialogues.

Ambiguity aversion

To recap the Ellsberg paradox: there's an urn with 30 red balls and 60 other balls that are either green or blue, in unknown proportions. Most people, when asked to choose between betting on red or on green, choose red, but, when asked to choose between betting on red-or-blue or on green-or-blue, choose green-or-blue. For some people this behaviour persists even after due calculation and reflection. This behaviour is non-Bayesian, and is the prototypical example of ambiguity aversion.

There were some major themes that came out in the comments on that post. One theme was that I Fail Technical Writing Forever. I'll try to redeem myself.

Another theme was that the setup given may be a bit too symmetrical. The Bayesian answer would be indifference, and really, you can break ties however you want. However the paradoxical preferences are typically strict, rather than just tie-breaking behaviour. (And when it's not strict, we shouldn't call it ambiguity aversion.) One suggestion was to add or remove a couple of red balls. Speaking for myself, I would still make the paradoxical choices.

A third theme was that ambiguity aversion might be a good heuristic if betting against someone who may know something you don't. Now, no such opponent was specified, and speaking for myself, I'm not inferring one when I make the paradoxical choices. Still, let me admit that it's not contrived to infer a mischievous experimenter from the Ellsberg setup. One commentator puts it better than me:

Betting generally includes an adversary who wants you to lose money so they can win it. Possibly in psychology experiments [this might not apply] ... But generally, ignoring the possibility of someone wanting to win money off you when they offer you a bet is a bad idea.

Now betting is supposed to be a metaphor for options with possibly unknown results. In which case sometimes you still need to account for the possibility that the options were made available by an adversary who wants you to choose badly, but less often. And you should also account for the possibility that they were from other people who wanted you to choose well, or that the options were not determined by any intelligent being or process trying to predict your choices, so you don't need to account for an anticorrelation between your choice and the best choice. Except for your own biases.

We can take betting on the Ellsberg urn as a stand-in for various decisions under ambiguous circumstances. Ambiguity aversion can be Bayesian if we assume the right sort of correlation between the options offered and the state of the world, or the right sort of correlation between the choice made and the state of the world. In that case just about anything can be Bayesian. But sometimes the opponent will not have extra information, nor extra power. There might not even be any opponent as such. If we assume there are no such correlations, then ambiguity aversion is non-Bayesian.

The final theme was: so what? Ambiguity aversion is just another cognitive bias. One commentator specifically complained that I spent too much time talking about various abstractions and not enough time talking about how ambiguity aversion could be money-pumped. I will fix that now: I claim that ambiguity aversion cannot be money-pumped, and the rest of this post is about my claim.

I'll start with a bit of name-dropping and some whig history, to make myself sound more credible than I really am2. In the last twenty years or so many models of ambiguity averse reasoning have been constructed. Choquet expected utility3 and maxmin expected utility4 were early proposed models of ambiguity aversion. Later multiplier preferences5 were the result of applying the ideas of robust control to macroeconomic models. This results in ambiguity aversion, though it was not explicitly motivated by the Ellsberg paradox. More recently, variational preferences6 generalises both multiplier preferences and maxmin expected utility. What I'm going to present is a finitary case of variational preferences, with some of my own amateur mathematical fiddling for rhetorical purposes.

Probability intervals

The starting idea is simple enough, and may have already occurred to some LW readers. Instead of using a prior probability for events, can we not use an interval of probabilities? What should our betting behaviour be for an event with probability 50%, plus or minus 10%?

There are some different ways of filling in the details. So to be quite clear, I'm not proposing the following as the One True Probability Theory, and I am not claiming that the following is descriptive of many people's behaviour. What follows is just one way of making ambiguity aversion work, and perhaps the simplest way. This makes sense, given my aim: I should just describe a simple method that leaves the Bayesian Way, but does not pay.

Now, sometimes disjoint ambiguous events together make an event with known probability. Or even a certainty, as in an event and its negation. If we want probability intervals to be additive (and let's say that we do) then what we really want are oriented intervals. I'll use +- or -+ (pronounced: plus-or-minus, minus-or-plus) to indicate two opposite orientations. So, if P(X) = 1/2 +- 1/10, then P(not X) = 1/2 -+ 1/10, and these add up to 1 exactly.

Such oriented intervals are equivalent to ordered pairs of numbers. Sometimes it's more helpful to think of them as oriented intervals, but sometimes it's more helpful to think of them as pairs. So 1/2 +- 1/10 is the pair (3/5,2/5). And 1/2 -+ 1/10 is (2/5,3/5), the same numbers in the opposite order. The sum of these is (1,1), which is 1 exactly.

You may wonder, if we can use ordered pairs, can we use triples, or longer lists? Yes, this method can be made to work with those too. And we can still think in terms of centre, length, and orientation. The orientation can go off in all sorts of directions, instead of just two. But for my purposes, I'll just stick with two.

You might also ask, can we set P(X) = 1/2 +- 1/2? No, this method just won't handle it. A restriction of this method is that neither of the pair can be 0 or 1, except when they're both 0 or both 1. The way we will be using these intervals, 1/2 +- 1/2 would be the extreme case of ambiguity aversion. 1/2 +- 1/10 represents a lesser amount of ambiguity aversion, a sort of compromise between worst-case and average-case behaviour.

To decide among bets (having the same two outcomes), compute their probability intervals. Sometimes, the intervals will not overlap. Then it's unambiguous which is more likely, so it's clear what to pick. In general, whether they overlap or not, pick the one with the largest minimum -- though we will see there are three caveats when they do overlap. If P(X) = 1/2 +- 1/10, we would be indifferent between a bet on X and on not X: the minimum is 2/5 in either case. If P(Y) = 1/2 exactly, then we would strictly prefer a bet on Y to a bet on X.
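Here is the bookkeeping so far as a quick Python sketch, using exact fractions; the helper names are mine, invented for illustration. Pairs add componentwise, and bets with the same payoff are compared by the minimum of the pair.

    from fractions import Fraction as F

    P_X     = (F(3, 5), F(2, 5))     # 1/2 +- 1/10
    P_not_X = (F(2, 5), F(3, 5))     # 1/2 -+ 1/10
    P_Y     = (F(1, 2), F(1, 2))     # 1/2 exactly

    def add(p, q):
        # Orientation is what makes the pairs additive.
        return (p[0] + q[0], p[1] + q[1])

    def prefer(p, q):
        # Compare bets (with the same payoff) by the larger minimum; None means indifferent.
        if min(p) > min(q): return "first"
        if min(q) > min(p): return "second"
        return None

    print(add(P_X, P_not_X))     # X and not-X sum to 1 exactly in each component
    print(prefer(P_X, P_not_X))  # None: indifferent, both minima are 2/5
    print(prefer(P_Y, P_X))      # 'first': 1/2 exactly beats 1/2 +- 1/10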

Which leads to the first caveat: sometimes, given two options, it's strictly better to randomise. Let's suppose Y represents a fair coin. So P(Y) = 1/2 exactly, as we said. But also, Y is independent of X. P(X and Y) = 1/4 +- 1/20, and so on. This means that P((X and not Y) or (Y and not X)) = 1/2 exactly also. So we're indifferent between a bet on X and a bet on not X, but we strictly prefer the randomised bet.

In general, randomisation will be strictly better if you have two choices with overlapping intervals of opposite orientations. The best randomisation ratio will be the one that gives a bet with zero-length interval.
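And the randomisation caveat in the same sketch style (again with invented helper names), where Y is an independent fair coin with exactly known probability:

    from fractions import Fraction as F

    P_X     = (F(3, 5), F(2, 5))     # 1/2 +- 1/10
    P_not_X = (F(2, 5), F(3, 5))     # 1/2 -+ 1/10

    def scale(p, c):
        # Conjunction with an independent, exactly known event of probability c.
        return (p[0] * c, p[1] * c)

    def add(p, q):
        return (p[0] + q[0], p[1] + q[1])

    # Bet on X if the coin lands tails, on not-X if it lands heads:
    # P((X and not Y) or (Y and not X)) with P(Y) = 1/2 exactly.
    P_randomised = add(scale(P_X, F(1, 2)), scale(P_not_X, F(1, 2)))

    print(P_randomised)                 # a zero-length interval: 1/2 exactly in each component
    print(min(P_randomised), min(P_X))  # 1/2 versus 2/5: the randomised bet is strictly better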

Now let us reconsider the Ellsberg urn. We did say the urn can be a metaphor for various situations. Generally these situations will not be symmetrical. But, even in symmetrical scenarios, we should still re-think how we apply the principle of indifference. I argue that the underlying idea is really this: if our information has a symmetry, then our decisions should have that same symmetry. If we switch green and blue, our information about the Ellsberg urn doesn't change. The situation is indistinguishable, so we should behave the same way. It follows that we should be indifferent between a bet on green and a bet on blue. Then, for the Bayesian, it follows that P(red) = P(green) = P(blue) = 1/3. Period.

But for us, there is a degree of freedom, even in this symmetrical situation. We know what the probability of red is, so of course P(red) = 1/3 exactly. But we can set, say7, P(green) = 1/3 +- 1/9, and P(blue) = 1/3 -+ 1/9. So we get P(red or green) = 2/3 +- 1/9, P(red or blue) = 2/3 -+ 1/9, P(green or blue) = 2/3 exactly, and of course P(red or green or blue) = 1 exactly.

So: red is 1/3 exactly, but the minimum of green is 2/9. (green or blue) is 2/3 exactly, but the minimum of (red or blue) is 5/9. So choose red over green, and (green or blue) over (red or blue). That's the paradoxical behaviour. Note that neither pair of choices offered in the Ellsberg paradox has the type of overlap that favours randomisation.

Once we have a decision procedure for the two-outcome case, then we can tack on any utility function, as I explained in the previous post. The result here is what you would expect: we get oriented expected utility intervals, obtained by multiplying the oriented probability intervals by the utility. When deciding, we pick the one whose interval has the largest minimum. So for example, a bet which pays 15U on red (using U for "utils", the abstract units of measurement of the utility function) has expected utility 5U exactly. A bet which pays 18U on green has expected utility 6U +- 2U, the minimum is 4U. So pick the bet on red over that.
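Here are those urn numbers checked mechanically, in the same sketch style (helper names mine): the minima that drive the paradoxical choices, and the two expected-utility intervals just mentioned.

    from fractions import Fraction as F

    P = {
        "red":   (F(1, 3), F(1, 3)),                       # 1/3 exactly
        "green": (F(1, 3) + F(1, 9), F(1, 3) - F(1, 9)),   # 1/3 +- 1/9
        "blue":  (F(1, 3) - F(1, 9), F(1, 3) + F(1, 9)),   # 1/3 -+ 1/9
    }

    def union(*events):
        # Disjoint events: add componentwise.
        return (sum(P[e][0] for e in events), sum(P[e][1] for e in events))

    # The paradoxical pattern: compare minima.
    print(min(P["red"]), min(P["green"]))                          # 1/3 versus 2/9: take red
    print(min(union("green", "blue")), min(union("red", "blue")))  # 2/3 versus 5/9: take green-or-blue

    # Tacking on a utility function: multiply the pair by the payoff.
    def expected_utility(pair, payoff):
        return (pair[0] * payoff, pair[1] * payoff)

    eu_red   = expected_utility(P["red"], 15)    # 5U exactly
    eu_green = expected_utility(P["green"], 18)  # 6U +- 2U
    print(min(eu_red), min(eu_green))            # 5 versus 4: take the bet on red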

Operationally, probability is associated with the "fair price" at which we are willing to bet. A probability interval indicates that there is no fair price. Instead we have a spread: we buy bets at their low price and sell at their high price. At least, we do that if we have no outstanding bets, or more generally, if the expected utility interval on our outstanding bets has zero-length. The second caveat is that if this interval has length, then it affects our price: we also sell bets of the same orientation at their low price, and buy bets of the opposite orientation at their high price, until the length of this interval is used up. The midpoint of the expected utility interval on our outstanding bets will be irrelevant.

This can be confusing, so it's time for an analogy.

Bootsianism

If you are Bayesian and risk-neutral (and if bets pay in "utils" rather than cash, you are risk-neutral by definition) then outstanding bets have no effect on further betting behaviour. However, if you are risk-averse, as is the most common case, then this is no longer true. The more money you've already got on the line, the less willing you will be to bet.

But besides risk attitude, there could also be interference effects from non-monetary payouts. For example, if you are dealing in boots, then you wouldn't buy a single boot for half the price of a pair, and neither would you sell one of your boots for half the price of a pair. Unless you happened to already have unmatched boots, then you would sell those at a lower price, or buy boots of the opposite orientation at a higher price, until you had no more unmatched boots. If you were otherwise risk-neutral with respect to boots, then your behaviour would not depend on the number of pairs you have, just on the number and orientation of your unmatched boots.

This closely resembles the non-Bayesian behaviour above. In fact, for the Ellsberg urn, we could just say that a bet on red is worth a pair of boots, a bet on green is worth two left boots, and a bet on blue is worth two right boots. Without saying anything further, it's clear that we would strictly prefer red (a pair) over green (two lefts), but we would also strictly prefer green-or-blue (two pairs) over red-or-blue (one left and three rights). That's the paradoxical behaviour, but you know you can't money-pump boots.

A: I'll buy that pair of boots for 30 zorkmids.
B: Okay, here's your pair of boots.
A: And here's your 30 zorkmids. Thank you.
B: Thank you. Say, didn't you just buy an identical pair this morning?
A: Yeah, I did. Then a dingo ate the right one. I've got the left one here. Never worn.
B: How narratively convenient! How much would you sell it for?
A: Hmm, how about 10 zorkmids?
B: Really, 10 zorkmids? So, do you think right boots are more valuable than left boots?
A: No, of course not. Why?
B: Arbitrage!
A: Gesundheit.
B: Thanks. I'll buy a left boot from you for 10 zorkmids.
A: Great! Here's your left boot.
B: And here's your 10 zorkmids. Thank you.
A: Thank you!
B: And I'll buy a right boot from you for 10 zorkmids.
A: Errrm... Sorry? Why would I agree to that?
B: You just sold me a left boot for 10 zorkmids. Well, you yourself said rights aren't more valuable than lefts. So, logically, you should be willing to sell me a right boot for 10 zorkmids.
A: What? No.

Boots' rule

So much for the static case. But what do we do with new information? How do we handle conditional probabilities?

We still get P(A|B) by dividing P(A and B) by P(B). It will be easier to think in terms of pairs here. So for example P(red) = 1/3 exactly = (1/3,1/3) and P(red or green) = 2/3 +- 1/9 = (7/9,5/9), so P(red|red or green) = (3/7,3/5) = 18/35 -+ 3/35. And similarly P(green|red or green) = (1/3 +- 1/9)/(2/3 +- 1/9) = 17/35 +- 3/35.
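The same conditional-probability arithmetic, checked in code (a sketch; the componentwise division is the whole rule):

    from fractions import Fraction as F

    def conditional(p_a_and_b, p_b):
        # Bayes' rule applied to each side of the pair separately.
        return (p_a_and_b[0] / p_b[0], p_a_and_b[1] / p_b[1])

    P_red          = (F(1, 3), F(1, 3))   # 1/3 exactly; red is contained in (red or green),
                                          # so P(red and (red or green)) is just P(red)
    P_green        = (F(4, 9), F(2, 9))   # 1/3 +- 1/9
    P_red_or_green = (F(7, 9), F(5, 9))   # 2/3 +- 1/9

    print(conditional(P_red, P_red_or_green))    # (3/7, 3/5), i.e. 18/35 -+ 3/35
    print(conditional(P_green, P_red_or_green))  # (4/7, 2/5), i.e. 17/35 +- 3/35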

This rule covers the dynamic passive case, where we update probabilities based on what we observe, before betting. The third and final caveat is in the active case, when information comes in between bets. Now, we saw that the length and orientation of the interval on expected utility of outstanding bets affects further betting behaviour. There is actually a separate update rule for this quantity. It is about as simple as it gets: do nothing. The interval can change when we make choices, and its midpoint can shift due to external events, but its length and orientation do not update.

You might expect the update rule for this quantity to follow from the way the expected utility updates, which follows from the way probability updates. But it has a mind of its own. So even if we are keeping track of our bets, we'd still need to keep track of this extra variable separately.

Sometimes it may be easier to think in terms of the total expected utility interval of our outstanding bets, but sometimes it may be easier to think of this in terms of having a "virtual" interval that cancels the change in the length and orientation of the "real" expected utility interval. The midpoint of this virtual interval is irrelevant and can be taken to always be zero. So, on update, compute the prior expected utility interval of outstanding bets, subtract the posterior expected utility interval from it, and add this difference to the virtual interval. Reset its midpoint to zero, keeping only the length and orientation.

That can also be confusing, so let's have another analogy.

Yo' mama's so illogical...

I recently came across this example by Mark Machina:

M: Children, I only have one treat, I can only give it to one of you.
I: Me, mama!
J: No, give it to me!
M: No. Rather than give it to either of you, it's better if I toss a coin. Heads, it goes to Irina, tails, it goes to Joey.
...
M: Heads. Irina gets it.
J: But mama!
M: Fair is fair.
I: Yeah Joey!
J: But mama, you yourself said it's better to toss a coin than to give it to either of us. So, logically, instead of giving it to Irina you should toss a coin again.
M: Nice try, Joey.

Instead of giving the treat to either child, she strictly prefers to toss a coin and give the treat to the winner. But after the coin is tossed, she strictly prefers to give the treat to the winner rather than toss again.

This cannot be explained in terms of maximising expected utility, in the typical sense of "utility". And of course only known probabilities are involved here, so there's no question as to whether her beliefs are probabilistically sophisticated or not. But it could be said that she is still maximising the expected value of an extended objective function. This extended objective function does not just consider who gets a treat, but also considers who "had a fair chance". She is unfair if she gives the treat to either child outright, but fair if she tosses a coin. That fairness doesn't go away when the result of the coin toss is known.

Or something like that. There are surely other ways of dissecting the mother's behaviour. But no matter what, it's going to have to take the coin toss into account, even though the coin, in and of itself, has no relevance to the situation.

Let's go back to the urn. Green and blue have the type of overlap that favours randomisation: P((green and heads) or (blue and tails)) = 1/3 exactly. A bet paying 9U on this event has expected utility of 3U exactly. Let's say we took this bet. Now say the coin comes up heads. We can update the probabilities as per above. The answer is that P(green) = 1/3 +- 1/9 as it was before. That makes sense because it's an independent event: knowing the result of the coin toss gives no information about the urn. The difference is that we now have an outstanding bet that pays 9U if the ball is green. The expected utility would therefore be 3U +- 1U. Except, the expected utility interval was zero-length before the coin was tossed, so it remains zero-length. Equivalently, the virtual interval becomes -+ 1U, so that the effective total is 3U exactly. (In this example, the midpoint of the expected utility interval didn't change either. That's not generally the case.) A bet randomised on a new coin toss would have expected utility 3U, plus the virtual interval of -+ 1U, for an effective total of 3U -+ 1U. So we would strictly prefer to keep the bet on green rather than re-randomise.
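Here is that bookkeeping spelled out numerically (a sketch; "real", "virtual", and "effective" are just my labels for the quantities described above):

    from fractions import Fraction as F

    def add(p, q): return (p[0] + q[0], p[1] + q[1])
    def sub(p, q): return (p[0] - q[0], p[1] - q[1])

    def zero_midpoint(p):
        # Keep only the length and orientation of an interval.
        m = (p[0] + p[1]) / 2
        return (p[0] - m, p[1] - m)

    # Before the toss: 9U on (green and heads) or (blue and tails) is worth 3U exactly.
    prior_real = (F(3), F(3))
    virtual    = (F(0), F(0))

    # Heads: the bet is now 9U on green, worth 9 * (1/3 +- 1/9) = 3U +- 1U.
    posterior_real = (F(4), F(2))

    # Update rule: add (prior minus posterior) to the virtual interval, midpoint reset to zero.
    virtual = zero_midpoint(add(virtual, sub(prior_real, posterior_real)))
    print(virtual)                        # -+ 1U, as in the text

    print(add(posterior_real, virtual))   # keeping the bet: effectively 3U exactly

    # Re-randomising on a fresh coin gives a bet worth 3U exactly, plus the same virtual interval.
    print(add((F(3), F(3)), virtual))     # effectively 3U -+ 1U: minimum 2U, so keep the bet on green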

Let's compare this with a trivial example: let's say we took a bet that pays 9U if the ball drawn from the urn is green. The expected utility of this bet is 3U +- 1U. For some unrelated reason, a coin is tossed, and it comes up heads. The coin has also nothing to do with the urn or my bet. I still have a bet of 9U on green, and its expected utility is still 3U +- 1U.

But the difference between these two examples is just in the counterfactual: if the coin had come up tails, in the first example I would have had a bet of 9U on blue, and in the second example I would have had a bet of 9U on green. But the coin came up heads, and in both examples I end up with a bet of 9U on green. The virtual interval has some spooky dependency on what could have happened, just like "had a fair chance". It is the ghost of a departed bet.

I expect many on LW are wondering what happened. There was supposed to be a proof that anything that isn't Bayesian can be punished. Actually, this threat comes with some hidden assumptions, which I hope these analogies have helped to illustrate. A boot is an example of something which has no fair price, even if a pair of boots has one. A mother with two children and one treat is an example where some counterfactuals are not forgotten. The hidden assumptions fail in our case, just as they can fail in these other contexts where Bayesianism is not at issue. This can be stated more rigorously8, but that is basically how it's possible. Now We Know. And Knowing is Half the Battle.

Notes

  1. Taken almost verbatim from Eliezer Yudkowsky's post on the Allais paradox.
  2. And footnotes pointing to some tangentially relevant journal articles make me sound extra credible.
  3. For Choquet expected utility see: D. Schmeidler, Subjective probability and expected utility without additivity, Econometrica 57 (1989) pp 571-587.
  4. For maxmin expected utility see: I. Gilboa and D. Schmeidler, Maxmin expected utility with a non-unique prior, J. Math. Econ. 18 (1989) pp 141-153.
  5. For multiplier preferences see: L.P. Hansen and T.J. Sargent, Robust control and model uncertainty, Amer. Econ. Rev. 91 (2001) pp 60-66.
  6. For variational preferences see: F. Maccheroni, M. Marinacci, and A. Rustichini, Dynamic variational preferences, J. Econ. Theory 128 (2006) pp 4-44.
  7. Any length between 0 and 1/3 works. But here's where I pulled 1/9 from: a Bayesian might assign exactly 1/61 prior probability to the 61 possible urn compositions, and the result is roughly approximated by the Laplacian rule of succession, which prescribes a pseudocount of one green and one blue ball. A similar thing with probability intervals is roughly approximated by using a pseudocount of 3/2 +- 1/2 green and 3/2 -+ 1/2 blue balls.
  8. To quickly relate this back to Savage's rules: rules 1 and 3 guarantee that there's no static money pump. Rule 2 then is supposed to guarantee that there is no dynamic money pump. But it is stronger than necessary for that purpose. I claim that this method obeys rules 1, 3, and a weaker version of rule 2, and that it is dynamically consistent. For dynamic consistency of variational preferences in general, see footnotes above. This method is a special case, for which I wrote up a simpler proof.

Appendix A: method summary

  • Events are assigned a pair of prior probabilities, which can also be thought of as an oriented probability interval. e.g. (3/5,2/5) can also be thought of as 1/2 +- 1/10.
  • Neither side of the pair can be 0 or 1, except when they're both 0 or both 1.
  • Each side of the pair is additive: if A and B are disjoint, and P(A) = (x,y), and P(B) = (u,v), then P(A or B) = (x+u,y+v).
  • Each side of the pair updates by Bayes' rule: if P(A and B) = (x,y), and P(B) = (u,v), then P(A|B) = (x/u,y/v).
  • Given a utility function, each bet will then have an expected utility interval: multiply the probability intervals by the utility for each possible outcome.
  • There is also a virtual expected utility interval to keep track of. The midpoint of this interval is always zero.
  • When updating the virtual expected utility interval, compute the prior expected utility interval of the outstanding bet(s), subtract the posterior expected utility interval from it, and add this difference to the virtual expected utility interval. Throw away the midpoint (reset the midpoint of the interval to zero, keeping just the length and orientation).
  • To decide among bets: compute the expected utility intervals of each of them -- including already outstanding bets, and including the virtual expected utility interval. Rank them according to the minimum values of the intervals.
  • Implicitly when presented with options we are also presented with the option to randomise among them, and sometimes this is strictly better than any of the pure options.

Appendix B: obligatory image for LW posts on this topic

All your Bayes are belong to us

The Savage theorem and the Ellsberg paradox

13 fool 14 January 2012 07:06PM

Followup to: A summary of Savage's foundation for probability and utility.

In 1961, Daniel Ellsberg, most famous for leaking the Pentagon Papers, published the decision-theoretic paradox which is now named after him 1. It is a cousin to the Allais paradox. They both involve violations of an independence or separability principle. But they go off in different directions: one is a violation of expected utility, while the other is a violation of subjective probability. The Allais paradox has been discussed on LW before, but when I do a search it seems that the first discussion of the Ellsberg paradox on LW was my comments on the previous post 2. It seems to me that from a Bayesian point of view, the Ellsberg paradox is the greater evil.

But I should first explain what I mean by a violation of expected utility versus subjective probability, and for that matter, what I mean by Bayesian. I will explain a special case of Savage's representation theorem, which focuses on the subjective probability side only. Then I will describe Ellsberg's paradox. In the next episode, I will give an example of how not to be Bayesian. If I don't get voted off the island at the end of this episode.

Rationality and Bayesianism

Bayesianism is often taken to involve the maximisation of expected utility with respect to a subjective probability distribution. I would argue this label only sticks to the subjective probability side. But mainly, I wish to make a clear division between the two sides, so I can focus on one.

Subjective probability and expected utility are certainly related, but they're still independent. You could be perfectly willing and able to assign belief numbers to all possible events as if they were probabilities. That is, your belief assignment obeys all the laws of probability, including Bayes' rule, which is, after all, what the -ism is named for. You could do all that, but still maximise something other than expected utility. In particular, you could combine subjective probabilities with prospect theory, which has also been discussed on LW before. In that case you may display Allais-paradoxical behaviour but, as we will see, not Ellsberg-paradoxical behaviour. The rationalists might excommunicate you, but it seems to me you should keep your Bayesianist card.

On the other hand your behaviour could be incompatible with any subjective probability distribution. But you could still maximise utility with respect to something other than subjective probability. In particular, when faced with known probabilities, you would be maximising expected utility in the normal sense. So you can not exhibit any Allais-paradoxical behaviour, because the Allais paradox involves only objective lotteries. But you may exhibit, as we will see, Ellsberg-paradoxical behaviour. I would say you are not Bayesian.

So a non-Bayesian, even the strictest frequentist, can still be an expected utility maximiser, and a perfect Bayesian need not be an expected utility maximiser. What I'm calling Bayesianist is just the idea that we should reason with our subjective beliefs the same way that we reason with objective probabilities. This also has been called having "probabilistically sophisticated" beliefs, if you prefer to avoid the B-word, or don't like the way I'm using it.

In a lot of what follows, I will bypass utility by only considering two outcomes. Utility functions are only unique up to a constant offset and a positive scale factor. With two outcomes, they evaporate entirely. The question of maximising expected utility with respect to a subjective probability distribution reduces to the question of maximising the probability, according to that distribution, of getting the better of the two outcomes. (And if the two outcomes are equal, there is nothing to maximise.)

And on the flip side, if we have a decision method for the two-outcome case, Bayesian or otherwise, then we can always tack on a utility function. The idea of utility is just that any intermediate outcome is equivalent to an objective lottery between better and worse outcomes. So if we want, we can use a utility function to reduce a decision problem with any (finite) number of outcomes to a decision problem over the best and worst outcomes in question.

Savage's representation theorem

Let me recap some of the previous post on Savage's theorem. How might we defend Bayesianism? We could invoke Cox's theorem. This starts by assuming possible events can be assigned real numbers corresponding to some sort of belief level on someone's part, and that there are certain functions over these numbers corresponding to logical operations. It can be proven that, if someone's belief functions obey some simple rules, then that person acts as if they were reasoning with subjective probability. Now, while the rules for belief functions are intuitive, the background assumptions are pretty sketchy. It is not at all clear why these mathematical constructs are requirements of rationality.

One way to justify those constructs is to argue in terms of choices a rational person must make. We imagine someone is presented with choices among various bets on uncertain events. Their level of belief in these events can be gauged by which bets they choose. But if we're going to do that anyway, then, as it turns out, we can just give some simple rules directly about these choices, and bypass the belief functions entirely. This was Leonard Savage's approach 3. To quote a comment on the previous post: "This is important because agents in general don't have to use beliefs or goals, but they do all have to choose actions."

Savage's approach actually covers both subjective probability and expected utility. The previous post discusses both, whereas I am focusing on the former. This lets me give a shorter exposition, and I think a clearer one.

We start by assuming some abstract collection of possible bets. We suppose that when you are offered two bets from this collection, you will choose one over the other, or express indifference.

As discussed, we will only consider two outcomes. So all bets have the same payout, the difference among them is just their winning conditions. It is not specified what it is that you win. But it is assumed that, given the choice between winning unconditionally and losing unconditionally, you would choose to win.

It is assumed that the collection of bets form what is called a boolean algebra. This just means we can consider combinations of bets under boolean operators like "and", "or", or "not". Here I will use brackets to indicate these combinations. (A or B) is a bet that wins under the conditions that make either A win, or B win, or both win. (A but not B) wins whenever A wins but B doesn't. And so on.

If you are rational, your choices must, it is claimed, obey some simple rules. If so, it can be proven that you are choosing as if you had assigned subjective probabilities to bets. Savage's axioms for choosing among bets are 4:

  1. If you choose A over B, you shall not choose B over A; and, if you do not choose A over B, and do not choose B over C, you shall not choose A over C.
  2. If you choose A over B, you shall also choose (A but not B) over (B but not A); and conversely, if you choose (A but not B) over (B but not A), you shall also choose A over B.
  3. You shall not choose A over (A or B).
  4. If you choose A over B, then you shall be able to specify a finite sequence of bets C1, C2, ..., Cn, such that it is guaranteed that one and only one of the C's will win, and such that, for any one of the C's, you shall still choose (A but not C) over (B or C).

Rule 1 is a coherence requirement on rational choice. It requires your preferences to be a total pre-order. One objection to Cox's theorem is that levels of belief could be incomparable. This objection does not apply to rule 1 in this context because, as we discussed above, we're talking about choices of bets, not beliefs. Faced with choices, we choose. A rational person's choices must be non-circular.

Rule 2 is an independence requirement. It demands that when you compare two bets, you ignore the possibilty that they could both win. In those circumstances you would be indifferent between the two anyway. The only possibilities that are relevant to the comparison are the ones where one bet wins and the other doesn't. So, you ought to compare A to B the same way you compare (A but not B) to (B but not A). Savage called this rule the Sure-thing principle.

Rule 3 is a dominance requirement on rational choice. It demands that you not choose something that cannot do better under any circumstance: whenever A would win, so would (A or B). Note that you might judge (B but not A) to be impossible a priori. So, you might legitimately express indifference between A and (A or B). We can only say it is never legitimate to choose A over (A or B).

Rule 4 is the most complicated. Luckily it's not going to be relevant to the Ellsberg paradox. Call it Mostly Harmless and forget this bit if you want.

What rule 4 says is that if you choose A over B, you must be willing to pay a premium for your choice. Now, we said there are only two outcomes in this context. Here, the premium is paid in terms of other bets. Rule 4 demands that you give a finite list of mutually exclusive and exhaustive events, and still be willing to choose A over B if we take any event on your list, cut it from A, and paste it to B. You can list as many events as you need to, but it must be a finite list.

For example, if you thought A was much more likely than B, you might pull out a die, and list the 6 possible outcomes of one roll. You would also be willing to choose (A but not a roll of 1) over (B or a roll of 1), (A but not a roll of 2) over (B or a roll of 2), and so on. If not, you might list the 36 possible outcomes of two consecutive rolls, and be willing to choose (A but not two rolls of 1) over (B or two rolls of 1), and so on. You could go to any finite number of rolls.

In fact rule 4 is pretty liberal, it doesn't even demand that every event on your list be equiprobable, or even independent of the A and B in question. It just demands that the events be mutually exclusive and exhaustive. If you are not willing to specify some such list of events, then you ought to express indifference between A and B.

If you obey rules 1-3, then that is sufficient for us to construct a sort of qualitative subjective probability out of your choices. It might not be quantitative: for one thing, there could be infinitesimally likely beliefs. Another thing is that there might be more than one way to assign numbers to beliefs. Rule 4 takes care of these things. If you obey rule 4 also, then we can assign a subjective probability to every possible bet, prove that you choose among bets as if you were using those probabilities, and also prove that it is the only probability assignment that matches your choices. And, on the flip side, if you are choosing among bets based on a subjective probability assignment, then it is easy to prove you obey rules 1-3, as well as rule 4 if the collection of bets is suitably infinite, like if a fair die is available to bet on.
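To make the "flip side" concrete, here is a brute-force check (a sketch with invented names; rule 4 is skipped because it needs a suitably infinite collection of bets) that choices driven by a subjective probability assignment over a small three-state space obey rules 1-3:

    from fractions import Fraction as F
    from itertools import product

    states = ("red", "green", "blue")
    prob   = {"red": F(1, 3), "green": F(2, 9), "blue": F(4, 9)}   # any assignment summing to 1 works

    events = [frozenset(s for s, keep in zip(states, bits) if keep)
              for bits in product([False, True], repeat=3)]

    def P(event):
        return sum(prob[s] for s in event)

    def chooses_over(a, b):
        # "Choose A over B" = strictly higher probability of winning.
        return P(a) > P(b)

    for a, b, c in product(events, repeat=3):
        # Rule 1: choices are not circular.
        if not chooses_over(a, b) and not chooses_over(b, c):
            assert not chooses_over(a, c)

    for a, b in product(events, repeat=2):
        # Rule 2 (Sure-thing principle): compare A to B as you compare (A but not B) to (B but not A).
        assert chooses_over(a, b) == chooses_over(a - b, b - a)
        # Rule 3: never choose A over (A or B).
        assert not chooses_over(a, a | b)

    print("Rules 1-3 hold for these choices.")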

Savage's theorem is impressive. The background assumptions involve just the concept of choice, and no numbers at all. There are only a few simple rules. Even rule 4 isn't really all that hard to understand and accept. A subjective probability distribution appears seemingly out of nowhere. In the full version, a utility function appears out of nowhere too. This theorem has been called the crowning glory of decision theory.

The Ellsberg paradox

Let's imagine there is an urn containing 90 balls. 30 of them are red, and the other 60 are either green or blue, in unknown proportion. We will draw a ball from the urn at random. Let us bet on the colour of this ball. As above, all bets have the same payout. To be specific, let's say you get pie if you win, and a boot to the head if you lose. The first question is: do you prefer to bet that the colour will be red, or that it will be green? The second question is: do you prefer to bet that it will be (red or blue), or that it will be (green or blue)?

The most common response5 is to choose red over green, and (green or blue) over (red or blue). And that's all there is to it. Paradox! 6

                 30                    60
              Red          Green          Blue
    A         pie          BOOT           BOOT          A is preferred to B
    B         BOOT         pie            BOOT
    C         pie          BOOT           pie           D is preferred to C
    D         BOOT         pie            pie
                                                        Paradox!

 

If choices were based solely on an assignment of subjective probability, then because the three colours are mutually exclusive, P(red or blue) = P(red) + P(blue), and P(green or blue) = P(green) + P(blue). So, since P(red) > P(green), we should have P(red or blue) > P(green or blue); but the expressed preferences instead give P(red or blue) < P(green or blue).

Knowing Savage's representation theorem, we expect to get a formal contradiction from the 4 rules above plus the 2 expressed choices. Something has to give, so we'd like to know which rules are really involved. You can see that we are talking only about rule 2, the Sure-thing principle. It says we shall compare (red or blue) to (green or blue) the same way as we compare red to green.

This behaviour has been called ambiguity aversion. Now, perhaps this is just a cognitive bias. It wouldn't be the first time that people behave a certain way, but the analysis of their decisions shows a clear error. And indeed, when explained, some people do repent of their sins against Bayes. They change their choices to obey rule 2. But others don't. To quote Ellsberg:

...after rethinking all their 'offending' decisions in light of [Savage's] axioms, a number of people who are not only sophisticated but reasonable decide that they wish to persist in their choices. This includes people who previously felt a 'first order commitment' to the axioms, many of them surprised and some dismayed to find that they wished, in these situations, to violate the Sure-thing Principle. Since this group included L.J. Savage, when last tested by me (I have been reluctant to try him again), it seems to deserve respectful consideration.

I include myself in the group that thinks rule 2 is what should be dropped. But I don't have any dramatic (de-)conversion story to tell. I was somewhat surprised, but not at all dismayed, and I can't say I felt much if any prior commitment to the rules. And as to whether I'm sophisticated or reasonable, well never mind! Even if there are a number of other people who are all of the above, and even if Savage himself may have been one of them for a while, I do realise that smart people can be Just Plain Wrong. So I'd better have something more to say for myself.

Well, red obviously has a probability of 1/3. Our best guess is to apply the principle of indifference and also assign probability 1/3 to each of green and blue. But our best guess is not necessarily a good guess. The probabilities we assign to red, and to (green or blue), are objective. We're guessing the probability of green, and of (red or blue). It seems wise to take this difference into account when choosing what to bet on, doesn't it? And surely it will be all the more wise when dealing with real-life, non-symmetrical situations where we can't even appeal to the principle of indifference.

Or maybe I'm just some fool talking jibba jabba. Against this sort of talk, the LW post on the Allais paradox presents a version of Howard Raiffa's dynamic inconsistency argument. This makes no reference to internal thought processes; it is a purely external argument about the decisions themselves. As stated in that post, "There is always a price to pay for leaving the Bayesian Way." 7 This is expanded upon in an earlier post:

Sometimes you must seek an approximation; often, indeed. This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.

Bayesianism's coherence and uniqueness proofs cut both ways ... anything that is not Bayesian must fail one of the coherency tests. This, in turn, opens you to punishments like Dutch-booking (accepting combinations of bets that are sure losses, or rejecting combinations of bets that are sure gains).

Now even if you believe this about the Allais paradox, I've argued that this doesn't really have much to do with Bayesianism one way or the other. The Ellsberg paradox is what actually strays from the Path. So, does God also punish ambiguity aversion?

Tune in next time8, when I present a two-outcome decision method that obeys rules 1, 3, and 4, and even a weaker form of rule 2. But it exhibits ambiguity aversion, in gross violation of the original rule 2, so that it's not even approximately Bayesian. I will try to present it in a way that advocates for its internal cognitive merit. But the main thing 9 is that, externally, it is dynamically consistent. We do not get booked, by the Dutch or any other nationality.

Notes

 

  1. Ellsberg's original paper is: Risk, ambiguity, and the Savage axioms, Quarterly Journal of Economics 75 (1961) pp 643-669
  2. Some discussion followed, in which I did rather poorly. Actually I had to admit defeat. Twice. But, as they say: fool me once, shame on me; fool me twice, won't get fooled again!
  3. Savage presents his theorem in his book: The Foundations of Statistics, Wiley, New York, 1954.
  4. To compare to Savage's setup: for the two-outcome case, we deal directly with "actions" or equivalently "events", here called "bets". We can dispense with "states"; in particular we don't have to demand that the collection of bets be countably complete, or even a power-set algebra of states, just that it be some boolean algebra. Savage's axioms of course have a descriptive interpretation, but it is their normativity that is at issue here, so I state them as "you shall". Rules 1-3 are his P1-P3, and 4 is P6. P4 and P7 are irrelevant in the two-outcome case. P5 is included in the background assumption that you would choose to win. I do not call this normative, because the payoff wasn't specified.
  5. Ellsberg originally proposed this just as a thought experiment, and canvassed various victims for their thoughts under what he called "absolutely non-experimental conditions". He used $100 and $0 instead of pie and a boot to the head. Which is dull of course, but it shouldn't make a difference10. The experiment has since been repeated under more experimental conditions. The experimenters also invariably opt for the more boring cash payouts.
  6. Some people will say this isn't "really" a paradox. Meh.
  7. Actually, I inserted "to pay". It wasn't in the original post. But it should have been.
  8. Sneak preview
  9. As a great decision theorist once said, "Stupid is as stupid does."
  10. ...or should it? Savage's rule P4 demands that it shall not. And the method I have in mind obeys this rule. But it turns out this is another rule that God won't enforce. And that's yet another post, if I get to it at all.

 

Poker with Lennier

15 HonoreDB 15 November 2011 10:21PM

In J. Michael Straczynski's science fiction TV show Babylon 5, there's a character named Lennier. He's pretty Spock-like: he's a long-lived alien who avoids displaying emotion and feels superior to humans in intellect and wisdom. He's sworn to always speak the truth. In one episode, he and another character, the corrupt and rakish Ambassador Mollari, are chatting. Mollari is bored. But then Lennier mentions that he's spent decades studying probability. Mollari perks up, and offers to introduce him to this game the humans call poker.

continue reading »

Revisiting the Anthropic Trilemma II: axioms and assumptions

4 Stuart_Armstrong 16 February 2011 09:42AM

tl;dr: I present four axioms for anthropic reasoning under copying/deleting/merging, and show that these result in a unique way of doing it: averaging non-indexical utility across copies, adding indexical utility, and having all copies be mutually altruistic.

Some time ago, Eliezer constructed an anthropic trilemma, where standard theories of anthropic reasoning seemed to come into conflict with subjective anticipation. rwallace subsequently argued that subjective anticipation was not ontologically fundamental, so we should not expect it to work outside the narrow confines of everyday experience, and Wei illustrated some of the difficulties inherent in "copy-delete-merge" types of reasoning.

Wei also made the point that UDT shifts the difficulty in anthropic reasoning away from probability and onto the utility function, and ata argued that neither the probabilities nor the utility function are fundamental, that it was the decisions that resulted from them that were important - after all, if two theories give the same behaviour in all cases, what grounds do we have for distinguishing them? I then noted that this argument could be extended to subjective anticipation: instead of talking about feelings of subjective anticipation, we could replace it by questions such as "would I give up a chocolate bar now for one of my copies to have two in these circumstances?"

I then made a post where I applied my current intuitions to the anthropic trilemma, and showed how this results in complete nonsense, despite the fact that I used a bona fide utility function. What we need are some sensible criteria for how to divide utility and probability between copies, and this post is an attempt to figure that out. The approach is similar to expected utility, where a quartet of natural axioms forces all decision processes to have a single format.

The assumptions are:

  1. No intrinsic value in the number of copies
  2. No preference reversals
  3. All copies make the same personal indexical decisions
  4. No special status to any copy.

continue reading »

Dutch Books and Decision Theory: An Introduction to a Long Conversation

19 Jack 21 December 2010 04:55AM

For a community that endorses Bayesian epistemology we have had surprisingly few discussions about the most famous Bayesian contribution to epistemology: the Dutch Book arguments. In this post I present the arguments, but it is far from clear yet what the right way to interpret them is or even if they prove what they set out to. The Dutch Book arguments attempt to justify the Bayesian approach to science and belief; I will also suggest that any successful Dutch Book defense of Bayesianism cannot be disentangled from decision theory. But mostly this post is to introduce people to the argument and to get people thinking about a solution. The literature is scant enough that it is plausible people here could actually make genuine progress, especially since the problem is related to decision theory.1

Bayesianism fits together. Like a well-tailored jacket it feels comfortable and looks good. It's an appealing, functional aesthetic for those with cultivated epistemic taste. But sleekness is not a rigorous justification and so we should ask: why must the rational agent adopt the axioms of probability as conditions for her degrees of belief? Further, why should agents accept the principle of conditionalization as a rule of inference? These are the questions the Dutch Book arguments try to answer.

The arguments begin with an assumption about the connection between degrees of belief and willingness to wager. An agent with degree of belief b in hypothesis h is assumed to be willing to pay any price up to and including $b for a unit wager on h (one that pays $1 if h is true), and to sell such a wager at any price down to and including $b. For example, if my degree of belief that I can drink ten eggnogs without passing out is .3, I am willing to bet $0.30 on the proposition that I can drink the nog without passing out when the stakes of the bet are $1. Call this the Will-to-wager Assumption. As we will see it is problematic.
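As a taste of where this goes (a minimal sketch; the numbers, the agent, and the bookie are invented for illustration, not taken from the post), here is the classic Dutch book against someone whose degrees of belief in h and in not-h sum to more than 1, given the Will-to-wager Assumption:

    # An agent whose degrees of belief in h and not-h sum to more than 1, and who bets
    # according to the Will-to-wager Assumption, can be sold a pair of unit wagers that
    # guarantee a loss no matter how h turns out.

    belief_h     = 0.6    # degree of belief in h
    belief_not_h = 0.6    # degree of belief in not-h (violates additivity: the sum is 1.2)
    stake        = 1.0    # each wager pays $1 if it wins

    # Both prices are acceptable to the agent under the Will-to-wager Assumption.
    price_paid = belief_h * stake + belief_not_h * stake

    for h_is_true in (True, False):
        # Exactly one of the two wagers pays off, so the agent collects $1 either way.
        payout = stake
        print(f"h is {h_is_true}: agent's net = {payout - price_paid:+.2f}")   # -0.20 in both cases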

continue reading »
