Dark Arts 101: Be rigorous, on average
I'm reading George Steiner's 1989 book on literary theory, Real Presences. Steiner is a literary theorist who achieved the trifecta of having appointments at Oxford, Cambridge, and Harvard. His book demonstrates an important Dark Arts method of argument.
So far, Steiner's argument appears to be:
 Human language is an undecidable symbol system.
 Every sentence therefore carries with it an infinite amount of meaning, the accumulation of all connotations, contexts, and historical associations invoked, and invoked by those invocations, etc. Alternatively, every sentence contains no meaning at all, since none of those words can refer to things in the world.
 The meaning of a sentence, therefore, is not finite or analyzable, but transcendent.
 The transcendent is the search for God.
 Therefore, all good literature is a search for God.
The critics quoted on the back of the book, and its reviews on Amazon, praise Steiner's rigor and learning. It is impressive. Within a single paragraph he may show the relationship between Homer, 12th-century theological works, Racine, Shakespeare, and Schoenberg. And his care and precision with words is exemplary; I have the impression, even when he speaks of meaning in music or other qualia-laden subjects, that I know exactly what he means.
He was intelligent enough to trace the problems he was grappling with out past the edges of his domain of expertise. The key points of his argument lie not in literary theory, but in information theory, physics, artificial intelligence, computability theory, linguistics, and transfinite math.
Unfortunately, he knows almost nothing about any of those fields, and his language is precise enough to be wrong, which he is when he speaks on any of those subjects. How did he get away with it?
Answer: He took a two-page argument about things he knew little about, spread it across 200 pages, and filled the gaps with tangential statements of impressive rigor and thoroughness on things he was expert in.
Productivity as a function of ability in theoretical fields
I argued in this post that the differences in capability between different researchers are vast (Kaj Sotala provided me with some interesting empirical evidence that backs up this claim). Einstein's contributions to physics or John von Neumann's contributions to mathematics (and a number of other disciplines) are arguably at least hundreds of times greater than that of an average physicist or mathematician.
At the same time, Yudkowsky argues that "in the space of brain designs" the difference between the village idiot and Einstein is tiny. Their brains are extremely similar, with the exception of some "minor genetic tweaks". Hence we get the following picture:
Chocolate Ice Cream After All?
I have collected some thoughts on decision theory and am wondering whether they are any good, or whether I’m just thinking nonsense. I would really appreciate some critical feedback. Please be charitable in terms of language and writing style, as I am not a native English speaker and as this is the first time I am writing such an essay.
Overview
 The classical notion of free will messes up our minds, especially in decision-theoretic problems. Once we come to see it as confused and reject it, we realize that our choices in some sense not only determine the future but also the past.
 If determining the past conflicts with our intuitions of how time behaves, then we need to adapt our intuitions.
 The A,B-Game shows us that, as far as the rejection of free will allows for it, it is in principle possible to choose our genes.
 Screening off only applies if we consider our action to be independent of the variable of interest – at least in expectation.
 When dealing with Newcomb-like problems, we have to be clear about which forecasting powers are at work. Likewise, it turns out to be crucial to point out precisely which agent knows how much about the setting of the game.
 In the standard version of Newcomb’s Soda, one should choose chocolate ice cream – unless the game is specified such that previous subjects (unlike us) did not know of any interdependence between soda and ice cream.
 Variations of Newcomb’s Soda suggest that the evidential approach makes us better off.
 The analysis of Newcomb’s Soda shows that its formulation fundamentally differs from the formulation of Solomon’s Problem.
 Given that all study subjects make persistent precommitments, a proper use of evidential reasoning suggests precommitting to take chocolate ice cream. This is why Newcomb’s Soda does not show that the evidential approach is dynamically inconsistent.
 The tickle defense does not apply to the standard medical version of Solomon’s Problem. In versions where it applies, it does not tell us anything nontrivial.
 Evidential reasoning seems to be a winning approach not only in Newcomb’s Problem, but also in Newcomb’s Soda and in the medical version of Solomon’s Problem. Therefore, we should consider a proper use of evidential reasoning as a potentially promising component when building the ultimate decision algorithm.
In the standard formulation of Newcomb’s Soda, the evidential approach suggests picking chocolate ice cream, since this makes it more probable that we will have been awarded the million dollars. Hence, it denies us the thousand dollars we could actually win if we only took vanilla ice cream. Admittedly, this may be counterintuitive. Common sense tells us that, regarding the thousand dollars, one could still change the outcome, whereas one cannot change which type of soda one has drunk; therefore we have to make a decision that actually affects our outcome. Maybe the flaw in this kind of reasoning doesn’t pose a problem to our intuitions as long as we deal with a “causal-intuition-friendly” setting of numbers. So let’s consider various versions of this problem in order to thoroughly compare the two competing algorithmic traits. Let’s find out which one actually wins and therefore should be implemented by rational agents.
In this post, I will discuss Newcomb-like problems and conclude that the arguments presented support an evidential approach. Various decision problems have shown that plain evidential decision theory is not a winning strategy. I instead propose to include evidential reasoning in more elaborate decision theories, such as timeless decision theory or updateless decision theory, since they also need to come up with an answer in Newcomb-like problems.
By looking at the strategies proposed in those problems, currently discussed decision theories produce outputs that can be grouped into evidential-like and causal-like. I am going to outline which of these two traits a winning decision theory must possess.
Let’s consider the following excerpt by Yudkowsky (2010) about the medical version of Solomon’s Problem:
“In the chewing-gum throat-abscess variant of Solomon’s Problem, the dominant action is chewing gum, which leaves you better off whether or not you have the CGTA gene; but choosing to chew gum is evidence for possessing the CGTA gene, although it cannot affect the presence or absence of CGTA in any way.”
In what follows, I am going to elaborate on why I believe this point (in the otherwise brilliant paper) needs to be reconsidered. Furthermore, I will explore possible objections and have a look at other decision problems that might be of interest to the discussion.
But before we discuss classical Newcomb-like problems, let’s first have a look at the following thought experiment:
The school mark is already settled
Imagine you are going to school; it is the first day of the semester. Suppose you only care about getting the best marks. Now your math teacher tells you that he knows you very well, and that this is why he has already written down the mark you will receive for the upcoming exam. To keep things simple, let’s cut down your options to “study as usual” and “don't study at all”. What are you going to do? Should you study as if you didn’t know about the settled mark? Or should you not study at all, since the mark has already been written down?
This is a tricky question because the answer to it depends on your credence in the teacher’s forecasting power. Therefore let's consider the following two cases:
 Let's assume that the teacher is correct in 100% of the cases. Now we find ourselves in a problem that resembles Newcomb's Problem, since our decision exactly determines the output of his prediction. Just as an agent who really wishes to win the most money should take only one box in Newcomb’s Problem, you should study for the exam as if you didn't know that the marks are already settled. (EDIT: For the record, one can point out a structural (but not relevant) difference between the two problems: Here, the logical equivalences "studying" ↔ "good mark" and "not studying" ↔ "bad mark" are part of the game's assumptions, while the teacher predicts which of these two worlds we live in. In Newcomb's Problem, Omega predicts the logical equivalences of taking boxes and payoffs.)
 Now let's consider a situation where the teacher has no forecasting power at all. In such a scenario the student's future effort is independent of the settled marks; that is, no matter what input the student provides, the teacher's output will have been random. Therefore, if we find ourselves in such a situation, we shouldn't study for the exam and should enjoy the spare time gained.
(Of course we can also think of a case 3) where the teacher's prediction is wrong in 100% of all cases. Let’s specify “wrong”, since marks usually don’t come in binaries: let’s take “wrong” to mean the complementary mark. For instance, the best mark corresponds to the worst, the second best to the second worst, and so on. In such a case, not studying at all and returning an empty exam sheet would determine receiving the best marks. However, this scenario won't be of much interest to us.)
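The three cases can be captured in a few lines of Python. The binary good/bad mark and the accuracy model are my own simplifying assumptions, not part of the thought experiment as stated:

```python
def p_good_mark(study: bool, accuracy: float) -> float:
    """Probability of a good mark in a toy binary model: the teacher
    predicts whether the student studies with the given accuracy and
    writes down 'good' iff he predicts studying."""
    return accuracy if study else 1.0 - accuracy

# Case 1 (accuracy 1.0): studying guarantees the good mark.
# Case 2 (accuracy 0.5): the mark is independent of studying.
# Case 3 (accuracy 0.0): handing in an empty sheet "determines" the best mark.
```

In this toy model, the fatalist (who never studies) only breaks even with the diligent student when the accuracy is exactly 0.5.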
This thought experiment suggests that a deterministic world does not necessarily imply fatalism, since in expectation the fatalist (who wouldn't feel obligated to study because the marks are "already written down") would lose in cases where the teacher predicts better than random. Generally, we can say that – except in case 2) – the student's studying behaviour is relevant for receiving a good mark.
This thought experiment not only makes it clear that determinism does not imply fatalism; it even shows that fatalists tend to lose once they stop investing resources in desirable outcomes. This will be important in subsequent sections. Now let us get to the actual topic of this article, which has already been mentioned as an aside: Newcomb-like problems.
Newcomb’s Problem
The standard version of Newcomb’s Problem has been thoroughly discussed on LessWrong. Many would agree that one-boxing is the correct solution, for one-boxing agents obtain a million dollars, while two-boxers only take home a thousand dollars. To clarify the structure of the problem: an agent chooses between two options, “A+B“ and “B“. Relatively speaking, option B “costs” a thousand dollars, because one abandons the transparent box A containing this amount of money. As we play against the predictor Omega, who has an almost 100% forecasting power, our decision determines what past occurred; that is, we determine whether Omega put a million into box B or not. By “determining” I mean as much as “being compatible with”. Hence, choosing box B is compatible only with a past where Omega put a million into it.
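The payoffs above can be sketched numerically. The symmetric-error model for an imperfect Omega is my own assumption for concreteness, not part of the standard problem:

```python
def newcomb_payoff(one_box: bool, omega_accuracy: float = 1.0) -> float:
    """Expected dollars in Newcomb's Problem, assuming Omega fills
    box B with $1,000,000 iff it predicts one-boxing, and errs
    symmetrically with probability 1 - omega_accuracy."""
    million, thousand = 1_000_000, 1_000
    if one_box:
        return omega_accuracy * million
    return thousand + (1.0 - omega_accuracy) * million

# With a perfect predictor, one-boxers expect $1,000,000
# while two-boxers take home only $1,000.
```

Even at an accuracy of, say, 0.99, the evidential gap between the two options remains enormous, which is why the later sections argue that weakening Omega changes the stakes only gradually.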
Newcomb’s Problem’s Problem of Free Will
To many, Newcomb’s Problem seems counterintuitive. People tend to think: “We cannot change the past, as past events have already happened! So there’s nothing we can do about it. Still, somehow the agents that only choose B become rich. How is this possible?“
This uneasy feeling can be resolved by clarifying the notion of “free will”, i.e. by acknowledging that a world state X either logically implies (hard determinism) or probabilistically suggests (hard incompatibilism, stating that free will is impossible and complete determinism is false) another world state Y or a set of possible world states (Y1, Y2, Y3, ..., Yn) – no matter whether X precedes Y or vice versa. (Paul Almond has shown in his paper on decision theory – unfortunately his page has been down lately – that upholding this distinction does not affect the clarification of free will in decision-theoretic problems. Therefore, I chose to go with hard determinism.)
The fog will lift once we accept the above. Since our action is a subset of a particular world state, the action itself is also implied by preceding world states, that is once we know all the facts about a preceding world state we can derive facts about subsequent world states.
If we look more closely, we cannot really choose in the way people used to think. Common sense tells us that we confront a “real choice” if our decision is not just determined by external factors and also not picked at random, but governed by our free will. But what could this third case even mean? Despite its intuitive usefulness, the classical notion of choice seems to be an ill-defined term, since it requires a problematic notion of free will – one that is supposed to be neither random nor determined.
This is why I want to suggest a new definition of choice: Choosing is the way agents execute what they were determined to do by other world states. Choosing has nothing to do with “changing” what did or is going to happen. The only thing that actually changes is the perception of what did or is going to happen, since executions produce new data points that call for updates.
So unless we could use a “true” random generator (which would only be possible if we did not assume complete determinism to be true) in order to make decisions, what we are going to do is “planned” and determined by preceding and subsequent world states.
If I take box B, then this determines a past world state where Omega has put a million dollars into this box. If I take both box A and B, then this determines a past world state where Omega has left box B empty. Therefore, when it comes to deciding, taking actions that determine (or are compatible with) not only desirable future worlds, but also desirable past worlds are the ones that make us win.
One may object now that we aren’t “really“ determining the past, but only our perception of it. That’s an interesting point. In the next section we are going to have a closer look at that. For now, I’d like to call the underlying perception of time into question. Because once I choose only box B, it seems that the million dollars I receive is not just an illusion of my map but is really out there. Admittedly the past seems unswayable, but this example shows that maybe our conventional perception of time is misleading, as it conflicts with the notion of us choosing what happened in the past.
How come self-proclaimed deterministic non-fatalists in fact are fatalists when they deal with the past? I’d suggest perceiving time not as divided into separate categories like “stuff that has passed” and “stuff that is about to happen”, but rather as one dimension where every dot is just as real as any other, and where the manifestation of one particular dot restrictively determines the set of possible manifestations other dots could embody. It is crucial to note that such a dot would describe the whole world in three spatial dimensions, while subsets of world states could still behave independently.
Perceiving time without an inherent “arrow” is not new to science and philosophy, but still, readers of this post will probably need a compelling reason why this view would be more goal-tracking. Considering Newcomb’s Problem, a reason can be given: Intuitively, the past seems much more “settled” to us than the future. But it seems to me that this notion is confounded, as we often know more about the past than we know about the future. This could tempt us to project this imbalance of knowledge onto the universe, such that we perceive the past as settled and unswayable in contrast to a shapeable future. However, such a conventional set of intuitions conflicts strongly with us picking only one box. These intuitions would tell us that we cannot affect the content of the box; it is already filled or empty, since it was prepared in the now inaccessible past.
Changing the notion of time into one block would lead to “better” intuitions, because they would directly suggest choosing one box, as this action is only compatible with the more desirable past. Therefore we might need to adapt our intuitions, so that the universe looks normal again. To illustrate the ideas discussed above and to put them into practice, I have constructed the following game:
The A,B-Game
You are confronted with Omega, a 100% correct predictor. In front of you are two buttons, A and B. You know that there are two kinds of agents: agents with the gene G_A and agents with the gene G_B. Carriers of G_A are blessed with a life expectancy of 100 years, whereas carriers of G_B die of cancer at the age of 40 on average. Suppose you are much younger than 40. Now Omega predicts that every agent who presses A is a carrier of G_A and every agent who presses B is a carrier of G_B. You can only press one button; which one should it be if you want to live for as long as possible?
People who prefer living a hundred years over forty would press A. They would even pay a lot of money in order to be able to do so. However, one might object that one cannot change or choose one’s genes. Here we need to be clear about which definition of choice we are using. Assuming the conventional one, I would agree that one cannot choose one’s genes; but then, when getting dressed, one could not choose one’s jeans either, as the conventional understanding of choice requires an empty notion of non-random, non-determined free will that is not applicable. Once we use the definition I introduced above, we can say that we choose our jeans. Likewise, we can choose our genes in the A,B-Game. If we one-box in Newcomb’s Problem, we should also press A here, because the two problems are structurally identical (except for the labels “box” versus “gene”).
The notion of objective ambiguity of genes only stands if we believe in some sort of objective ambiguity about which choices will be made. When facing a correct predictor, those of us who believe in indeterministic objective ambiguity of choices have to bite the bullet that their genes would be objectively ambiguous. Such a model seems counterintuitive, but not contradictory. However, I don’t feel forced to adapt this indeterministic view.
Let us focus on the deterministic scenario again: In this case, our past already determined our choice, so there is only one way we will go and only one way we can go.
We don’t know whether we are determined to do A or B. By “choosing” the one action that is compatible only with the more desirable past, we are better off. Just as we don’t know in Newcomb’s Problem whether B is empty or not, we have to behave in a way such that it must have been filled already. From our perspective, with little knowledge about the past, our choice determines the manifestation of our map of the past. Apparently, this is exactly what we do when making choices about the future. Taking actions determines the manifestation of our map of the future. Although the future is already settled, we don’t know yet its exact manifestation. Therefore, from our perspective, it makes sense to act in ways that determine the most desirable futures. This does not automatically imply that some mysterious “change” is going to happen.
In both directions it feels like one would change the manifestation of other world states, but when we look more closely we cannot even spell out what that would mean. The word “change” only starts to become meaningful once we hypothetically compare our world with counterfactual ones (where we were not determined to do what we do in our world). In such a framework we could consistently claim that the content of box B “changes” depending on whether or not we choose only box B.
Screening off
Following this approach of determining one’s perception of the world, the question arises whether every change in perception is actually goal-tracking. We can ask ourselves whether an agent should avoid new information if she knows that the new information has negative news value. For instance, if an agent, suspected of having lung cancer and awaiting the results of her lung biopsy, seeks actions that make more desirable past world states more likely, then she should figure out a way not to receive any mail, for instance by declaring an incorrect postal address. This naive approach obviously fails for lack of proper Bayesian updating. The action “avoiding mail” screens off the desirable outcome, so that once we know about this action, we don’t learn anything about the biopsy in the (very probable) case that we don’t receive any mail.
In the A,B-Game, this doesn’t apply, since we believe Omega’s prediction to be true when it says that A necessarily belongs to G_A and B to G_B. Generally, we can distinguish the cases by clarifying the existing independencies: In the lung cancer case, where we simply don’t know better, we can assume that P(prevention|positive lab result) = P(prevention|negative lab result) = P(prevention). Hence, screening off applies. In the A,B-Game, we should believe that P(Press A|G_A) > P(Press A) = P(Press A|G_A or G_B). We obtain this relevant piece of information thanks to Omega’s forecasting power. Here, screening off does not apply.
Subsequently, one might object that the statement P(Press A|G_A) > P(Press A) leads to a conditional independence as well, at least in cases where not all players who press A necessarily belong to G_A. Then you might be pressing A because of your reasoning R_1, which would screen off pressing A from G_A. A further objection could be that even if one could show a dependency between G_A and R_1, you might be choosing R_1 because of some meta-reasoning R_2 that again provides a reason not to press A. However, considering these objections more thoroughly, we realize that R_1 has to be congruent, or at least evenly associated (in G_A as well as in G_B), with pressing A. The same holds for R_2. If this weren't the case, then we would be talking about another game – a game where we knew, for instance, that 90% of the G_A carriers choose button A (without thinking) because of the gene, and 10% of the G_B carriers choose button A because of some sort of evidential reasoning. Knowing this, choosing A out of evidential reasoning would be foolish, since we already know that only G_B carriers could do that. Once we know this, evidential reasoners would suggest not pressing A (unless B offers an even worse outcome). So these further objections fail as well, as they implicitly change the structure of the discussed problem. We can conclude that no screening off applies as long as an instance with forecasting power tells us that a particular action makes the desirable outcome likelier.
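The distinction between the two cases can be made concrete with two toy joint distributions; the specific numbers are illustrative assumptions of mine, chosen only so the marginals are easy to read off:

```python
def p(joint, pred):
    """Probability of an event under a {outcome: probability} dict."""
    return sum(q for outcome, q in joint.items() if pred(outcome))

def cond(joint, pred, given):
    """Conditional probability P(pred | given)."""
    return p(joint, lambda o: pred(o) and given(o)) / p(joint, given)

# Lung-biopsy case: avoiding mail is independent of the biopsy result,
# so screening off applies.
mail_case = {
    ("avoid mail", "cancer"): 0.15, ("avoid mail", "clear"): 0.35,
    ("receive mail", "cancer"): 0.15, ("receive mail", "clear"): 0.35,
}

# A,B-Game: Omega's prediction ties the action to the gene,
# so P(G_A | press A) > P(G_A) and screening off fails.
ab_game = {
    ("press A", "G_A"): 0.5, ("press B", "G_B"): 0.5,
}

cancer = lambda o: o[1] == "cancer"
avoid = lambda o: o[0] == "avoid mail"
gene_a = lambda o: o[1] == "G_A"
press_a = lambda o: o[0] == "press A"
```

Here `cond(mail_case, cancer, avoid)` equals the unconditional `p(mail_case, cancer)`, while `cond(ab_game, gene_a, press_a)` exceeds `p(ab_game, gene_a)` – exactly the inequality P(Press A|G_A) > P(Press A) from the text.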
Now let’s have a look at an alteration of the A,B-Game in order to figure out whether screening off might apply here.
A Weak Omega in the A,B-Game
Thinking about the A,B-Game, what happens if we decrease Omega’s forecasting power? Let’s assume now that Omega’s prediction is correct in only 90% of all cases. Should this fundamentally change our choice of whether to press A or B, given that we only pressed A as a consequence of our reasoning?
To answer that, we need to be clear about why agents believe in Omega’s predictions. They believe in Omega’s predictions because they were correct so many times. This constitutes Omega’s strong forecasting power. As we saw above, screening off only applies if the predicting instance (Omega, or us reading a study) has no forecasting power at all.
In the A,B-Game, as well as in the original Newcomb’s Problem, we also have to take the predictions of a weaker Omega (with less forecasting power) into account, unless we face an Omega that happens to be right only by chance (i.e. in 50% of the cases when considering a binary decision situation).
If, in the standard A,B-Game, we consider pressing A to be important, and if we were willing to spend a large amount of money in order to be able to press A (suppose button A would send a signal causing a withdrawal from our bank account), then this amount should shrink only gradually as we decrease Omega’s forecasting power. The question now arises whether we would also have to “choose” the better genes in the medical version of Solomon’s Problem, and whether there might not be a fundamental difference between it and the original Newcomb’s Problem.
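How the value of pressing A shrinks gradually with Omega's accuracy can be sketched as follows, under my own simplifying assumption that an incorrect prediction simply flips the gene assignment:

```python
def expected_lifespan(press_a: bool, accuracy: float) -> float:
    """Expected years of life, assuming Omega's rule
    (A -> G_A -> 100 years, B -> G_B -> 40 years) holds with the
    given accuracy and fails symmetrically otherwise."""
    long_life, short_life = 100.0, 40.0
    if press_a:
        return accuracy * long_life + (1.0 - accuracy) * short_life
    return accuracy * short_life + (1.0 - accuracy) * long_life

# accuracy 1.0: pressing A is worth 60 extra expected years;
# accuracy 0.9: still worth 48 extra expected years;
# accuracy 0.5: the two buttons are evidentially equivalent.
```

The willingness to pay for button A should track this expected-years gap, which shrinks linearly with accuracy and vanishes only at the chance level of 0.5.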
Newcomb’s versus Solomon’s Problem
In order to uphold this convenient distinction, people tell me that “you cannot change your genes”, though that’s a bad argument, since one could reply: “according to your definition of change, you cannot change the content of box B either, yet you still choose one-boxing”. Furthermore, I quite often hear something like “in Newcomb’s Problem, we have to deal with Omega, and that’s something completely different than just reading a study”. This – in contrast to the first – is a good point.
In order to accept the forecasting power of a 100% correct Omega, we already have to presume induction to be legitimate. Or else one could say: “Well, I see that Omega has been correct in 3^^^3 cases already, but why should I believe that it will be correct the next time?”. As sophisticated as this may sound, such an agent would lose terribly. So how do we deal with studies, then? Do they have any forecasting power at all? It seems that this again depends on the setting of the game. Just as Omega’s forecasting power can be set, the forecasting power of a study can be properly defined as well. It can be described by assigning values to two variables: its descriptive power and its inductive power. To settle them, we have to answer two questions: 1. How correctly does the study describe its population? 2. How representative is the study's population of the future population of agents acting in knowledge of the study? Or in other words, to what degree can one consider the study subjects to be in one’s reference class, in order to make true predictions about one’s behaviour and the outcome of the game? Once this is clear, we can infer the forecasting power. How much forecasting power does the study have? Let’s assume that the study we deal with is correct in what it describes. Those who wish can use a discounting factor. However, this is not important for the subsequent arguments and would only complicate matters.
Considering the inductive power, it gets trickier. Omega’s predictions are defined to be correct. In contrast, the study’s predictions have not been tested. Therefore we are quite uncertain about the study’s forecasting power. It would be 100% if and only if every factor involved were specified so that, in total, they compel identical outcomes in the study and in our game. Due to induction, we do have reason to assume a positive value of forecasting power. To identify its specific value (discounting the forecasting power according to the specified conditions), we would need to settle every single factor that might be involved. So let’s keep it simple by applying a 100% forecasting power. As long as there is a positive value of forecasting power, the basic point of the subsequent arguments (which presume a 100% forecasting power) will also hold when discounted.
Thinking about the inductive power of the study, there still is one thing that we need to specify: It is not clear what exactly previous subjects of the study knew.
For instance, in a case A), the subjects of the study knew nothing about the tendency of CGTA carriers to chew gum. First, their genome was analyzed; then they had to decide whether or not to chew gum. In such a case, the subjects’ knowledge is quite different from that of those who play the medical version of Solomon’s Problem. Therefore screening off applies. But does it apply to the same extent as in the avoiding-bad-news example mentioned above? That seems to be the case. In the avoiding-bad-news example, we assumed that there is no connection between the variables “lung cancer” and “avoiding mail”. In Solomon’s Problem such an independence can be settled as well. Then the variables “having the gene CGTA” and “not chewing gum because of evidential reasoning” are also assumed to be independent. Total screening off applies. For an evidential reasoner who knows that much, choosing not to chew gum would then be as irrational as declaring an incorrect postal address when awaiting biopsy results.
Now let us consider a case B) where the subjects were introduced to the game just as we were. Then they would know about the tendency of CGTA carriers to chew gum, and they themselves might have used evidential reasoning. In this scenario, screening off does not apply. This is why not chewing gum would be the winning strategy.
One might say that of course the study subjects did not know anything, and that we should assume case A) a priori. I only partially agree with that. The screening off can already be weakened if, for instance, the subjects knew why the study was conducted. Maybe there was anecdotal evidence about the heredity of a tendency to chew gum, which was about to be confirmed properly.
Without further clarification, one can plausibly assume a probability distribution over various intermediate cases between A and B, where screening off becomes gradually fainter as we get closer to B. Of course there might also be cases where anecdotal evidence leads astray, but in order to cancel out the argument above, anecdotal evidence would have to be, in expectation, equivalent to knowing nothing at all. But since it seems to be (even if only slightly) better than knowing nothing, it is not a priori clear that we have to assume case A right away.
So when compiling a medical version of Solomon’s Problem, it is important to be very clear about what the subjects of the study were aware of.
What about Newcomb’s Soda?
After exploring screening off and possible differences between Newcomb’s Problem and Solomon’s Problem (or rather between Omega and a study), let’s investigate those questions in another game. My favourite of all Newcomb-like problems is called Newcomb’s Soda and was introduced in Yudkowsky (2010). Comparing Newcomb’s Soda with Solomon’s Problem, Yudkowsky writes:
“Newcomb’s Soda has the same structure as Solomon’s Problem, except that instead of the outcome stemming from genes you possessed since birth, the outcome stems from a soda you will drink shortly. Both factors are in no way affected by your action nor by your decision, but your action provides evidence about which genetic allele you inherited or which soda you drank.”
Is there any relevant difference in structure between the two games?
In the previous section, we saw that once we settle that the study subjects in Solomon’s Problem don’t know of any connection between the gene and chewing gum, screening off applies and one has good reasons to chew gum. Likewise, screening off only applies in Newcomb’s Soda if the subjects of the clinical test are completely unaware of any connection between the sodas and the ice creams. But is this really the case? Yudkowsky introduces the game as one big clinical test in which you are participating as a subject:
“You know that you will shortly be administered one of two sodas in a doubleblind clinical test. After drinking your assigned soda, you will enter a room in which you find a chocolate ice cream and a vanilla ice cream. The first soda produces a strong but entirely subconscious desire for chocolate ice cream, and the second soda produces a strong subconscious desire for vanilla ice cream.”
This does not sound as though previous subjects had no information about a connection between the sodas and the ice creams. Maybe you, and you alone, received those specific insights. If that were the case, it would clearly have to be mentioned in the game’s definition, since this factor is crucial for decision-making. In a game where the agent herself is a study subject, without further specification, she wouldn’t by default expect the other subjects to know less about the game than she does. Therefore, let’s assume in the following that all the subjects in the clinical test knew that the sodas cause a subconscious desire for a specific flavor of ice cream.
Newcomb’s Soda in four variations
Let “C” be the causal approach, which states that one has to choose the vanilla ice cream in Newcomb’s Soda. C takes only the $1,000 of the vanilla ice cream into account, since one can still change the variable “ice cream”, whereas the variable “soda” is already settled. Let “E” be the evidential approach, which suggests choosing chocolate or vanilla ice cream in Newcomb’s Soda depending on the probabilities specified. E takes both the $1,000 of the vanilla ice cream and the $1,000,000 of the chocolate soda into account, so one argument can outweigh the other.
Let’s compile a series of examples. We write “Ch” for chocolate, “V” for vanilla, “S” for soda and “I” for ice cream. In all versions, ChS receives $1,000,000, VI receives $1,000, and P(ChS) = P(VS) = 0.5. Furthermore, we stipulate that P(ChI | ChS) = P(VI | VS) and call this term “p” in every version, so we don’t vary unnecessarily many parameters. As we are going to deal with large numbers, let’s assume a utility function linear in money.
Version 1: Let us assume a case where the sodas are dosed homeopathically, so that no effect on the choice of ice cream can be observed. ChS and VS choose between ChI and VI randomly, so that p = P(VI | ChS) = P(ChI | VS) = 0.5. Both C and E choose VI and win 0.5*$1,001,000 + 0.5*$1,000 = $501,000 in expectation. C considers only the ice cream, whereas E considers the soda as well, though in this case the soda doesn’t change anything, as the ChS are equally distributed over ChI and VI.
Version 2: Here p = 0.999999. Since P(ChS) = P(VS) = 0.5, one ChI in a million will have originated from VS, whereas one VI in a million will have originated from ChS. The other 999,999 ChI will have “determined” the desired past, ChS, through their choice of ChI. So if we participated in this game a million times and followed E, which suggests choosing ChI each time, we could expect to win 999,999*$1,000,000 = $999,999,000,000 overall. Following C’s advice is different: since C tells us that we cannot affect which soda we have drunk, we would choose VI each time and could expect to win 1,000,000*$1,000 + $1,000,000 = $1,001,000,000 in total. The second outcome, for which C is responsible, is 999 times worse than the first (which E suggested). In this version, E clearly outperforms C in helping us make the most money.
Version 3: Now we have p = 1. This version is equivalent to the standard version of the A,B-Game. What would C do? It seems that C ought to maintain its view that we cannot affect the soda. Therefore, considering only the ice-cream part of the outcome, C will suggest choosing VI. This seems absurd: C leaves us disappointed with $1,000, whereas E makes us millionaires every single time.
A C-defender might say: “Wait! Now you have changed the game. Now we are dealing with a probability of 1!” The response would be: “Interesting. I can make p get as close to 1 as I want, as long as it isn’t 1, and the rules of the game and my conclusions still remain. For instance, we can think of a number like 0.999…(100^^^^^100 nines in a row). So tell me why exactly a probability change of 0.000…(100^^^^^100 − 1 zeros in a row)1 should make you switch to ChI? But wait, why would you – as a defender of C – even consider ChI, since it cannot affect your soda while it definitely prevents you from winning the $1,000 of the ice cream?”
The previous versions tried to exemplify why taking both arguments (the $1,000 and the $1,000,000) into account makes you better off at one end of the probability range, whereas at the other end, C and E produce the same outcomes. With a simple equation we can figure out for which p E would be indifferent between ChI and VI: solve(p*1,000,000 = (1−p)*1,000,000 + 1,000, p). This gives us p = 0.5005. So for 0.5005 < p <= 1, E does better than C, and for 0 <= p <= 0.5005, E and C behave alike. Finally, let us consider the original version:
Version 4: Here we deal with p = 0.9. From the above we can already deduce that deciding according to E makes us better off, but let’s take a closer look for the sake of completeness: In expectation, choosing VI makes us win 0.1*$1,000,000 + $1,000 = $101,000, whereas ChI leaves us with 0.9*$1,000,000 = $900,000 – almost 9 times richer. After the insights above, it shouldn’t surprise us that E clearly does better than C in the original version of Newcomb’s Soda as well.
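The arithmetic behind all four versions can be checked in a few lines of Python. This is my own sketch of the setup above: the function names are mine, and the Bayes step P(ChS | ChI) = p follows from P(ChS) = P(VS) = 0.5 and P(ChI | ChS) = P(VI | VS) = p.

```python
PRIZE_SODA = 1_000_000  # paid to everyone who drank the chocolate soda (ChS)
PRIZE_ICE = 1_000       # paid to everyone who eats the vanilla ice cream (VI)

def ev_chocolate(p):
    # Evidential expected value of choosing ChI: with symmetric priors,
    # Bayes gives P(ChS | ChI) = p, so ChI "forecasts" the million with probability p.
    return p * PRIZE_SODA

def ev_vanilla(p):
    # P(ChS | VI) = 1 - p, plus the guaranteed $1,000 for vanilla.
    return (1 - p) * PRIZE_SODA + PRIZE_ICE

for p in (0.5, 0.999999, 1.0, 0.9):  # versions 1 through 4
    print(p, ev_chocolate(p), ev_vanilla(p))
```

At p = 0.5 vanilla wins ($501,000 vs $500,000), at p = 0.9 chocolate wins ($900,000 vs $101,000), and the two curves cross at exactly the indifference point p = 0.5005 derived in the text.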
The variations above illustrate that C has to eat VI even if 99.9999% of ChS choose ChI and 99.9999% of VS eat VI. If you played it a million times, in expectation ChI would win the million 999,999 times and VI just once. Can we really be indifferent about that? Wasn’t it all about winning and losing? And who is winning here, and who is losing?
Newcomb’s Soda and Precommitments
Another excerpt from Yudkowsky (2010):
“An evidential agent would rather precommit to eating vanilla ice cream than precommit to eating chocolate, because such a precommitment made in advance of drinking the soda is not evidence about which soda will be assigned.”
At first sight this seems intuitive. But if we look at the probabilities more closely, a problem arises. Consider an agent who precommits to a decision (via a 100% persistent mechanism, let’s assume) before a standard game (p = 0.9) starts. Let’s assume that he precommits – as suggested above – to choose VI. What credence should he assign to P(ChS | VI)? Is it 0.5, as if he hadn’t precommitted at all, or does something change? Basically, adding precommitments to the equation inhibits the effect of the sodas on the agent’s decision.

Again, we have to be clear about which agents are affected by this newly introduced variable. If we were the only ones who could precommit 100% persistently, then our game would fundamentally differ from the previous subjects’ game. If they didn’t precommit, we couldn’t presuppose a forecasting power anymore, because the previous subjects decided according to the soda’s effect, whereas we now decide independently of it. In this case, E would suggest precommitting to VI. However, this would constitute an entirely new game without any forecasting power. If all the agents of the study make persistent precommitments, then the forecasting power holds and the game doesn’t change fundamentally. Hence, the way previous subjects behaved remains crucial to our decision-making.

Let’s now imagine that we play this game a million times, each time irrevocably precommitting to VI. In this case, if we consider ourselves to be sampled randomly among the VI, we can expect to originate from VS 900,000 times. If we let p approach 1, we see that it becomes extremely unlikely that we originate from ChS once we precommit to VI. So a rational agent following E should precommit to ChI in advance of drinking the soda. Since E suggests ChI both during and before the game, this example doesn’t show that E is dynamically inconsistent.
In the other game, where only we precommit persistently and the previous subjects don’t, picking VI doesn’t make E dynamically inconsistent either, as we face a different decision situation in which no forecasting power applies. Of course, we can also imagine intermediate cases – for instance, one where we make precommitments and the previous subjects were able to make them as well, but we don’t know whether they did. The more uncertain we are about their precommitments, the closer we get to the case where only we precommit, and the forecasting power gradually weakens. These cases are more complicated, but they do not show a dynamic inconsistency of E either.
The tickle defense in Newcomb-like problems
In this last section, I want to take a brief look at the tickle defense, which is sometimes used to defend evidential reasoning by offering a less controversial output. For instance, it states that in the medical version of Solomon’s Problem an evidential reasoner should chew gum, since she can rule out having the gene as long as she doesn’t feel an urge to chew gum. Chewing gum then doesn’t make it any likelier that she has the gene, since she has already ruled it out.
I believe this argument fails because it changes the game. Suddenly, the gene no longer causes you to “choose chewing gum” but to “feel an urge to choose chewing gum”. I admit that in such a game a conditional independence would screen off the action “not chewing gum” from “not having the gene” – no matter what the previous subjects of the study knew. This is why it would be more attractive to chew gum. However, I don’t see why this case should matter to us. In the original medical version of Solomon’s Problem we are dealing with a different game, where this particular kind of screening off does not apply. As the gene causes one to “choose chewing gum”, we can only rule it out by not doing so. However, this conclusion has to be treated with caution. For one thing, depending on the numbers, one can only diminish the probability of the undesirable event of having the gene, not rule it out completely; for another, the diminishment only works if the previous subjects were not ignorant of a dependence between the gene and chewing gum – at least in expectation. Therefore, the tickle defense applies only trivially to a special version of the medical Solomon’s Problem and fails to persuade proper evidential reasoners to do anything differently in the standard version. Depending on the specification of the previous subjects’ knowledge, an evidential reasoner would still chew or not chew gum.
Robust Cooperation in the Prisoner's Dilemma
I'm proud to announce the preprint of Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic, a joint paper with Mihaly Barasz, Paul Christiano, Benja Fallenstein, Marcello Herreshoff, Patrick LaVictoire (me), and Eliezer Yudkowsky.
This paper was one of three projects to come out of the 2nd MIRI Workshop on Probability and Reflection in April 2013, and had its genesis in ideas about formalizations of decision theory that have appeared on LessWrong. (At the end of this post, I'll include links for further reading.)
Below, I'll briefly outline the problem we considered, the results we proved, and the (many) open questions that remain. Thanks in advance for your thoughts and suggestions!
Background: Writing programs to play the PD with source code swap
(If you're not familiar with the Prisoner's Dilemma, see here.)
The paper concerns the following setup, which has come up in academic research on game theory: say that you have the chance to write a computer program X, which takes in one input and returns either Cooperate or Defect. This program will face off against some other computer program Y, but with a twist: X will receive the source code of Y as input, and Y will receive the source code of X as input. And you will be given your program's winnings, so you should think carefully about what sort of program you'd write!
Of course, you could simply write a program that defects regardless of its input; we call this program DefectBot, and call the program that cooperates on all inputs CooperateBot. But with the wealth of information afforded by the setup, you might wonder if there's some program that might be able to achieve mutual cooperation in situations where DefectBot achieves mutual defection, without thereby risking a sucker's payoff. (Douglas Hofstadter would call this a perfect opportunity for superrationality...)
Previously known: CliqueBot and FairBot
And indeed, there's a way to do this that's been known since at least the 1980s. You can write a computer program that knows its own source code, compares it to the input, and returns C if and only if the two are identical (and D otherwise). Thus it achieves mutual cooperation in one important case where it intuitively ought to: when playing against itself! We call this program CliqueBot, since it cooperates only with the "clique" of agents identical to itself.
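In toy form, treating programs simply as source strings (a sketch of the idea only, not the paper's formalism), CliqueBot looks like this:

```python
def cliquebot(my_source: str, opponent_source: str) -> str:
    # Cooperate iff the opponent's source code is exactly identical to ours.
    return "C" if opponent_source == my_source else "D"

ME = "cliquebot version 1"       # stand-in for this program's own source text
TWIN = "cliquebot version 1"     # an exact copy
VARIANT = "cliquebot version 2"  # functionally identical, different text

print(cliquebot(ME, TWIN))     # C: mutual cooperation with an exact copy
print(cliquebot(ME, VARIANT))  # D: defects against a mere rewording
```

The second call already hints at the fragility discussed next: any syntactic difference, however irrelevant, breaks cooperation.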
There's one particularly irksome issue with CliqueBot, and that's the fragility of its cooperation. If two people write functionally analogous but syntactically different versions of it, those programs will defect against one another! This problem can be patched somewhat, but not fully fixed. Moreover, mutual cooperation might be the best strategy against some agents that are not even functionally identical, and extending this approach requires you to explicitly delineate the list of programs that you're willing to cooperate with. Is there a more flexible and robust kind of program you could write instead?
As it turns out, there is: in a 2010 post on LessWrong, cousin_it introduced an algorithm that we now call FairBot. Given the source code of Y, FairBot searches for a proof (of less than some large fixed length) that Y returns C when given the source code of FairBot, and then returns C if and only if it discovers such a proof (otherwise it returns D). Clearly, if our proof system is consistent, FairBot only cooperates when that cooperation will be mutual. But the really fascinating thing is what happens when you play two versions of FairBot against each other. Intuitively, it seems that either mutual cooperation or mutual defection would be stable outcomes, but it turns out that if their limits on proof lengths are sufficiently high, they will achieve mutual cooperation!
The proof that they mutually cooperate follows from a bounded version of Löb's Theorem from mathematical logic. (If you're not familiar with this result, you might enjoy Eliezer's Cartoon Guide to Löb's Theorem, which is a correct formal proof written in much more intuitive notation.) Essentially, the asymmetry comes from the fact that both programs are searching for the same outcome, so that a short proof that one of them cooperates leads to a short proof that the other cooperates, and vice versa. (The opposite is not true, because the formal system can't know it won't find a contradiction. This is a subtle but essential feature of mathematical logic!)
Generalization: Modal Agents
Unfortunately, FairBot isn't what I'd consider an ideal program to write: it happily cooperates with CooperateBot, when it could do better by defecting. This is problematic because in real life, the world isn't neatly separated into agents and non-agents, and any natural phenomenon that doesn't predict your actions can be thought of as a CooperateBot (or a DefectBot). You don't want your agent to be making concessions to rocks that happened not to fall on it. (There's an important caveat: some things have utility functions that you care about, but don't have sufficient ability to predicate their actions on yours. In that case, though, it wouldn't be a true Prisoner's Dilemma if your values actually prefer the outcome (C,C) to (D,C).)
However, FairBot belongs to a promising class of algorithms: those that decide on their action by searching for short proofs of logical statements about their opponent's actions. In fact, there's a really convenient mathematical structure analogous to this class of algorithms: the modal logic of provability (known as GL, for Gödel-Löb).
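To get a concrete feel for how such agents behave, here is a toy evaluator of my own (not taken from the paper): it evaluates agents on a finite Kripke chain for GL, using the standard semantics in which □φ holds at a world iff φ holds at all earlier worlds. In this simplified setting, FairBot vs FairBot comes out at mutual cooperation, matching the Löbian result described above.

```python
N = 10  # worlds 0..N-1 in a finite GL Kripke chain

def box(truth_values, n):
    # Box-phi holds at world n iff phi held at every earlier world
    # (vacuously true at world 0).
    return all(truth_values[k] for k in range(n))

def fairbot_vs(opponent):
    # FairBot's action A satisfies A = Box(B), where B is the opponent's
    # cooperation; `opponent` maps (our truth values so far, world) to its value.
    a, b = [], []
    for n in range(N):
        a.append(box(b, n))
        b.append(opponent(a, n))
    return "C" if a[-1] else "D"

fairbot = lambda a, n: box(a, n)  # B = Box(A): the Löbian fixed point
cooperatebot = lambda a, n: True  # B = True, cooperates unconditionally
defectbot = lambda a, n: False    # B = False, defects unconditionally

print(fairbot_vs(fairbot))       # C (mutual cooperation via the fixed point)
print(fairbot_vs(cooperatebot))  # C
print(fairbot_vs(defectbot))     # D
```

The chain length N and the lambda encodings are illustrative choices; the paper works with the modal formulas themselves rather than any such simulation.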
So that's the subject of this preprint: what can we achieve in decision theory by considering agents defined by formulas of provability logic?
Meta Decision Theory and Newcomb's Problem
Hi all,
As part of my PhD I've written a paper developing a new approach to decision theory that I call Meta Decision Theory. The idea is that decision theory should take into account decision-theoretic uncertainty as well as empirical uncertainty, and that, once we acknowledge this, we can explain some puzzles to do with Newcomb problems and come up with new arguments to adjudicate the causal vs evidential debate. Nozick raised the idea of taking decision-theoretic uncertainty into account, but he did not defend it at length or discuss its implications.
I'm not yet happy to post this paper publicly, so I'll just write a short abstract of the paper below. However, I would appreciate written comments on the paper. If you'd like to read it and/or comment on it, please email me at will dot crouch at 80000hours.org. And, of course, comments in the thread on the idea sketched below are also welcome.
Abstract
First, I show that our judgments concerning Newcomb problems are stakes-sensitive. By altering the relative amounts of value in the transparent box and the opaque box, one can construct situations in which one should clearly one-box, and situations in which one should clearly two-box. A plausible explanation of this phenomenon is that our intuitive judgments are sensitive to decision-theoretic uncertainty as well as empirical uncertainty: if the stakes are very high for evidential decision theory (EDT) but not for causal decision theory (CDT), then we go with EDT's recommendation, and vice versa for CDT over EDT.
Second, I show that, if we 'go meta' and take decisiontheoretic uncertainty into account, we can get the right answer in both the Smoking Lesion case and the Psychopath Button case.
Third, I distinguish Causal MDT (CMDT) from Evidential MDT (EMDT). I look at what I consider to be the two strongest arguments in favour of EDT, and show that these arguments do not work at the meta level. First, I consider the argument that EDT gets the right answer in certain cases. In response, I show that one needs only a small credence in EDT to get the right answer in such cases. The second is the "Why Ain'cha Rich?" argument. In response, I give a case where EMDT recommends two-boxing, even though two-boxing has a lower average return than one-boxing.
Fourth, I respond to objections. First, I consider and reject alternative explanations of the stakes-sensitivity of our judgments about particular cases, including Nozick's explanation. Second, I consider the worry that 'going meta' leads to a vicious regress. I accept that there is a regress, but argue that it is non-vicious.
In an appendix, I give an axiomatisation of CMDT.
The Ellsberg paradox and money pumps
Followup to: The Savage theorem and the Ellsberg paradox
In the previous post, I presented a simple version of Savage's theorem, and I introduced the Ellsberg paradox. At the end of the post, I mentioned a strong Bayesian thesis, which can be summarised: "There is always a price to pay for leaving the Bayesian Way."^{1} But not always, it turns out. I claimed that there was a method that is Ellsberg-paradoxical, therefore non-Bayesian, but can't be money-pumped (or "Dutch booked"). I will present the method in this post.
I'm afraid this is another long post. There's a short summary of the method at the very end, if you want to skip the jibba jabba and get right to the specification. Before trying to moneypump it, I'd suggest reading at least the two highlighted dialogues.
Ambiguity aversion
To recap the Ellsberg paradox: there's an urn with 30 red balls and 60 other balls that are either green or blue, in unknown proportions. Most people, when asked to choose between betting on red or on green, choose red, but, when asked to choose between betting on red-or-blue or on green-or-blue, choose green-or-blue. For some people this behaviour persists even after due calculation and reflection. This behaviour is non-Bayesian, and is the prototypical example of ambiguity aversion.
There were some major themes that came out in the comments on that post. One theme was that I Fail Technical Writing Forever. I'll try to redeem myself.
Another theme was that the setup given may be a bit too symmetrical. The Bayesian answer would be indifference, and really, you can break ties however you want. However, the paradoxical preferences are typically strict, rather than just tie-breaking behaviour. (And when it's not strict, we shouldn't call it ambiguity aversion.) One suggestion was to add or remove a couple of red balls. Speaking for myself, I would still make the paradoxical choices.
A third theme was that ambiguity aversion might be a good heuristic when betting against someone who may know something you don't. Now, no such opponent was specified, and speaking for myself, I'm not inferring one when I make the paradoxical choices. Still, let me admit that it's not contrived to infer a mischievous experimenter from the Ellsberg setup. One commentator puts it better than I do:
Betting generally includes an adversary who wants you to lose money so they can win it. Possibly in psychology experiments [this might not apply] ... But generally, ignoring the possibility of someone wanting to win money off you when they offer you a bet is a bad idea.
Now betting is supposed to be a metaphor for options with possibly unknown results. In which case sometimes you still need to account for the possibility that the options were made available by an adversary who wants you to choose badly, but less often. And you should also account for the possibility that they were from other people who wanted you to choose well, or that the options were not determined by any intelligent being or process trying to predict your choices, so you don't need to account for an anticorrelation between your choice and the best choice. Except for your own biases.
We can take betting on the Ellsberg urn as a stand-in for various decisions under ambiguous circumstances. Ambiguity aversion can be Bayesian if we assume the right sort of correlation between the options offered and the state of the world, or the right sort of correlation between the choice made and the state of the world. In that case just about anything can be Bayesian. But sometimes the opponent will not have extra information, or extra power. There might not even be an opponent as such. If we assume there are no such correlations, then ambiguity aversion is non-Bayesian.
The final theme was: so what? Ambiguity aversion is just another cognitive bias. One commentator specifically complained that I spent too much time talking about various abstractions and not enough time talking about how ambiguity aversion could be money-pumped. I will fix that now: I claim that ambiguity aversion cannot be money-pumped, and the rest of this post is about my claim.
I'll start with a bit of name-dropping and some Whig history, to make myself sound more credible than I really am^{2}. In the last twenty years or so, many models of ambiguity-averse reasoning have been constructed. Choquet expected utility^{3} and maxmin expected utility^{4} were early proposed models of ambiguity aversion. Later, multiplier preferences^{5} resulted from applying the ideas of robust control to macroeconomic models; this produces ambiguity aversion, though it was not explicitly motivated by the Ellsberg paradox. More recently, variational preferences^{6} generalise both multiplier preferences and maxmin expected utility. What I'm going to present is a finitary case of variational preferences, with some of my own amateur mathematical fiddling for rhetorical purposes.
Probability intervals
The starting idea is simple enough, and may have already occurred to some LW readers. Instead of using a prior probability for events, can we not use an interval of probabilities? What should our betting behaviour be for an event with probability 50%, plus or minus 10%?
There are some different ways of filling in the details. So to be quite clear, I'm not proposing the following as the One True Probability Theory, and I am not claiming that the following is descriptive of many people's behaviour. What follows is just one way of making ambiguity aversion work, and perhaps the simplest way. This makes sense, given my aim: I should just describe a simple method that leaves the Bayesian Way, but does not pay.
Now, sometimes disjoint ambiguous events together make an event with known probability. Or even a certainty, as with an event and its negation. If we want probability intervals to be additive (and let's say that we do), then what we really want are oriented intervals. I'll use ± or ∓ (pronounced: plus-or-minus, minus-or-plus) to indicate two opposite orientations. So, if P(X) = 1/2 ± 1/10, then P(not X) = 1/2 ∓ 1/10, and these add up to 1 exactly.
Such oriented intervals are equivalent to ordered pairs of numbers. Sometimes it's more helpful to think of them as oriented intervals, but sometimes it's more helpful to think of them as pairs. So 1/2 ± 1/10 is the pair (3/5, 2/5). And 1/2 ∓ 1/10 is (2/5, 3/5), the same numbers in the opposite order. The sum of these is (1, 1), which is 1 exactly.
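This pair arithmetic is easy to sketch with exact fractions (the helper names `interval` and `add` are mine, not standard):

```python
from fractions import Fraction as F

def interval(centre, halfwidth):
    # Oriented interval centre ± halfwidth, represented as the ordered pair
    # (centre + halfwidth, centre - halfwidth); the opposite orientation (∓)
    # is the same pair reversed.
    return (centre + halfwidth, centre - halfwidth)

def add(p, q):
    # Intervals add componentwise, so opposite orientations cancel exactly.
    return (p[0] + q[0], p[1] + q[1])

p_x = interval(F(1, 2), F(1, 10))  # P(X)     = 1/2 ± 1/10 = (3/5, 2/5)
p_not_x = tuple(reversed(p_x))     # P(not X) = 1/2 ∓ 1/10 = (2/5, 3/5)
print(add(p_x, p_not_x))           # (1, 1), i.e. 1 exactly
```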
You may wonder, if we can use ordered pairs, can we use triples, or longer lists? Yes, this method can be made to work with those too. And we can still think in terms of centre, length, and orientation. The orientation can go off in all sorts of directions, instead of just two. But for my purposes, I'll just stick with two.
You might also ask: can we set P(X) = 1/2 ± 1/2? No, this method just won't handle it. A restriction of this method is that neither element of the pair can be 0 or 1, except when both are 0 or both are 1. The way we will be using these intervals, 1/2 ± 1/2 would be the extreme case of ambiguity aversion. 1/2 ± 1/10 represents a lesser amount of ambiguity aversion, a sort of compromise between worst-case and average-case behaviour.
To decide among bets (having the same two outcomes), compute their probability intervals. Sometimes the intervals will not overlap. Then it's unambiguous which is more likely, so it's clear what to pick. In general, whether they overlap or not, pick the one with the largest minimum – though we will see there are three caveats when they do overlap. If P(X) = 1/2 ± 1/10, we would be indifferent between a bet on X and a bet on not X: the minimum is 2/5 in either case. If P(Y) = 1/2 exactly, then we would strictly prefer a bet on Y to a bet on X.
Which leads to the first caveat: sometimes, given two options, it's strictly better to randomise. Let's suppose Y represents a fair coin. So P(Y) = 1/2 exactly, as we said. But also, Y is independent of X: P(X and Y) = 1/4 ± 1/20, and so on. This means that P((X and not Y) or (Y and not X)) = 1/2 exactly as well. So we're indifferent between a bet on X and a bet on not X, but we strictly prefer the randomised bet.
In general, randomisation will be strictly better if you have two choices with overlapping intervals of opposite orientations. The best randomisation ratio will be the one that gives a bet with a zero-length interval.
Now let us reconsider the Ellsberg urn. We did say the urn can be a metaphor for various situations. Generally these situations will not be symmetrical. But, even in symmetrical scenarios, we should still rethink how we apply the principle of indifference. I argue that the underlying idea is really this: if our information has a symmetry, then our decisions should have that same symmetry. If we switch green and blue, our information about the Ellsberg urn doesn't change. The situation is indistinguishable, so we should behave the same way. It follows that we should be indifferent between a bet on green and a bet on blue. Then, for the Bayesian, it follows that P(red) = P(green) = P(blue) = 1/3. Period.
But for us, there is a degree of freedom, even in this symmetrical situation. We know what the probability of red is, so of course P(red) = 1/3 exactly. But we can set, say^{7}, P(green) = 1/3 ± 1/9, and P(blue) = 1/3 ∓ 1/9. So we get P(red or green) = 2/3 ± 1/9, P(red or blue) = 2/3 ∓ 1/9, P(green or blue) = 2/3 exactly, and of course P(red or green or blue) = 1 exactly.
So: red is 1/3 exactly, but the minimum of green is 2/9. (green or blue) is 2/3 exactly, but the minimum of (red or blue) is 5/9. So choose red over green, and (green or blue) over (red or blue). That's the paradoxical behaviour. Note that neither pair of choices offered in the Ellsberg paradox has the type of overlap that favours randomisation.
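The same pair representation reproduces both paradoxical choices under the largest-minimum rule (again a small sketch of my own; `union` and `prefer` are illustrative names):

```python
from fractions import Fraction as F

# The Ellsberg urn with the intervals chosen above:
# P(red) = 1/3 exactly, P(green) = 1/3 ± 1/9, P(blue) = 1/3 ∓ 1/9.
red = (F(1, 3), F(1, 3))
green = (F(1, 3) + F(1, 9), F(1, 3) - F(1, 9))
blue = (F(1, 3) - F(1, 9), F(1, 3) + F(1, 9))

def union(p, q):
    # For disjoint events, intervals add componentwise.
    return (p[0] + q[0], p[1] + q[1])

def prefer(a, b):
    # Pick the bet whose interval has the larger minimum (ties go to the second).
    return "first" if min(a) > min(b) else "second"

print(prefer(red, green))                            # first: bet on red
print(prefer(union(green, blue), union(red, blue)))  # first: bet on green-or-blue
```

Red beats green (minimum 1/3 vs 2/9), and green-or-blue beats red-or-blue (minimum 2/3 vs 5/9) – exactly the paradoxical pair of choices.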
Once we have a decision procedure for the two-outcome case, we can tack on any utility function, as I explained in the previous post. The result here is what you would expect: we get oriented expected utility intervals, obtained by multiplying the oriented probability intervals by the utility. When deciding, we pick the one whose interval has the largest minimum. So for example, a bet which pays 15U on red (using U for "utils", the abstract units of measurement of the utility function) has expected utility 5U exactly. A bet which pays 18U on green has expected utility 6U ± 2U, whose minimum is 4U. So pick the bet on red over that.
Operationally, probability is associated with the "fair price" at which we are willing to bet. A probability interval indicates that there is no fair price. Instead we have a spread: we buy bets at their low price and sell at their high price. At least, we do that if we have no outstanding bets, or more generally, if the expected utility interval on our outstanding bets has zero length. The second caveat is that if this interval has length, then it affects our price: we also sell bets of the same orientation at their low price, and buy bets of the opposite orientation at their high price, until the length of this interval is used up. The midpoint of the expected utility interval on our outstanding bets is irrelevant.
This can be confusing, so it's time for an analogy.
Bootsianism
If you are Bayesian and risk-neutral (and if bets pay in "utils" rather than cash, you are risk-neutral by definition), then outstanding bets have no effect on further betting behaviour. However, if you are risk-averse, as is the most common case, then this is no longer true. The more money you've already got on the line, the less willing you will be to bet.
But besides risk attitude, there could also be interference effects from non-monetary payouts. For example, if you are dealing in boots, then you wouldn't buy a single boot for half the price of a pair, and neither would you sell one of your boots for half the price of a pair. Unless you happened to already have unmatched boots; then you would sell those at a lower price, or buy boots of the opposite orientation at a higher price, until you had no more unmatched boots. If you were otherwise risk-neutral with respect to boots, then your behaviour would not depend on the number of pairs you have, just on the number and orientation of your unmatched boots.
This closely resembles the non-Bayesian behaviour above. In fact, for the Ellsberg urn, we could just say that a bet on red is worth a pair of boots, a bet on green is worth two left boots, and a bet on blue is worth two right boots. Without saying anything further, it's clear that we would strictly prefer red (a pair) over green (two lefts), but we would also strictly prefer green-or-blue (two pairs) over red-or-blue (one left and three rights). That's the paradoxical behaviour, but you know you can't money-pump boots.
A: I'll buy that pair of boots for 30 zorkmids. 
Boots' rule
So much for the static case. But what do we do with new information? How do we handle conditional probabilities?
We still get P(A | B) by dividing P(A and B) by P(B). It will be easier to think in terms of pairs here. So for example P(red) = 1/3 exactly = (1/3, 1/3) and P(red or green) = 2/3 ± 1/9 = (7/9, 5/9), so P(red | red or green) = (3/7, 3/5) = 18/35 ∓ 3/35. And similarly P(green | red or green) = (1/3 ± 1/9)/(2/3 ± 1/9) = 17/35 ± 3/35.
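That componentwise division is easy to check with exact fractions (my own sketch; `condition` is an illustrative helper name):

```python
from fractions import Fraction as F

def condition(p_ab, p_b):
    # P(A|B): divide the pairs componentwise, P(A and B) / P(B).
    return (p_ab[0] / p_b[0], p_ab[1] / p_b[1])

p_red = (F(1, 3), F(1, 3))           # P(red) = 1/3 exactly
p_red_or_green = (F(7, 9), F(5, 9))  # P(red or green) = 2/3 ± 1/9

print(condition(p_red, p_red_or_green))  # (3/7, 3/5), i.e. 18/35 ∓ 3/35
```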
This rule covers the dynamic passive case, where we update probabilities based on what we observe, before betting. The third and final caveat is in the active case, when information comes in between bets. Now, we saw that the length and orientation of the expected-utility interval of outstanding bets affect further betting behaviour. There is actually a separate update rule for this quantity. It is about as simple as it gets: do nothing. The interval can change when we make choices, and its midpoint can shift due to external events, but its length and orientation do not update.
You might expect the update rule for this quantity to follow from the way the expected utility updates, which follows from the way probability updates. But it has a mind of its own. So even if we are keeping track of our bets, we'd still need to keep track of this extra variable separately.
Sometimes it may be easier to think in terms of the total expected utility interval of our outstanding bets, but sometimes it may be easier to think of this in terms of having a "virtual" interval that cancels the change in the length and orientation of the "real" expected utility interval. The midpoint of this virtual interval is irrelevant and can be taken to always be zero. So, on update, compute the prior expected utility interval of outstanding bets, subtract the posterior expected utility interval from it, and add this difference to the virtual interval. Reset its midpoint to zero, keeping only the length and orientation.
That can also be confusing, so let's have another analogy.
Yo' mama's so illogical...
I recently came across this example by Mark Machina:
M: Children, I only have one treat; I can only give it to one of you. 
Instead of giving the treat to either child, she strictly prefers to toss a coin and give the treat to the winner. But after the coin is tossed, she strictly prefers to give the treat to the winner rather than toss again.
This cannot be explained in terms of maximising expected utility, in the typical sense of "utility". And of course only known probabilities are involved here, so there's no question as to whether her beliefs are probabilistically sophisticated or not. But it could be said that she is still maximising the expected value of an extended objective function. This extended objective function does not just consider who gets a treat, but also considers who "had a fair chance". She is unfair if she gives the treat to either child outright, but fair if she tosses a coin. That fairness doesn't go away when the result of the coin toss is known.
Or something like that. There are surely other ways of dissecting the mother's behaviour. But no matter what, it's going to have to take the coin toss into account, even though the coin, in and of itself, has no relevance to the situation.
Let's go back to the urn. Green and blue have the type of overlap that favours randomisation: P((green and heads) or (blue and tails)) = 1/3 exactly. A bet paying 9U on this event has expected utility of 3U exactly. Let's say we took this bet. Now say the coin comes up heads. We can update the probabilities as per above. The answer is that P(green) = 1/3 ± 1/9 as it was before. That makes sense because it's an independent event: knowing the result of the coin toss gives no information about the urn. The difference is that we now have an outstanding bet that pays 9U if the ball is green. The expected utility would therefore be 3U ± 1U. Except, the expected utility interval was zero-length before the coin was tossed, so it remains zero-length. Equivalently, the virtual interval becomes ∓1U, so that the effective total is 3U exactly. (In this example, the midpoint of the expected utility interval didn't change either. That's not generally the case.) A bet randomised on a new coin toss would have expected utility 3U, plus the virtual interval of ∓1U, for an effective total of 3U ∓ 1U. So we would strictly prefer to keep the bet on green rather than re-randomise.
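The bookkeeping in this example can be sketched in code. The numbers follow the urn-and-coin example above; the function name and the ordered-pair encoding (the two endpoints carry the interval's orientation) are mine:

```python
from fractions import Fraction as F

def update_virtual(virtual, prior_iv, posterior_iv):
    """On update: add the (prior - posterior) change in the real
    expected-utility interval to the virtual interval, then reset the
    virtual interval's midpoint to zero, keeping length and orientation."""
    lo = virtual[0] + (prior_iv[0] - posterior_iv[0])
    hi = virtual[1] + (prior_iv[1] - posterior_iv[1])
    mid = (lo + hi) / 2
    return (lo - mid, hi - mid)

virtual   = (F(0), F(0))
prior     = (F(3), F(3))   # randomised bet before the toss: 3U exactly
posterior = (F(4), F(2))   # after heads, a bet on green: 3U ± 1U
virtual = update_virtual(virtual, prior, posterior)

# The effective total is the real posterior interval plus the virtual one.
effective = (posterior[0] + virtual[0], posterior[1] + virtual[1])
assert virtual == (F(-1), F(1))    # the "ghost" interval, ∓1U
assert effective == (F(3), F(3))   # still 3U exactly, zero length
```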
Let's compare this with a trivial example: let's say we took a bet that pays 9U if the ball drawn from the urn is green. The expected utility of this bet is 3U ± 1U. For some unrelated reason, a coin is tossed, and it comes up heads. The coin also has nothing to do with the urn or our bet. We still have a bet of 9U on green, and its expected utility is still 3U ± 1U.
But the difference between these two examples is just in the counterfactual: if the coin had come up tails, in the first example I would have had a bet of 9U on blue, and in the second example I would have had a bet of 9U on green. But the coin came up heads, and in both examples I end up with a bet of 9U on green. The virtual interval has some spooky dependency on what could have happened, just like "had a fair chance". It is the ghost of a departed bet.
I expect many on LW are wondering what happened. There was supposed to be a proof that anything that isn't Bayesian can be punished. Actually, this threat comes with some hidden assumptions, which I hope these analogies have helped to illustrate. A boot is an example of something which has no fair price, even if a pair of boots has one. A mother with two children and one treat is an example where some counterfactuals are not forgotten. The hidden assumptions fail in our case, just as they can fail in these other contexts where Bayesianism is not at issue. This can be stated more rigorously^{8}, but that is basically how it's possible. Now We Know. And Knowing is Half the Battle.
Notes
 Taken almost verbatim from Eliezer Yudkowsky's post on the Allais paradox.
 And footnotes pointing to some tangentially relevant journal articles make me sound extra credible.
 For Choquet expected utility see: D. Schmeidler, Subjective probability and expected utility without additivity, Econometrica 57 (1989) pp 571-587.
 For maxmin expected utility see: I. Gilboa and D. Schmeidler, Maxmin expected utility with a non-unique prior, J. Math. Econ. 18 (1989) pp 141-153.
 For multiplier preferences see: L.P. Hansen and T.J. Sargent, Robust control and model uncertainty, Amer. Econ. Rev. 91 (2001) pp 60-66.
 For variational preferences see: F. Maccheroni, M. Marinacci, and A. Rustichini, Dynamic variational preferences, J. Econ. Theory 128 (2006) pp 4-44.
 Any length between 0 and 1/3 works. But here's where I pulled 1/9 from: a Bayesian might assign exactly 1/61 prior probability to each of the 61 possible urn compositions, and the result is roughly approximated by the Laplacian rule of succession, which prescribes a pseudocount of one green and one blue ball. A similar thing with probability intervals is roughly approximated by using a pseudocount of 3/2 ± 1/2 green and 3/2 ± 1/2 blue balls.
 To quickly relate this back to Savage's rules: rules 1 and 3 guarantee that there's no static money pump. Rule 2 then is supposed to guarantee that there is no dynamic money pump. But it is stronger than necessary for that purpose. I claim that this method obeys rules 1, 3, and a weaker version of rule 2, and that it is dynamically consistent. For dynamic consistency of variational preferences in general, see footnotes above. This method is a special case, for which I wrote up a simpler proof.
Appendix A: method summary

Appendix B: obligatory image for LW posts on this topic
The Savage theorem and the Ellsberg paradox
Follow-up to: A summary of Savage's foundation for probability and utility.
In 1961, Daniel Ellsberg, most famous for leaking the Pentagon Papers, published the decision-theoretic paradox which is now named after him ^{1}. It is a cousin to the Allais paradox. They both involve violations of an independence or separability principle. But they go off in different directions: one is a violation of expected utility, while the other is a violation of subjective probability. The Allais paradox has been discussed on LW before, but when I do a search it seems that the first discussion of the Ellsberg paradox on LW was my comments on the previous post ^{2}. It seems to me that from a Bayesian point of view, the Ellsberg paradox is the greater evil.
But I should first explain what I mean by a violation of expected utility versus subjective probability, and for that matter, what I mean by Bayesian. I will explain a special case of Savage's representation theorem, which focuses on the subjective probability side only. Then I will describe Ellsberg's paradox. In the next episode, I will give an example of how not to be Bayesian. If I don't get voted off the island at the end of this episode.
Rationality and Bayesianism
Bayesianism is often taken to involve the maximisation of expected utility with respect to a subjective probability distribution. I would argue this label only sticks to the subjective probability side. But mainly, I wish to make a clear division between the two sides, so I can focus on one.
Subjective probability and expected utility are certainly related, but they're still independent. You could be perfectly willing and able to assign belief numbers to all possible events as if they were probabilities. That is, your belief assignment obeys all the laws of probability, including Bayes' rule, which is, after all, what the ism is named for. You could do all that, but still maximise something other than expected utility. In particular, you could combine subjective probabilities with prospect theory, which has also been discussed on LW before. In that case you may display Allais-paradoxical behaviour but, as we will see, not Ellsberg-paradoxical behaviour. The rationalists might excommunicate you, but it seems to me you should keep your Bayesianist card.
On the other hand, your behaviour could be incompatible with any subjective probability distribution. But you could still maximise utility with respect to something other than subjective probability. In particular, when faced with known probabilities, you would be maximising expected utility in the normal sense. So you cannot exhibit any Allais-paradoxical behaviour, because the Allais paradox involves only objective lotteries. But you may exhibit, as we will see, Ellsberg-paradoxical behaviour. I would say you are not Bayesian.
So a non-Bayesian, even the strictest frequentist, can still be an expected utility maximiser, and a perfect Bayesian need not be an expected utility maximiser. What I'm calling Bayesianist is just the idea that we should reason with our subjective beliefs the same way that we reason with objective probabilities. This has also been called having "probabilistically sophisticated" beliefs, if you prefer to avoid the B-word, or don't like the way I'm using it.
In a lot of what follows, I will bypass utility by only considering two outcomes. Utility functions are only unique up to a constant offset and a positive scale factor. With two outcomes, they evaporate entirely. The question of maximising expected utility with respect to a subjective probability distribution reduces to the question of maximising the probability, according to that distribution, of getting the better of the two outcomes. (And if the two outcomes are equal, there is nothing to maximise.)
And on the flip side, if we have a decision method for the twooutcome case, Bayesian or otherwise, then we can always tack on a utility function. The idea of utility is just that any intermediate outcome is equivalent to an objective lottery between better and worse outcomes. So if we want, we can use a utility function to reduce a decision problem with any (finite) number of outcomes to a decision problem over the best and worst outcomes in question.
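A toy illustration of this reduction, with made-up utilities (the outcome names echo the pie-and-boot stakes used later, and the numbers are mine): once every intermediate outcome is replaced by an objective lottery between best and worst, maximising expected utility is the same as maximising the probability of the best outcome.

```python
# Made-up utilities on a 0-to-1 scale, with best = 1 and worst = 0.
utility = {"boot": 0.0, "soggy pie": 0.6, "pie": 1.0}

def as_win_probability(outcome_probs):
    """P(ending up with the best outcome) after swapping each
    intermediate outcome for a (u, 1-u) lottery between best and worst.
    This is numerically identical to the act's expected utility."""
    return sum(p * utility[o] for o, p in outcome_probs.items())

# A hypothetical act over three outcomes...
act = {"pie": 0.5, "soggy pie": 0.2, "boot": 0.3}
# ...reduces to a two-outcome bet with this winning probability:
assert as_win_probability(act) == 0.5 + 0.2 * 0.6
```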
Savage's representation theorem
Let me recap some of the previous post on Savage's theorem. How might we defend Bayesianism? We could invoke Cox's theorem. This starts by assuming possible events can be assigned real numbers corresponding to some sort of belief level on someone's part, and that there are certain functions over these numbers corresponding to logical operations. It can be proven that, if someone's belief functions obey some simple rules, then that person acts as if they were reasoning with subjective probability. Now, while the rules for belief functions are intuitive, the background assumptions are pretty sketchy. It is not at all clear why these mathematical constructs are requirements of rationality.
One way to justify those constructs is to argue in terms of choices a rational person must make. We imagine someone is presented with choices among various bets on uncertain events. Their level of belief in these events can be gauged by which bets they choose. But if we're going to do that anyway, then, as it turns out, we can just give some simple rules directly about these choices, and bypass the belief functions entirely. This was Leonard Savage's approach ^{3}. To quote a comment on the previous post: "This is important because agents in general don't have to use beliefs or goals, but they do all have to choose actions."
Savage's approach actually covers both subjective probability and expected utility. The previous post discusses both, whereas I am focusing on the former. This lets me give a shorter exposition, and I think a clearer one.
We start by assuming some abstract collection of possible bets. We suppose that when you are offered two bets from this collection, you will choose one over the other, or express indifference.
As discussed, we will only consider two outcomes. So all bets have the same payout, the difference among them is just their winning conditions. It is not specified what it is that you win. But it is assumed that, given the choice between winning unconditionally and losing unconditionally, you would choose to win.
It is assumed that the collection of bets form what is called a boolean algebra. This just means we can consider combinations of bets under boolean operators like "and", "or", or "not". Here I will use brackets to indicate these combinations. (A or B) is a bet that wins under the conditions that make either A win, or B win, or both win. (A but not B) wins whenever A wins but B doesn't. And so on.
If you are rational, your choices must, it is claimed, obey some simple rules. If so, it can be proven that you are choosing as if you had assigned subjective probabilities to bets. Savage's axioms for choosing among bets are ^{4}:
 If you choose A over B, you shall not choose B over A; and, if you do not choose A over B, and do not choose B over C, you shall not choose A over C.
 If you choose A over B, you shall also choose (A but not B) over (B but not A); and conversely, if you choose (A but not B) over (B but not A), you shall also choose A over B.
 You shall not choose A over (A or B).
 If you choose A over B, then you shall be able to specify a finite sequence of bets C_{1}, C_{2}, ..., C_{n}, such that it is guaranteed that one and only one of the C's will win, and such that, for any one of the C's, you shall still choose (A but not C) over (B or C).
Rule 1 is a coherence requirement on rational choice. It requires your preferences to be a total preorder. One objection to Cox's theorem is that levels of belief could be incomparable. This objection does not apply to rule 1 in this context because, as we discussed above, we're talking about choices of bets, not beliefs. Faced with choices, we choose. A rational person's choices must be non-circular.
Rule 2 is an independence requirement. It demands that when you compare two bets, you ignore the possibility that they could both win. In those circumstances you would be indifferent between the two anyway. The only possibilities that are relevant to the comparison are the ones where one bet wins and the other doesn't. So, you ought to compare A to B the same way you compare (A but not B) to (B but not A). Savage called this rule the Sure-thing principle.
Rule 3 is a dominance requirement on rational choice. It demands that you not choose something that cannot do better under any circumstance: whenever A would win, so would (A or B). Note that you might judge (B but not A) to be impossible a priori. So, you might legitimately express indifference between A and (A or B). We can only say it is never legitimate to choose A over (A or B).
Rule 4 is the most complicated. Luckily it's not going to be relevant to the Ellsberg paradox. Call it Mostly Harmless and forget this bit if you want.
What rule 4 says is that if you choose A over B, you must be willing to pay a premium for your choice. Now, we said there are only two outcomes in this context. Here, the premium is paid in terms of other bets. Rule 4 demands that you give a finite list of mutually exclusive and exhaustive events, and still be willing to choose A over B if we take any event on your list, cut it from A, and paste it to B. You can list as many events as you need to, but it must be a finite list.
For example, if you thought A was much more likely than B, you might pull out a die, and list the 6 possible outcomes of one roll. You would also be willing to choose (A but not a roll of 1) over (B or a roll of 1), (A but not a roll of 2) over (B or a roll of 2), and so on. If not, you might list the 36 possible outcomes of two consecutive rolls, and be willing to choose (A but not two rolls of 1) over (B or two rolls of 1), and so on. You could go to any finite number of rolls.
In fact rule 4 is pretty liberal, it doesn't even demand that every event on your list be equiprobable, or even independent of the A and B in question. It just demands that the events be mutually exclusive and exhaustive. If you are not willing to specify some such list of events, then you ought to express indifference between A and B.
If you obey rules 1-3, then that is sufficient for us to construct a sort of qualitative subjective probability out of your choices. It might not be quantitative: for one thing, there could be infinitesimally likely beliefs. Another thing is that there might be more than one way to assign numbers to beliefs. Rule 4 takes care of these things. If you obey rule 4 also, then we can assign a subjective probability to every possible bet, prove that you choose among bets as if you were using those probabilities, and also prove that it is the only probability assignment that matches your choices. And, on the flip side, if you are choosing among bets based on a subjective probability assignment, then it is easy to prove you obey rules 1-3, as well as rule 4 if the collection of bets is suitably infinite, for example if a fair die is available to bet on.
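The easy direction can be verified mechanically: a chooser who ranks bets by any additive probability assignment obeys rules 1-3. A sketch with a made-up three-state example (the states and probabilities here are mine, chosen for the urn; the check itself is exhaustive over all events):

```python
from fractions import Fraction as F
from itertools import combinations

states = ("red", "green", "blue")
prob = {"red": F(1, 3), "green": F(1, 3), "blue": F(1, 3)}  # made-up assignment

def P(event):
    """Additive probability of an event (a frozenset of states)."""
    return sum((prob[s] for s in event), F(0))

# All events in the boolean algebra: every subset of the states.
events = [frozenset(c) for n in range(len(states) + 1)
          for c in combinations(states, n)]

for A in events:
    for B in events:
        # Rule 2 (Sure-thing): A vs B is decided exactly as
        # (A but not B) vs (B but not A).
        assert (P(A) > P(B)) == (P(A - B) > P(B - A))
        # Rule 3 (dominance): never choose A over (A or B).
        assert not P(A) > P(A | B)
# Rule 1 (total preorder) is inherited from the ordering of the numbers P(A).
```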
Savage's theorem is impressive. The background assumptions involve just the concept of choice, and no numbers at all. There are only a few simple rules. Even rule 4 isn't really all that hard to understand and accept. A subjective probability distribution appears seemingly out of nowhere. In the full version, a utility function appears out of nowhere too. This theorem has been called the crowning glory of decision theory.
The Ellsberg paradox
Let's imagine there is an urn containing 90 balls. 30 of them are red, and the other 60 are either green or blue, in unknown proportion. We will draw a ball from the urn at random. Let us bet on the colour of this ball. As above, all bets have the same payout. To be specific, let's say you get pie if you win, and a boot to the head if you lose. The first question is: do you prefer to bet that the colour will be red, or that it will be green? The second question is: do you prefer to bet that it will be (red or blue), or that it will be (green or blue)?
The most common response^{5} is to choose red over green, and (green or blue) over (red or blue). And that's all there is to it. Paradox! ^{6}
             30    |        60
            Red    |  Green  |  Blue
    A       pie    |  BOOT   |  BOOT      A is preferred to B
    B       BOOT   |  pie    |  BOOT

    C       pie    |  BOOT   |  pie       D is preferred to C
    D       BOOT   |  pie    |  pie

Paradox!
If choices were based solely on an assignment of subjective probability, then because the three colours are mutually exclusive, P(red or blue) = P(red) + P(blue), and P(green or blue) = P(green) + P(blue). So, since P(red) > P(green), we should have P(red or blue) > P(green or blue), but instead we have P(red or blue) < P(green or blue).
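The same contradiction can be exhibited by brute force. A tiny sketch, scanning the 61 possible urn compositions; any additive assignment with P(red) = 1/3 fails the same way, since by additivity the second preference reduces to P(green) > P(red):

```python
from fractions import Fraction as F

p_red = F(30, 90)  # 30 red balls out of 90 is fixed
found = False
for k in range(61):                 # k green balls among the unknown 60
    p_green = F(k, 90)
    p_blue = F(60 - k, 90)
    prefers_red_to_green = p_red > p_green
    prefers_gb_to_rb = p_green + p_blue > p_red + p_blue
    if prefers_red_to_green and prefers_gb_to_rb:
        found = True                # no composition ever reaches here

# No additive probability assignment reproduces the modal Ellsberg choices.
assert not found
```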
Knowing Savage's representation theorem, we expect to get a formal contradiction from the 4 rules above plus the 2 expressed choices. Something has to give, so we'd like to know which rules are really involved. You can see that we are talking only about rule 2, the Sure-thing principle. It says we shall compare (red or blue) to (green or blue) the same way as we compare red to green.
This behaviour has been called ambiguity aversion. Now, perhaps this is just a cognitive bias. It wouldn't be the first time that people behave a certain way, but the analysis of their decisions shows a clear error. And indeed, when explained, some people do repent of their sins against Bayes. They change their choices to obey rule 2. But others don't. To quote Ellsberg:
...after rethinking all their 'offending' decisions in light of [Savage's] axioms, a number of people who are not only sophisticated but reasonable decide that they wish to persist in their choices. This includes people who previously felt a 'first order commitment' to the axioms, many of them surprised and some dismayed to find that they wished, in these situations, to violate the Sure-thing Principle. Since this group included L.J. Savage, when last tested by me (I have been reluctant to try him again), it seems to deserve respectful consideration.
I include myself in the group that thinks rule 2 is what should be dropped. But I don't have any dramatic (de)conversion story to tell. I was somewhat surprised, but not at all dismayed, and I can't say I felt much if any prior commitment to the rules. And as to whether I'm sophisticated or reasonable, well never mind! Even if there are a number of other people who are all of the above, and even if Savage himself may have been one of them for a while, I do realise that smart people can be Just Plain Wrong. So I'd better have something more to say for myself.
Well, red obviously has a probability of 1/3. Our best guess is to apply the principle of indifference and assign probability 1/3 to green and 1/3 to blue. But our best guess is not necessarily a good guess. The probabilities we assign to red, and to (green or blue), are objective. We're guessing the probability of green, and of (red or blue). It seems wise to take this difference into account when choosing what to bet on, doesn't it? And surely it will be all the more wise when dealing with real-life, non-symmetrical situations where we can't even appeal to the principle of indifference.
Or maybe I'm just some fool talking jibba jabba. Against this sort of talk, the LW post on the Allais paradox presents a version of Howard Raiffa's dynamic inconsistency argument. It makes no reference to internal thought processes; it is a purely external argument about the decisions themselves. As stated in that post, "There is always a price to pay for leaving the Bayesian Way." ^{7} This is expanded upon in an earlier post:
Sometimes you must seek an approximation; often, indeed. This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atombyatom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation  and fails to the extent that it departs.
Bayesianism's coherence and uniqueness proofs cut both ways ... anything that is not Bayesian must fail one of the coherency tests. This, in turn, opens you to punishments like Dutch-booking (accepting combinations of bets that are sure losses, or rejecting combinations of bets that are sure gains).
Now even if you believe this about the Allais paradox, I've argued that this doesn't really have much to do with Bayesianism one way or the other. The Ellsberg paradox is what actually strays from the Path. So, does God also punish ambiguity aversion?
Tune in next time^{8}, when I present a two-outcome decision method that obeys rules 1, 3, and 4, and even a weaker form of rule 2. But it exhibits ambiguity aversion, in gross violation of the original rule 2, so that it's not even approximately Bayesian. I will try to present it in a way that advocates for its internal cognitive merit. But the main thing ^{9} is that, externally, it is dynamically consistent. We do not get booked, by the Dutch or any other nationality.
Notes
 Ellsberg's original paper is: Risk, ambiguity, and the Savage axioms, Quarterly Journal of Economics 75 (1961) pp 643-669
 Some discussion followed, in which I did rather poorly. Actually I had to admit defeat. Twice. But, as they say: fool me once, shame on me; fool me twice, won't get fooled again!
 Savage presents his theorem in his book: The Foundations of Statistics, Wiley, New York, 1954.
 To compare to Savage's setup: for the two-outcome case, we deal directly with "actions" or equivalently "events", here called "bets". We can dispense with "states"; in particular we don't have to demand that the collection of bets be countably complete, or even a powerset algebra of states, just that it be some boolean algebra. Savage's axioms of course have a descriptive interpretation, but it is their normativity that is at issue here, so I state them as "you shall". Rules 1-3 are his P1-P3, and 4 is P6. P4 and P7 are irrelevant in the two-outcome case. P5 is included in the background assumption that you would choose to win. I do not call this normative, because the payoff wasn't specified.
 Ellsberg originally proposed this just as a thought experiment, and canvassed various victims for their thoughts under what he called "absolutely non-experimental conditions". He used $100 and $0 instead of pie and a boot to the head. Which is dull of course, but it shouldn't make a difference^{10}. The experiment has since been repeated under more experimental conditions. The experimenters also invariably opt for the more boring cash payouts.
 Some people will say this isn't "really" a paradox. Meh.
 Actually, I inserted "to pay". It wasn't in the original post. But it should have been.
 Sneak preview
 As a great decision theorist once said, "Stupid is as stupid does."
 ...or should it? Savage's rule P4 demands that it shall not. And the method I have in mind obeys this rule. But it turns out this is another rule that God won't enforce. And that's yet another post, if I get to it at all.
Poker with Lennier
In J. Michael Straczynski's science fiction TV show Babylon 5, there's a character named Lennier. He's pretty Spock-like: he's a long-lived alien who avoids displaying emotion and feels superior to humans in intellect and wisdom. He's sworn to always speak the truth. In one episode, he and another character, the corrupt and rakish Ambassador Mollari, are chatting. Mollari is bored. But then Lennier mentions that he's spent decades studying probability. Mollari perks up, and offers to introduce him to this game the humans call poker.
Revisiting the Anthropic Trilemma II: axioms and assumptions
tl;dr: I present four axioms for anthropic reasoning under copying/deleting/merging, and show that these result in a unique way of doing it: averaging non-indexical utility across copies, adding indexical utility, and having all copies being mutually altruistic.
Some time ago, Eliezer constructed an anthropic trilemma, where standard theories of anthropic reasoning seemed to come into conflict with subjective anticipation. rwallace subsequently argued that subjective anticipation was not ontologically fundamental, so we should not expect it to work out of the narrow confines of everyday experience, and Wei illustrated some of the difficulties inherent in "copydeletemerge" types of reasoning.
Wei also made the point that UDT shifts the difficulty in anthropic reasoning away from probability and onto the utility function, and ata argued that neither the probabilities nor the utility function are fundamental, that it was the decisions that resulted from them that were important  after all, if two theories give the same behaviour in all cases, what grounds do we have for distinguishing them? I then noted that this argument could be extended to subjective anticipation: instead of talking about feelings of subjective anticipation, we could replace it by questions such as "would I give up a chocolate bar now for one of my copies to have two in these circumstances?"
I then made a post where I applied my current intuitions to the anthropic trilemma, and showed how this results in complete nonsense, despite the fact that I used a bona fide utility function. What we need are some sensible criteria by which to divide utility and probability between copies, and this post is an attempt to figure that out. The approach is similar to expected utility, where a quadruped of natural axioms forced all decision processes to have a single format.
The assumptions are:
 No intrinsic value in the number of copies
 No preference reversals
 All copies make the same personal indexical decisions
 No special status to any copy.
Dutch Books and Decision Theory: An Introduction to a Long Conversation
For a community that endorses Bayesian epistemology we have had surprisingly few discussions about the most famous Bayesian contribution to epistemology: the Dutch Book arguments. In this post I present the arguments, but it is far from clear yet what the right way to interpret them is or even if they prove what they set out to. The Dutch Book arguments attempt to justify the Bayesian approach to science and belief; I will also suggest that any successful Dutch Book defense of Bayesianism cannot be disentangled from decision theory. But mostly this post is to introduce people to the argument and to get people thinking about a solution. The literature is scant enough that it is plausible people here could actually make genuine progress, especially since the problem is related to decision theory.^{1}
Bayesianism fits together. Like a well-tailored jacket it feels comfortable and looks good. It's an appealing, functional aesthetic for those with cultivated epistemic taste. But sleekness is not a rigorous justification, and so we should ask: why must the rational agent adopt the axioms of probability as conditions for her degrees of belief? Further, why should agents accept the principle of conditionalization as a rule of inference? These are the questions the Dutch Book arguments try to answer.
The arguments begin with an assumption about the connection between degrees of belief and willingness to wager. An agent with degree of belief b in hypothesis h is assumed to be willing to buy a unit wager on h at any price up to and including $b, and to sell a unit wager on h at any price down to and including $b. For example, if my degree of belief that I can drink ten eggnogs without passing out is .3, I am willing to bet $0.30 on the proposition that I can drink the nog without passing out when the stakes of the bet are $1. Call this the Will-to-wager Assumption. As we will see, it is problematic.
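A sketch of how the sure-loss construction goes, with made-up numbers (the prices and function names are mine): take an agent whose degrees of belief in two disjoint hypotheses are each 0.3 while its belief in their disjunction is 0.7. Under the Will-to-wager Assumption, a bookie can trade at those prices and lock in a profit no matter what happens.

```python
# Made-up non-additive beliefs: A and B are disjoint, yet
# b(A or B) = 0.7 > b(A) + b(B) = 0.6.
b_A, b_B, b_AorB = 0.3, 0.3, 0.7

def agent_net(a_wins, b_wins):
    """Agent's net outcome after the bookie sells it a unit wager on
    (A or B) at b_AorB and buys unit wagers on A and on B at b_A, b_B."""
    payout_in = 1.0 if (a_wins or b_wins) else 0.0              # agent collects
    payout_out = (1.0 if a_wins else 0.0) + (1.0 if b_wins else 0.0)  # agent pays
    return -b_AorB + payout_in + b_A + b_B - payout_out

# A and B are disjoint, so (True, True) cannot occur.  In every possible
# case the agent's wager payouts cancel and it is out the price gap.
for outcome in [(False, False), (True, False), (False, True)]:
    assert agent_net(*outcome) < 0
```

The agent loses the 0.1 price gap with certainty; a mirror-image book works if the beliefs are subadditive instead.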