The Modesty Argument states that when two or more human beings have common knowledge that they disagree about a question of simple fact, they should each adjust their probability estimates in the direction of the others'.  (For example, they might adopt the common mean of their probability distributions.  If we use the logarithmic scoring rule, then the score of the average of a set of probability distributions is better than the average of the scores of the individual distributions, by Jensen's inequality.)
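Stated compactly, with notation not in the original: write p_1, ..., p_n for the individual probability distributions and x for the outcome that actually occurs. Since the logarithm is concave, Jensen's inequality gives

$$
\log\!\left(\frac{1}{n}\sum_{i=1}^{n} p_i(x)\right) \;\ge\; \frac{1}{n}\sum_{i=1}^{n} \log p_i(x),
$$

so the log score of the averaged distribution is at least the average of the individual log scores.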

Put more simply:  When you disagree with someone, even after talking over your reasons, the Modesty Argument claims that you should each adjust your probability estimates toward the other's, and keep doing this until you agree.  The Modesty Argument is inspired by Aumann's Agreement Theorem, a very famous and oft-generalized result which shows that genuine Bayesians literally cannot agree to disagree; if genuine Bayesians have common knowledge of their individual probability estimates, they must all have the same probability estimate.  ("Common knowledge" means that I know you disagree, you know I know you disagree, etc.)

I've always been suspicious of the Modesty Argument.  It's been a long-running debate between Robin Hanson and me.

Robin seems to endorse the Modesty Argument in papers such as Are Disagreements Honest?  I, on the other hand, have held that it can be rational for an individual to not adjust their own probability estimate in the direction of someone else who disagrees with them.

How can I maintain this position in the face of Aumann's Agreement Theorem, which proves that genuine Bayesians cannot have common knowledge of a dispute about probability estimates?  If genuine Bayesians will always agree with each other once they've exchanged probability estimates, shouldn't we Bayesian wannabes do the same?

To explain my reply, I begin with a metaphor:  If I have five different accurate maps of a city, they will all be consistent with each other.  Some philosophers, inspired by this, have held that "rationality" consists of having beliefs that are consistent among themselves.  But, although accuracy necessarily implies consistency, consistency does not necessarily imply accuracy.  If I sit in my living room with the curtains drawn, and make up five maps that are consistent with each other, but I don't actually walk around the city and make lines on paper that correspond to what I see, then my maps will be consistent but not accurate.  When genuine Bayesians agree in their probability estimates, it's not because they're trying to be consistent - Aumann's Agreement Theorem doesn't invoke any explicit drive on the Bayesians' part to be consistent.  That's what makes AAT surprising!  Bayesians only try to be accurate; in the course of seeking to be accurate, they end up consistent.  The Modesty Argument, that we can end up accurate in the course of seeking to be consistent, does not necessarily follow.

How can I maintain my position in the face of my admission that disputants will always improve their average score if they average together their individual probability distributions?

Suppose a creationist comes to me and offers:  "You believe that natural selection is true, and I believe that it is false.  Let us both agree to assign 50% probability to the proposition."  And suppose that by drugs or hypnosis it was actually possible for both of us to contract to adjust our probability estimates in this way.  This unquestionably improves our combined log-score, and our combined squared error.  If as a matter of altruism, I value the creationist's accuracy as much as my own - if my loss function is symmetrical around the two of us - then I should agree.  But what if I'm trying to maximize only my own individual accuracy?  In the former case, the question is absolutely clear, and in the latter case it is not absolutely clear, to me at least, which opens up the possibility that they are different questions.
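A minimal numeric sketch of that point (the 0.999 and 0.001 figures below are hypothetical stand-ins for the two sides' estimates, not numbers from the text):

```python
import math

# Hypothetical estimates that natural selection is true (stand-in numbers).
p_mine, p_creationist = 0.999, 0.001
p_averaged = (p_mine + p_creationist) / 2         # the proposed 50/50 compromise

# Suppose natural selection is in fact true.
log_score = lambda p: math.log(p)                 # higher (less negative) is better
sq_error = lambda p: (1.0 - p) ** 2               # lower is better

print((log_score(p_mine) + log_score(p_creationist)) / 2)  # about -3.45: combined log-score before
print(log_score(p_averaged))                                # about -0.69: combined log-score after
print((sq_error(p_mine) + sq_error(p_creationist)) / 2)     # about 0.499: combined squared error before
print(sq_error(p_averaged))                                  # 0.25: combined squared error after
```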

If I agree to a contract with the creationist in which we both use drugs or hypnosis to adjust our probability estimates, because I know that the group estimate must be improved thereby, I regard that as pursuing the goal of social altruism.  It doesn't make creationism actually true, and it doesn't mean that I think creationism is true when I agree to the contract.  If I thought creationism was 50% probable, I wouldn't need to sign a contract - I would have already updated my beliefs!  It is tempting but false to regard adopting someone else's beliefs as a favor to them, and rationality as a matter of fairness, of equal compromise.  Therefore it is written:  "Do not believe you do others a favor if you accept their arguments; the favor is to you."  Am I really doing myself a favor by agreeing with the creationist to take the average of our probability distributions?

I regard rationality in its purest form as an individual thing - not because rationalists have only selfish interests, but because of the form of the only admissible question:  "Is is actually true?"  Other considerations, such as the collective accuracy of a group that includes yourself, may be legitimate goals, and an important part of human existence - but they differ from that single pure question.

In Aumann's Agreement Theorem, all the individual Bayesians are trying to be accurate as individuals.  If their explicit goal was to maximize group accuracy, AAT would not be surprising.  So the improvement of group score is not a knockdown argument as to what an individual should do if they are trying purely to maximize their own accuracy, and it is that last quest which I identify as rationality.  It is written:  "Every step of your reasoning must cut through to the correct answer in the same movement.  More than anything, you must think of carrying your map through to reflecting the territory.  If you fail to achieve a correct answer, it is futile to protest that you acted with propriety."  From the standpoint of social altruism, someone may wish to be Modest, and enter a drug-or-hypnosis-enforced contract of Modesty, even if they fail to achieve a correct answer thereby.

The central argument for Modesty proposes something like a Rawlsian veil of ignorance - how can you know which of you is the honest truthseeker, and which the stubborn self-deceiver?  The creationist believes that he is the sane one and you are the fool.  Doesn't this make the situation symmetric around the two of you?  If you average your estimates together, one of you must gain, and one of you must lose, since the shifts are in opposite directions; but by Jensen's inequality it is a positive-sum game.  And since, by something like a Rawlsian veil of ignorance, you don't know which of you is really the fool, you ought to take the gamble.  This argues that the socially altruistic move is also always the individually rational move.

And there's also the obvious reply:  "But I know perfectly well who the fool is.  It's the other guy.  It doesn't matter that he says the same thing - he's still the fool."

This reply sounds bald and unconvincing when you consider it abstractly.  But if you actually face a creationist, then it certainly feels like the correct answer - you're right, he's wrong, and you have valid evidence to know that, even if the creationist can recite exactly the same claim in front of a TV audience.

Robin Hanson sides with symmetry - this is clearest in his paper Uncommon Priors Require Origin Disputes - and therefore endorses the Modesty Argument.  (Though I haven't seen him analyze the particular case of the creationist.)

I respond:  Those who dream do not know they dream; but when you wake you know you are awake.  Dreaming, you may think you are awake.  You may even be convinced of it.  But right now, when you really are awake, there isn't any doubt in your mind - nor should there be.  If you, persuaded by the clever argument, decided to start doubting right now that you're really awake, then your Bayesian score would go down and you'd become that much less accurate.  If you seriously tried to make yourself doubt that you were awake - in the sense of wondering if you might be in the midst of an ordinary human REM cycle - then you would probably do so because you wished to appear to yourself as rational, or because it was how you conceived of "rationality" as a matter of moral duty.  Because you wanted to act with propriety.  Not because you felt genuinely curious as to whether you were awake or asleep.  Not because you felt you might really and truly be asleep.  But because you didn't have an answer to the clever argument, just an (ahem) incommunicable insight that you were awake.

Russell Wallace put it thusly:  "That we can postulate a mind of sufficiently low (dreaming) or distorted (insane) consciousness as to genuinely not know whether it's Russell or Napoleon doesn't mean I (the entity currently thinking these thoughts) could have been Napoleon, any more than the number 3 could have been the number 7. If you doubt this, consider the extreme case: a rock doesn't know whether it's me or a rock. That doesn't mean I could have been a rock."

There are other problems I see with the Modesty Argument, pragmatic matters of human rationality - if a fallible human tries to follow the Modesty Argument in practice, does this improve or disimprove personal rationality?  To me it seems that the adherents of the Modesty Argument tend to profess Modesty but not actually practice it.

For example, let's say you're a scientist with a controversial belief - like the Modesty Argument itself, which is hardly a matter of common accord - and you spend some substantial amount of time and effort trying to prove, argue, examine, and generally forward this belief.  Then one day you encounter the Modesty Argument, and it occurs to you that you should adjust your belief toward the modal belief of the scientific field.  But then you'd have to give up your cherished hypothesis.  So you do the obvious thing - I've seen at least two people do this on two different occasions - and say:  "Pursuing my personal hypothesis has a net expected utility to Science.  Even if I don't really believe that my theory is correct, I can still pursue it because of the categorical imperative: Science as a whole will be better off if scientists go on pursuing their own hypotheses."  And then they continue exactly as before.

I am skeptical to say the least.  Integrating the Modesty Argument as new evidence ought to produce a large effect on someone's life and plans.  If it's being really integrated, that is, rather than flushed down a black hole.  Your personal anticipation of success, the bright emotion with which you anticipate the confirmation of your theory, should diminish by literally orders of magnitude after accepting the Modesty Argument.  The reason people buy lottery tickets is that the bright anticipation of winning ten million dollars, the dancing visions of speedboats and mansions, is not sufficiently diminished - as a strength of emotion - by the probability factor, the odds of a hundred million to one.  The ticket buyer may even profess that the odds are a hundred million to one, but they don't anticipate it properly - they haven't integrated the mere verbal phrase "hundred million to one" on an emotional level.

So, when a scientist integrates the Modesty Argument as new evidence, should the resulting nearly total loss of hope have no effect on real-world plans originally formed in blessed ignorance and joyous anticipation of triumph?  Especially when you consider that the scientist knew about the social utility to start with, while making the original plans?  I think that's around as plausible as maintaining your exact original investment profile after the expected returns on some stocks change by a factor of a hundred.  What's actually happening, one naturally suspects, is that the scientist finds that the Modesty Argument has uncomfortable implications; so they reach for an excuse, and invent on-the-fly the argument from social utility as a way of exactly cancelling out the Modesty Argument and preserving all their original plans.

But of course if I say that this is an argument against the Modesty Argument, that is pure ad hominem tu quoque.  If its adherents fail to use the Modesty Argument properly, that does not imply it has any less force as logic.

Rather than go into more detail on the manifold ramifications of the Modesty Argument, I'm going to close with the thought experiment that initially convinced me of the falsity of the Modesty Argument.  In the beginning it seemed to me reasonable that if feelings of 99% certainty were associated with a 70% frequency of true statements, on average across the global population, then the state of 99% certainty was like a "pointer" to 70% probability.  But at one point I thought:  "What should an (AI) superintelligence say in the same situation?  Should it treat its 99% probability estimates as 70% probability estimates because so many human beings make the same mistake?"  In particular, it occurred to me that, on the day the first true superintelligence was born, it would be undeniably true that - across the whole of Earth's history - the enormously vast majority of entities who had believed themselves superintelligent would be wrong.  The majority of the referents of the pointer "I am a superintelligence" would be schizophrenics who believed they were God.

A superintelligence doesn't just believe the bald statement that it is a superintelligence - it presumably possesses a very detailed, very accurate self-model of its own cognitive systems, tracks in detail its own calibration, and so on.  But if you tell this to a mental patient, the mental patient can immediately respond:  "Ah, but I too possess a very detailed, very accurate self-model!"  The mental patient may even come to sincerely believe this, in the moment of the reply.  Does that mean the superintelligence should wonder if it is a mental patient?  This is the opposite extreme of Russell Wallace asking if a rock could have been you, since it doesn't know if it's you or the rock.

One obvious reply is that human beings and superintelligences occupy different classes - we do not have the same ur-priors, or we are not part of the same anthropic reference class; some sharp distinction renders it impossible to group together superintelligences and schizophrenics in probability arguments.  But one would then like to know exactly what this "sharp distinction" is, and how it is justified relative to the Modesty Argument.  Can an evolutionist and a creationist also occupy different reference classes?  It sounds astoundingly arrogant; but when I consider the actual, pragmatic situation, it seems to me that this is genuinely the case.

Or here's a more recent example - one that inspired me to write today's blog post, in fact.  It's the true story of a customer struggling through five levels of Verizon customer support, all the way up to floor manager, in an ultimately futile quest to find someone who could understand the difference between .002 dollars per kilobyte and .002 cents per kilobyte.  Audio [27 minutes], Transcript.  It has to be heard to be believed.  Sample of conversation:  "Do you recognize that there's a difference between point zero zero two dollars and point zero zero two cents?"  "No."
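To spell out the unit mix-up in a minimal sketch (the usage figure below is hypothetical, not taken from the call):

```python
# The dollars-vs-cents mix-up; the usage figure is a made-up illustration.
usage_kb = 35_000                      # kilobytes of data used (hypothetical)
rate_as_quoted_cents = 0.002           # "0.002 cents per kilobyte"
rate_as_billed_dollars = 0.002         # "0.002 dollars per kilobyte"

quoted_total_dollars = usage_kb * rate_as_quoted_cents / 100   # convert cents to dollars
billed_total_dollars = usage_kb * rate_as_billed_dollars

print(f"As quoted: ${quoted_total_dollars:.2f}")   # $0.70
print(f"As billed: ${billed_total_dollars:.2f}")   # $70.00 -- off by a factor of 100
```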

The key phrase that caught my attention and inspired me to write today's blog post is from the floor manager:  "You already talked to a few different people here, and they've all explained to you that you're being billed .002 cents, and if you take it and put it on your calculator... we take the .002 as everybody has told you that you've called in and spoken to, and as our system bills accordingly, is correct."

Should George - the customer - have started doubting his arithmetic, because five levels of Verizon customer support, some of whom cited multiple years of experience, told him he was wrong?  Should he have adjusted his probability estimate in their direction?  A straightforward extension of Aumann's Agreement Theorem to impossible possible worlds, that is, uncertainty about the results of computations, proves that, had all parties been genuine Bayesians with common knowledge of each other's estimates, they would have had the same estimate.  Jensen's inequality proves even more straightforwardly that, if George and the five levels of tech support had averaged together their probability estimates, they would have improved their average log score.  If such arguments fail in this case, why do they succeed in other cases?  And if you claim the Modesty Argument carries in this case, are you really telling me that if George had wanted only to find the truth for himself, he would have been wise to adjust his estimate in Verizon's direction?  I know this is an argument from personal incredulity, but I think it's a good one.

On the whole, and in practice, it seems to me like Modesty is sometimes a good idea, and sometimes not.  I exercise my individual discretion and judgment to decide, even knowing that I might be biased or self-favoring in doing so, because the alternative of being Modest in every case seems to me much worse.

But the question also seems to have a definite anthropic flavor.  Anthropic probabilities still confuse me; I've read arguments but I have been unable to resolve them to my own satisfaction.  Therefore, I confess, I am not able to give a full account of how the Modesty Argument is resolved.

Modest, aren't I?

40 comments

I can give a counter-example to the Modesty Argument as stated: "When you disagree with someone, the Modesty Argument claims that you should both adjust your probability estimates toward the other, and keep doing this until you agree."

Suppose two coins are flipped out of sight, and you and another person are trying to estimate the probability that both are heads. You are told what the first coin is, and the other person is told what the second coin is. You both report your observations to each other.

Let's suppose that they did in fact fall both heads. You are told that the first coin is heads, and you report the probability of both heads as 1/2. The other person is told that the second coin is heads, and he also reports the probability as 1/2. However, you can now both conclude that the probability is 1, because if either of you had been told that the coin was tails, he would have reported a probability of zero. So in this case, both of you update your information away from the estimate provided by the other. One can construct more complicated thought experiments where estimates jump around in all kinds of crazy ways, I think.
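A minimal sketch of that computation, assuming two fair coins and a uniform prior over the four outcomes:

```python
from itertools import product
from fractions import Fraction

# Two fair coins, uniform prior over the four equally likely outcomes.
def p_both_heads(observed):
    """P(both heads) given partial observations, e.g. {'coin1': 'H'}."""
    outcomes = [dict(zip(('coin1', 'coin2'), flips)) for flips in product('HT', repeat=2)]
    consistent = [o for o in outcomes if all(o[c] == v for c, v in observed.items())]
    both_heads = [o for o in consistent if o['coin1'] == 'H' and o['coin2'] == 'H']
    return Fraction(len(both_heads), len(consistent))

print(p_both_heads({'coin1': 'H'}))                 # 1/2 -- your report
print(p_both_heads({'coin2': 'H'}))                 # 1/2 -- the other person's report
print(p_both_heads({'coin1': 'H', 'coin2': 'H'}))   # 1   -- after pooling both observations
```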

I'm not sure whether this materially affects your conclusions about the Modesty Argument, but you might want to try to restate it to more clearly describe what you are disagreeing with.

I often recognize this as what I've called a "framing error".

It's the same problem with believing Fox News' claim of being "fair and balanced". Just because there are at least two sides to every story doesn't mean each side deserves equal consideration.

Consider the question of whether one can support the troops and oppose the war.

http://n8o.r30.net/dokuwiki/doku.php/blog:supportourwar

The very posing of the question, which came invariably from Fox News in 2003, is to imply that there is some controversy over the answer. But that's all but disappeared now.

I don't see how there's a problem with saying that the Modesty Principle is only ever a good one to follow when conversing with someone you believe to be at least as rational as yourself.  I guess the extreme case to illustrate this would be applying the principle to a conversation with a chatbot.

Hal, I changed the lead to say "When two or more human beings have common knowledge that they disagree", which covers your counterexample.

pdf23ds, the problem is how to decide whether the person you are conversing with is at least as rational as yourself.  What if you disagree about that?  Then you have a disagreement about a new variable, your respective degrees of rationality.  Do you both believe yourself to be more meta-rational than the other?  And so on.  See Hanson and Cowen's "Are Disagreements Honest?", http://hanson.gmu.edu/deceive.pdf.

Eliezer, "when two or more human beings have common knowledge that they disagree about a question of simple fact" is problematic because this can never happen with common-prior Bayesians. Such people can never have common knowledge of a disagreement, although they can disagree on the way to common knowledge of their opininons. Maybe you could say that they learn that they initially disagree about the question.

However I don't think that addition fixes the problem. Suppose the first coin in my example is known by all to be biased and fall heads with 60% probability. Then the person who knows the first coin initially estimates the probability of HH as 50%, while the person who knows the second coin initially estimates it as 60%. So they initially disagree, and both know this. However they will both update their estimates upward to 100% after hearing each other, which means that they do not adjust their probability estimates towards the other.
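The arithmetic for this biased variant, as a minimal sketch (following the numbers in the comment, and assuming the second coin is still fair):

```python
# Coin 1 lands heads with probability 0.6; coin 2 is assumed fair.
p1_estimate = 1.0 * 0.5   # knows coin 1 came up heads; coin 2 still unknown -> 0.5
p2_estimate = 0.6 * 1.0   # knows coin 2 came up heads; coin 1 still unknown -> 0.6
pooled = 1.0 * 1.0        # after exchanging observations, both coins are known heads -> 1.0
print(p1_estimate, p2_estimate, pooled)   # 0.5 0.6 1.0 -- both parties move up, not toward each other
```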

I am having trouble relating the Modesty Argument to Aumann's result. It's not clear to me that anyone will defend the Modesty Argument as stated, at least not based on Aumann and the related literature.

Hal, that's why I specified human beings. Human beings often find themselves with common knowledge that they disagree about a question of fact. And indeed, genuine Bayesians would not find themselves in such a pickle to begin with, which is why I question that we can clean up the mess by imitating the surface features of Bayesians (mutual agreement) while departing from their causal mechanisms (instituting an explicit internal drive to agreement, which is not present in ideal Bayesians).

The reason my addition fixes the problem is that in your scenario, the disagreement only holds while the two observers do not have common knowledge of each other's probability estimates - this can easily happen to Bayesians; all they need to do is observe a piece of evidence they haven't had the opportunity to communicate.  So they disagree at first, but only while they don't have common knowledge.

Eliezer - I see better what you mean now. However I am still a bit confused because "common knowledge" usually refers to a state that is the end point of a computational or reasoning process, so it is inconsistent to speak of what participants "should" do after that point. If A and B's disagreement is supposedly common knowledge, but A may still choose to change his estimate to more closely match B's, then his estimate really isn't common knowledge at all because B is not sure if he has changed his mind or not.

When you say, "they should each adjust their probability estimates in the direction of the others'", do you mean that they should have done that and it would have been better for them, instead of letting themselves get into the state of common knowledge of their disagreement?

Sorry to belabor this but since you are critiquing this Modesty Argument it seems worthwhile to clarify exactly what it says.

Hal, I'm not really the best person to explain the Modesty Argument because I don't believe in it! You should ask a theory's advocates, not its detractors, to explain it. You, yourself, have advocated that people should agree to agree - how do you think that people should go about it? If your preferred procedure differs from the Modesty Argument as I've presented it, it probably means that I got it wrong.

What I mean by the Modesty Argument is: You sit down at a table with someone else who disagrees with you, you each present your first-order arguments about the immediate issue - on the object level, as it were - and then you discover that you still seem to have a disagreement. Then at this point (I consider the Modesty Argument to say), you should consider as evidence the second-order, meta-level fact that the other person isn't persuaded, and you should take that evidence into account by adjusting your estimate in his direction. And he should do likewise. Keep doing that until you agree.

As to how this fits into Aumann's original theorem - I'm the wrong person to ask about that, because I don't think it does fit! But in terms of real-world procedure, I think that's what Modesty advocates are advocating, more or less. When we're critiquing Inwagen for failing to agree with Lewis, this is more or less the sort of thing we think he ought to do instead - right?

There are times when I'm happy enough to follow Modest procedure, but the Verizon case, and the creationist case, aren't on my list. I exercise my individual discretion, and judge based on particular cases. I feel free to not regard a creationist's beliefs as evidence, despite the apparent symmetry of my belief that he's the fool and his belief that I'm the fool. Thus I don't concede that the Modesty Argument holds in general, while Robin Hanson seems (in "Are Disagreements Honest?") to hold that it should be universal.

I doubt that anyone is advocating the version of the Modesty Argument that you're attacking.  People who advocate something resembling that seem to believe we should only respond that way if we have reason to assume both sides are making honest attempts to be Bayesians.  I don't know of anyone who suggests we ignore evidence concerning the degree to which a person is an honest Bayesian.  See for example the qualification Robin makes in the last paragraph of this: http://lists.extropy.org/pipermail/extropy-chat/2005-March/014620.html. Or from page 28 of http://hanson.gmu.edu/deceive.pdf "seek observable signs that indicate when people are self-deceived about their meta-rationality on a particular topic. You might then try to disagree only with those who display such signs more strongly than you do."  There seems to be enough agreement on some basic principles of rationality that we can conclude there are non-arbitrary ways of estimating who's more rational that are available to those who want to use them.

Okay, so what are Robin and Hal advocating, procedurally speaking? Let's hear it from them. I defined the Modesty Argument because I had to say what I thought I was arguing against, but, as said, I'm not an advocate and therefore I'm not the first person to ask. Where do you think Inwagen went wrong in disagreeing with Lewis - what choice did he make that he should not have made? What should he have done instead? The procedure I laid out looks to me like the obvious one - it's the one I'd follow with a perceived equal. It's in applying the Modest procedure to disputes about rationality or meta-rationality that I'm likely to start wondering if the other guy is in the same reference class. But if I've invented a strawman, I'm willing to hear about it - just tell me the non-strawman version.

It is certainly true that one should not superficially try to replicate Aumann's theorem, but should try to replicate the process of the bayesians, namely, to model the other agent and see how the other agent could disagree. Surely this is how we disagree with creationists and customer service agents. Even if they are far from bayesian, we can extract information from their behavior, until we can model them.

But modeling is also what RH was advocating for the philosophers. Inwagen accepts Lewis as a peer, perhaps a superior. Moreover, he accepts him as rationally integrating Inwagen's arguments. This is exactly where Aumann's argument applies. In fact, Inwagen does model himself and Lewis and claims (I've only read the quoted excerpt) that their disagreement must be due to incommunicable insights. Although Aumann's framework about modelling the world seems incompatible with the idea of incommunicable insight, I think it is right to worry about symmetry. Possibly this leads us into difficult anthropic territory. But EY is right that we should not respond by simply changing our opinions, but we should try to describe this incommunicable insight and see how it has infected our beliefs.

Anthropic arguments are difficult, but I do not think they are relevant in any of the examples, except maybe the initial superintelligence. In that situation, I would argue in a way that may be your argument about dreaming: if something has a false belief about having a detailed model of the world, there's not much it can do. You might as well say it is dreaming. (I'm not talking about accuracy, but precision and moreover persistence.)

And you seem to say that if it is dreaming it doesn't count. When you claim that my bayesian score goes up if I insist that I'm awake whenever I feel I'm awake, you seem to be asserting that my assertions in my dreams don't count. This seems to be a claim about persistence of identity. Of course, my actions in dreams seem to have less import than my actions when awake, so I should care less about dream error. But I should not discount it entirely.

I think Douglas has described the situation well. The heart of the Aumann reasoning as I understand it is the mirroring effect, where your disputant's refusal to agree must be interpreted in the context of his understanding the full significance of that refusal, which must be understood to carry weight due to your steadfastness, which itself gains meaning because of his persistence, and so on indefinitely. It is this infinite regress which is captured in the phrase "common knowledge" and which is where I think our intuition tends to go awry and leads us to underestimate the true significance of agreeing to disagree.

Without this mirroring, the Bayesian impact of another person's disagreement is less and we no longer obtain Aumann's strong result. In that case, as Douglas described we should try to model the information possessed by the other person and compare it with our own information, doing our best to be objective. This may indeed benefit from a degree of modesty but it's hard to draw a firm rule.

This is of course a topic I have a lot to say about, but alas I left on a trip just before Eliezer and Hal made their posts, and I'm going to be pretty busy for the next few days. Just wanted all to know I will get around to responding when I get time.

So what's the proper prior probability that the person I'm arguing with is a Bayesian? That would seem to be a necessary piece of information for a Bayesian to figure out how much to follow the Modesty Argument.

(I finally have time to reply; sorry for the long delay.)

Eliezer, one can reasonably criticize a belief without needing to give an exact algorithm for always and exactly computing the best possible belief in such situations. Imagine you said P(A) = .3 and P(notA) = .9, and I criticized you for not satisfying P(A)+P(notA) = 1. If you were to demand that I tell you what to believe instead I might suggest you renormalize, and assign P(A) = 3/12 and P(notA) = 9/12. To that you might reply that those are most certainly not always the best exact numbers to assign. You know of examples where the right thing to do was clearly to assign P(A) = .3 and P(notA) = .7. But surely this would not be a reasonable response to my criticism. Similarly, I can criticize disagreement without offering an exact algorithm which always computes the best way to resolve the disagreement. I would suggest you both moving to a middle belief in the same spirit I might suggest renormalizing when things don't sum to one, as a demonstration that apparently reasonable options are available.

Eliezer, you describe cases of dreamers vs folks awake, of super-intelligences vs schizophrenics who think they are God, of creationists vs. their opponents, and of a Verizon customer vs customer support, all as cases where it can be very reasonably obvious to one side that the other side is completely wrong. The question of course is what exactly identifies such cases, so that you can tell if you are in such a situation at any given moment.

Clearly, people having the mere impression that they are in such a situation is a very unreliable indicator. So if not such an impression, what exactly does justify each of these exemplars in thinking they are obviously right?

It seems to me that even if we grant the possibility of such cases, we must admit that people are far too quick to assume that they are in such cases.  So until we can find a reason to think we succumb to this problem less than others, we should try to invoke this explanation less often than we were initially inclined to.

We will often have independent information about relative rationality, and/or one model may predict the statements of the other, so that the conversation has minimal information value on the actual topic of disagreement.

  1. I'll skip dreamers for the moment.

  2. Schizophrenics may claim to possess a detailed self-model, but they can neither convincingly describe the details nor demonstrate feats of superintelligence to others. The AI can do so easily, and distinguish itself from the broader class of beings claiming superintelligence.

  3. Creationists show lower IQ, knowledge levels, and awareness of cognitive biases than their opponents, along with higher rates of other dubious beliefs (and different types of true knowledge are correlated). Further, creationist exceptions with high levels of such rationality-predictors overwhelmingly have been subjected to strong social pressures (childhood indoctrination, etc) and other non-truth-oriented influences. Explaining this difference is very difficult from a creationist perspective, but simple for opponents.

I'd also note the possibility of an 'incommunicable insight.' If creationists attribute conscious bad faith to their opponents, who do not reciprocate, then the opponents can have incommunicable knowledge of their own sincerity.

  4. The Verizon customer-service issue was baffling.  But relatively low-skill call center employees following scripts, afflicted with erroneous corporate information (with the weight of hierarchical authority behind it), can make mistakes and would be reluctant to cut the size of a customer's bill by 99%.  Further, some of the representatives seemed to understand before passing the buck.  Given the existence of mail-in-rebate companies that are paid on the basis of how many rebates they can manage to lose, George could reasonably suspect that the uniformly erroneous price quotes were an intentional moneymaking scheme.  Since Verizon STILL has not changed the quotes its representatives offer, there's now even more reason to think that's the case.

Combining this knowledge about Verizon with personal knowledge of higher-than-average mathematics and reasoning skills (from independent objective measures like SAT scores and math contests) George could have had ample reason not to apply the Modesty Argument and conclude that 0.002 cents is equal to $0.002, $0.00167, or $0.00101.

Carl, I'd class schizophrenics with parrots and chatbots; creatures so obviously broken to so many observers that self-interested bias is plausibly a minor factor in our beliefs about their rationality.  For creationists I want to instead say that I usually have to go with the rough middle of expert opinion, and that goes against creationists.  But there is the thorny issue of who exactly are the right experts for this topic.  For Verizon I'd say what we are seeing is subject to a selection effect; probably usually Verizon is right and the customers are wrong.

Robin,

I agree that the difficult issue is defining the set of experts (although weighting expert opinion to account for known biases is an alternative to drawing a binary distinction between appropriate and inappropriate experts). I would think that if one can select a subset of expert opinion that is representative of the broad class except for higher levels of a rationality-signal (such as IQ, or professed commitment in approximating a Bayesian reasoner), then one should prefer the subset, until sample size shrinks to the point where the marginal increase in random noise outweighs the marginal improvement in expert quality.

What is your own view on the appropriate expert class for evaluating creationism or theism? College graduates? Science PhDs? Members of elite scientific societies? Science Nobel winners? (Substitute other classes and subclasses, e.g. biological/geological expertise, as you wish.) Unless the more elite group is sufficiently unrepresentative (e.g. elite scientists are disproportionately from non-Christian family backgrounds, although that can be controlled for) in some other way, why stop short?

On Verizon, I agree that if all one knows is that one is in the class of 'people in a dispute with Verizon' one should expect to be in the wrong. However, one can have sufficient information to situate oneself in a subclass, e.g.: a person with 99th percentile mathematical ability applying well-confirmed arithmetic, debating mathematically less-adept representatives whose opinions are not independent (all relying on the same computer display), where concession by the reps would involve admitting a corporate mistake and setting a precedent to cut numerous bills by 99%. Isn't identifying the optimum tradeoff between context-sensitivity and vulnerability to self-serving bias a largely empirical question, dependent on the distribution of rationality and the predictive power of such signals?

Dr_Zen:

Since the modesty argument requires rational correspondents, creationists are counted out! What I mean is that a creationist does not disagree with me on a matter of simple fact, in that his or her reasons for disagreeing are not themselves based on simple facts, as my reasons for disagreeing with him or her are. This makes our disagreement asymmetric. And Aumann's theorem can't apply, because even if the creationist knows what I know, he or she can't agree; and vice versa. But I think our disagreement is honest, all the same.

In the Verizon case, George can apply the modesty argument and still come up with the conclusion that he is almost certainly right.

He needs to take into account two things: (1) what other people besides Verizon think about the distinction between .002 dollars and .002 cents, and (2) what is the likelihood that Verizon would admit the mistake even if they know there is one.

Admitting the mistake and refunding one customer might well have the consequence of having to refund tens of thousands of customers and losing millions of dollars. Even if that's the upstanding and right thing to do for the company, each individual customer support rep will fear that their admitting the facts will lead to them being fired. People whose jobs are at stake will do their best to bluff a belief that .002 == 0.00002, even if they don't actually believe that's the case.

This hypothesis is testable. Unless you're a trained liar, your voice should be measurably different when you're explaining the obvious fact that .002 != .00002 and when you're claiming that .002 == .00002 while your internal monologue is going "shit, shit, he's right, isn't he? We really fucked this one up, we're going to lose thousands, and they're going to blame me." We have audio, so someone trained in such analysis should be able to give us an answer.

Can a bayesian agent perfectly model arbitrary non-bayesian agents?

Given a bayesian model for someone else's non-bayesian decision system, won't a bayesian agent have a straightforward time of deciding which priors to update, and how?

Can a bayesian agent perfectly model arbitrary non-bayesian agents?

Sure, at least to the same extent that he or she could perfectly model any other bunch of atoms.

Unfortunately, the audio link is broken, unavailable in google cache, and not listed on the Way Back Machine. All my googling came to the same site, with the same dead link. YouTube only has ~3 minute clips. Does anyone else know where I can locate the whole debacle?

Greetings! I'm a relatively new reader, having spent a month or two working my way through the Sequences and following lots of links, and finally came across something interesting to me that no one else had yet commented on.

Eliezer wrote "Those who dream do not know they dream; but when you wake you know you are awake." No one picked out or disagreed with this statement.

This really surprised me. When I dream, if I bother to think about it I almost always know that I dream -- enough so that on the few occasions when I realize I was dreaming without knowing so, it's a surprising and memorable experience. (Though there may be selection bias here; I could have huge numbers of dreams where I don't know I'm dreaming, but I just don't remember them.)

I thought this was something that came with experience, maturity, and -- dare I say it? -- rationality. Now that I'm thinking about it in this context, I'm quite curious to hear whether this is true for most of the readership. I'm non-neurotypical in several ways; is this one of them?

Welcome to LessWrong!

This is actually rather atypical. It reminds me of lucid dreaming, though I am unsure if it is exactly the same. I almost never remember my dreams. When I have remembered them, I almost never knew I was dreaming at the time. This post discusses such differences.

Endoself mentioned lucid dreaming; in the few lucid dreams I have had, I knew I was dreaming, but in the many normal dreams I did not know. In fact, non-lucid dreams feel extremely real, because I try to change what's happening the way I would in a lucid dream, and nothing changes - convincing me that it's real.

When I am awake I am very aware that I am awake and not dreaming; strangely that feeling is absent during dreams but somehow doesn't alert me that I'm dreaming.

loqi:

In fact, non-lucid dreams feel extremely real, because I try to change what's happening the way I would in a lucid dream, and nothing changes - convincing me that it's real.

This has been my experience. And on several occasions I've become highly suspicious that I was dreaming, but unable to wake myself. The pinch is a lie, but it still hurts in the dream.

[anonymous]:

Same with me. I always know if I'm dreaming, but I don't have special feeling of "I'm awake right now". I'm always slightly lucid in my dreams and can easily become fully lucid, but almost never do so except when I force myself out of a rare nightmare or rewind time a bit because I wanna fix something. (It's been this way since I was about 14. I don't remember any dreams from before that.)

In fact, I intentionally make sure to not be too lucid. I did some lucid dreaming experiments when I was 16 or so and after 2-3 weeks I started to fail reality checks while awake. My sanity went pretty much missing for a summer. I then concluded that my dreams aren't psychologically safe and stay as far away from them as I can.

Sometimes arrogance is the mark of truth. History is filled with the blood of people who died for their beliefs holding them against all counterargument, who were later vindicated.

Of course, history is also filled with the blood of people who died for erroneous beliefs.

Obviously, you should utilize the Modesty Argument iff your viewpoint is incorrect.

Excellent article! I have nothing to contribute but a bit of mechanical correction:

I regard rationality in its purest form as an individual thing - not because rationalists have only selfish interests, but because of the form of the only admissible question: "Is is actually true?"

Minor typo here; you mean "Is it actually true?"

If it's being really integrated, that is, rather than flushed down a black hole.

The phrase "flushed down a black hole" in the article above is a link, but it is unfortunately broken.

Interesting... it reminded me of this comic: http://xkcd.com/690/

It looks to ME like the theorem also requires perfect self-knowledge, AND perfect other-knowledge...which just doesn't happen in finite systems. Yeah. Godel again.

Neph:

Remember that Bayesian evidence never reaches 100%, which leaves middle ground.  Upon hearing another rationalist's viewpoint, instead of not shifting (as you suggest) or shifting to average your estimate and theirs together (as AAT suggests), why not adjust your viewpoint based on how likely the other rationalist is to have assessed correctly?  I.e., you believe X is 90% likely to be true; the other rationalist believes it's 90% likely to be false.  Suppose this rationalist is very reliable, say in the neighborhood of 75% accurate: you should adjust your viewpoint down to "X is 75% likely to be 10% likely to be true, and 25% likely to be 90% likely to be true" (or around 30% likely, assuming I did my math right).  Suppose he's not very reliable, say a creationist talking about evolution, say 10%: you should adjust to "X is 10% likely to be 10% likely and 90% likely to be 90% likely" (82%).  ...Of course this doesn't factor in your own fallibility.
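A quick check of that mixture arithmetic, as a minimal sketch:

```python
# Mix the other person's estimate with your own, weighted by how reliable you judge them to be.
def blended(reliability, their_estimate, my_estimate):
    return reliability * their_estimate + (1 - reliability) * my_estimate

print(blended(0.75, 0.10, 0.90))   # 0.30 -- the "around 30% likely" figure
print(blended(0.10, 0.10, 0.90))   # 0.82 -- the unreliable-source case
```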

If genuine Bayesians will always agree with each other once they've exchanged probability estimates, shouldn't we Bayesian wannabes do the same?

An example I read comes to mind (it's in dialogue form): "This is a very common error that's found throughout the world's teachings and religions," I continue. "They're often one hundred and eighty degrees removed from the truth. It's the belief that if you want to be Christ-like, then you should act more like Christ—as if the way to become something is by imitating it."

It comes with a fun example, portraying the absurdity and the potential dangers of the behavior: "Say I'm well fed and you're starving. You come to me and ask how you can be well fed. Well, I've noticed that every time I eat a good meal, I belch, so I tell you to belch because that means you're well fed. Totally backward, right? You're still starving, and now you're also off-gassing like a pig. And the worst part of the whole deal—pay attention to this trick—the worst part is that you've stopped looking for food. Your starvation is now assured."

I think that this is similar to the prisoner's dilemma. If A is in an argument with B over some issue, then a contract for them to each update their probability estimates to the average would benefit the group. But each side would have an incentive to cheat by not updating his/her probability estimates. Given A's priors the group would benefit more if only B updates (and vice versa). Unfortunately it would be very difficult to enforce a belief contract.

This does not require them to be selfish. Even if they were perfect altruists they would each believe that it would help both of them the most if only the other's probability estimate was changed.

If people updated their beliefs toward those around them, then people with agendas would loudly hammer their forged beliefs at any opportunity... wait, isn't this EXACTLY what they are doing?

I can recall at least one occasion on which I momentarily doubted I was awake, simply because I saw something that seemed improbable.