Related: Horrible LHC Inconsistency, The Proper Use of Humility

Overconfidence, I've noticed, is a big fear around these parts. Well, it is a known human bias, after all, and therefore something to be guarded against. But I am going to argue that, at least in aspiring-rationalist circles, people are too afraid of overconfidence, to the point of overcorrecting -- which, not surprisingly, causes problems. (Some may detect implications here for the long-standing Inside View vs. Outside View debate.)

Here's Eliezer, voicing the typical worry:

[I]f you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

I now suspect that misleading imagery may be at work here. A million statements -- that sounds like a lot, doesn't it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn't it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one -- in which case, poof!, there goes your calibration.

No wonder, then, that people claim that we humans can't possibly hope to attain such levels of certainty. Look, they say, at all those times in the past when people -- even famous scientists! -- said they were 99.999% sure of something, and they turned out to be wrong. My own adolescent self would have assigned high confidence to the truth of Christianity; so where do I get the temerity, now, to say that the probability of this is 1-over-oogles-and-googols?

[EDIT: Unnecessary material removed.]

A probability estimate is not a measure of "confidence" in some psychological sense. Rather, it is a measure of the strength of the evidence: how much information you believe you have about reality. So, when judging calibration, it is not really appropriate to imagine oneself, say, judging thousands of criminal trials, and getting more than a few wrong here and there (because, after all, one is human and tends to make mistakes). Let me instead propose a less misleading image: picture yourself programming your model of the world (in technical terms, your prior probability distribution) into a computer, and then feeding all that data from those thousands of cases into the computer -- which then, when you run the program, rapidly spits out the corresponding thousands of posterior probability estimates. That is, visualize a few seconds or minutes of staring at a rapidly-scrolling computer screen, rather than a lifetime of exhausting judicial labor. When the program finishes, how many of those numerical verdicts on the screen are wrong?
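(To make the image concrete, here is a toy sketch in Python -- purely illustrative, with made-up base rates and likelihood ratios, not a model of any real case -- of a "model" churning through many cases and printing posteriors, so that calibration becomes a property of the model rather than of the judge's stamina.)

```python
# Purely illustrative: a made-up generative model of "cases", and the Bayesian
# update a program would run on each one. None of the numbers refer to real data.
import random

random.seed(0)

BASE_RATE = 0.5            # assumed prior probability that a case is "guilty"
P_CLUE_IF_GUILTY = 0.8     # assumed likelihood of each clue given guilt
P_CLUE_IF_INNOCENT = 0.1   # assumed likelihood of each clue given innocence
N_CASES, N_CLUES = 100_000, 20

total_at_99 = wrong_at_99 = 0
for _ in range(N_CASES):
    guilty = random.random() < BASE_RATE
    odds = BASE_RATE / (1 - BASE_RATE)          # prior odds
    for _ in range(N_CLUES):
        clue_rate = P_CLUE_IF_GUILTY if guilty else P_CLUE_IF_INNOCENT
        clue = random.random() < clue_rate
        if clue:
            odds *= P_CLUE_IF_GUILTY / P_CLUE_IF_INNOCENT
        else:
            odds *= (1 - P_CLUE_IF_GUILTY) / (1 - P_CLUE_IF_INNOCENT)
    p_guilty = odds / (1 + odds)                # the posterior the "screen" displays
    if p_guilty > 0.99:
        total_at_99 += 1
        wrong_at_99 += not guilty

print(f"verdicts above 99%: {total_at_99}, actually innocent: {wrong_at_99}")
# Because the model matches the process generating the data, well under 1% of
# these high-confidence verdicts are wrong -- the calibration lives in the model.
```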

I don't know about you, but modesty seems less tempting to me when I think about it in this way. I have a model of the world, and it makes predictions. For some reason, when it's just me in a room looking at a screen, I don't feel the need to tone down the strength of those predictions for fear of unpleasant social consequences. Nor do I need to worry about the computer getting tired from running all those numbers.

In the vanishingly unlikely event that Omega were to appear and tell me that, say, Amanda Knox was guilty, it wouldn't mean that I had been too arrogant, and that I had better not trust my estimates in the future. What it would mean is that my model of the world was severely stupid with respect to predicting reality. In which case, the thing to do would not be to humbly promise to be more modest henceforth, but rather, to find the problem and fix it. (I believe computer programmers call this "debugging".)

A "confidence level" is a numerical measure of how stupid your model is, if you turn out to be wrong.

The fundamental question of rationality is: why do you believe what you believe? As a rationalist, you can't just pull probabilities out of your rear end. And now here's the kicker: that includes the probability of your model being wrong. The latter must, paradoxically but necessarily, be part of your model itself. If you're uncertain, there has to be a reason you're uncertain; if you expect to change your mind later, you should go ahead and change your mind now.

This is the first thing to remember in setting out to dispose of what I call "quantitative Cartesian skepticism": the view that even though science tells us the probability of such-and-such is 10^-50, well, that's just too high of a confidence for mere mortals like us to assert; our model of the world could be wrong, after all -- conceivably, we might even be brains in vats.

Now, it could be the case that 10^-50 is too low of a probability for that event, despite the calculations; and it may even be that that particular level of certainty (about almost anything) is in fact beyond our current epistemic reach. But if we believe this, there have to be reasons we believe it, and those reasons have to be better than the reasons for believing the opposite.
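(Again purely as an illustrative sketch -- the 10^-9, 10^-3, and 0.5 below are made-up numbers, not claims about any real calculation -- here is what it looks like to fold "my model might be broken" into the estimate itself:)

```python
# Illustrative only: mixing the model's answer with a fallback estimate that
# applies in the worlds where the model is badly broken. All numbers are made up.

def effective_probability(p_model: float, p_model_broken: float, p_fallback: float) -> float:
    """Total probability of the event, once model error is part of the model."""
    return (1 - p_model_broken) * p_model + p_model_broken * p_fallback

# A model that says 1e-9 only delivers ~1e-9 overall if the probability that the
# model itself is broken is comparably tiny; otherwise the error term dominates.
print(effective_probability(1e-9, p_model_broken=1e-3, p_fallback=0.5))   # ~5e-4
print(effective_probability(1e-9, p_model_broken=1e-9, p_fallback=0.5))   # ~1.5e-9
```

The disagreement, then, is really over how small that middle term can legitimately be made.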

I can't speak for Eliezer in particular, but I expect that if you probe the intuitions of people who worry about 10^-6 being too low of a probability that the Large Hadron Collider will destroy the world -- that is, if you ask them why they think they couldn't make a million statements of equal authority and be wrong on average once -- they will cite statistics about the previous track record of human predictions: their own youthful failures and/or things like Lord Kelvin calculating that evolution by natural selection was impossible.

To which my reply is: hindsight is 20/20 -- so how about taking advantage of this fact?

Previously, I used the phrase "epistemic technology" in reference to our ability to achieve greater certainty through some recently-invented methods of investigation than through others that are native unto us. This, I confess, was an almost deliberate foreshadowing of my thesis here: we are not stuck with the inferential powers of our ancestors. One implication of the Bayesian-Jaynesian-Yudkowskian view, which marries epistemology to physics, is that our knowledge-gathering ability is as subject to "technological" improvement as any other physical process. With effort applied over time, we should be able to increase not only our domain knowledge, but also our meta-knowledge. As we acquire more and more information about the world, our Bayesian probabilities should become more and more confident.

If we're smart, we will look back at Lord Kelvin's reasoning, find the mistakes, and avoid making those mistakes in the future. We will, so to speak, debug the code. Perhaps we couldn't have spotted the flaws at the time; but we can spot them now. Whatever other flaws may still be plaguing us, our score has improved.  

In the face of precise scientific calculations, it doesn't do to say, "Well, science has been wrong before". If science was wrong before, it is our duty to understand why science was wrong, and remove known sources of stupidity from our model. Once we've done this, "past scientific predictions" is no longer an appropriate reference class for second-guessing the prediction at hand, because the science is now superior. (Or anyway, the strength of the evidence of previous failures is diminished.)        

That is why, with respect to Eliezer's LHC dilemma -- which amounts to a conflict between avoiding overconfidence and avoiding hypothesis-privileging -- I come down squarely on the side of hypothesis-privileging as the greater danger. Psychologically, you may not "feel up to" making a million predictions, of which no more than one can be wrong; but if that's what your model instructs you to do, then that's what you have to do -- unless you think your model is wrong, for some better reason than a vague sense of uneasiness. Without, ultimately, trusting science more than intuition, there's no hope of making epistemic progress. At the end of the day, you have to shut up and multiply -- epistemically as well as instrumentally. 
 

110 comments

I'd like to recast the problem this way: we know we're running on error-prone hardware, but standard probability theory assumes that we're running on errorless hardware, and seems to fail, at least in some situations, when running on error-prone hardware. What is the right probability theory and/or decision theory for running on error-prone hardware?

ETA: Consider ciphergoth's example:

do you think you could make a million statements along the lines of "I will not win the lottery" and not be wrong once? If not, you can't justify not playing the lottery, can you?

This kind of reasoning can be derived from standard probability theory and would work fine on someone running errorless hardware. But it doesn't work for us.

We need to investigate this problem systematically, and not just make arguments about whether we're too confident or not confident enough, trying to push the public consensus back and forth. The right answer might be completely different, like perhaps we need different kinds or multiple levels of confidence, or upper and lower bounds on probability estimates.

7MichaelVassar
I think that standard probability theory assumes a known ontology and infinite computing power. We should ideally also be able to produce a probability theory for agents with realistically necessary constraints but without the special constraints that we have.

One simple example: do you think you could make a million statements along the lines of "I will not win the lottery" and not be wrong once? If not, you can't justify not playing the lottery, can you?

3Bo102010
Isn't the expected value still negative?
2Paul Crowley
No, the jackpot is much more than a million times bigger than the stake. EDIT: the expected utility might still be negative, because of the diminishing marginal utility of money.
1Bo102010
Maybe I'm not understanding your point. If the odds of winning are one in 100 million, you could very well expect to make a million statements of "I will not win the lottery" and not be wrong once.
7orthonormal
As in the LHC example, the criterion is making a million statements with independent reasoning behind each. Predicting a non-win in a million independent lotteries isn't what ciphergoth was thinking, so much as making a million predictions in widely different areas, each of which you (or I) estimate has probability less than 10^-8. Even ruling out fatigue as a factor by imagining Omega copies me a million times and asks each a different question, I believe my mind is so constituted that I'd be very overconfident in tens of thousands of cases, and that several of them would prove me wrong.
5MichaelVassar
Everything is dependent on everything else. I can't make many independent statements.
6orthonormal
That's certainly true given full rationality and arbitrary computing power, but there are certainly many individual things I could be wrong about without being able to immediately see how it contradicts other things I get right. I wouldn't put it past Omega to pull this off.
2Paul Crowley
I'm not sure this properly represents what I was thinking. We all agree that any decision procedure that leads you to play the lottery is flawed. But the "million equivalent statement" test seems to indicate that you can't be confident enough of not winning to justify not playing, given the payoffs. If you insist on independent reasoning, passing the million-statement test is even harder, and justifying not playing is therefore harder. It's a kind of real-life Pascal's mugging. I don't have a solution to Pascal's mugging, but for the lottery, I'm inclined to think that I really can have 10^-8 confidence of not winning, that the flaw is with the million-statement test, and it's simply that there aren't a million disparate situations where you can have this kind of confidence, though there certainly are a million broadly similar situations in the reference class "we are actually in a strong position to calculate high-quality odds on this coming to pass".
2wedrifid
I don't.
2Blueberry
Can you please explain that further? Why not? Do you just mean that the pleasure of buying the ticket could be worth a dollar, even though you know you won't win?
1wedrifid
Just reasoning based on a non-linear relationship between money and utility.
0Blueberry
Winning ten million dollars provides less than ten million times the utility of winning one dollar, because the richer you are, the less difference each additional dollar makes. That seems to argue against playing the lottery, though.
4wedrifid
$5,000,000 debt. Bankruptcy laws.
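(A toy sketch of the point, with entirely made-up figures: if every outcome at or below bankruptcy feels the same, a ticket with negative expected dollar value can still have positive expected utility.)

```python
# Toy numbers only -- the jackpot, odds, and debt below are illustrative assumptions.
TICKET = 1.0
JACKPOT = 20_000_000.0
P_WIN = 1e-8
DEBT = 5_000_000.0

def utility(net_worth: float) -> float:
    # Crude utility curve: bankruptcy wipes the slate, so everything at or
    # below zero net worth feels roughly the same.
    return max(net_worth, 0.0)

start = -DEBT
eu_no_play = utility(start)
eu_play = (P_WIN * utility(start + JACKPOT - TICKET)
           + (1 - P_WIN) * utility(start - TICKET))

print(eu_play - eu_no_play)   # ~ +0.15 in utility units, despite a dollar EV of ~ -0.80
```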
2Blueberry
Very clever! You're right; that is a situation where you might as well play the lottery. This actually comes up in business, in terms of the types of investments that businesses make when they have a good chance of going bankrupt. They may not play the lottery, but they're likely to make riskier moves since they have very little to lose and a lot to gain.
1wedrifid
It also applies if you believe your company will be bailed out by the government. I don't tend to approve of bank bailouts for this reason. (Although government guarantees for deposits I place in a different category.)
0RobinZ
It looks to me like the flaw is in calculating the expected utility after changing the probability estimate with the probability of error.
0Paul Crowley
What alternative do you have in mind?
2RobinZ
Well, in an abstract case it would be reasonable, but if you are considering (for example) the lottery, the rule of thumb "you won't win playing the lottery" outweighs any expectation of errors in your own calculations.
1Paul Crowley
Potentially promising approach, but how does that translate into math?
0RobinZ
Let A represent the event when the lottery under consideration is profitable (positive expected value from playing); let X represent the event in which your calculation of the lottery's value is correct. What is desired is P(A). Trivially: P(A) = P(X) * P(A|X) + P(~X) * P(A|~X) From your calculations, you know P(A|X) - this is the arbitrarily-strong confidence komponisto described. What you need to estimate is P(X) and P(A|~X). P(X) I cannot help you with. From my own experience, depending on whether I checked my work, I'd put it in the range {0.9,0.999}, but that's your business. P(A|~X) I would put in the range {1e-10, 1e-4}. In order to conclude that you should always play the lottery, you would have to put P(A|~X) close to unity. Q.E.D. Edit: The error I see is supposing that a wrong calculation gives positive information about the correct answer. That's practically false - if your calculation is wrong, the prior should be approximately correct.
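(Plugging illustrative midpoints of the ranges above into the decomposition -- nothing here is a claim about any real lottery:)

```python
# Illustrative numbers drawn from the ranges suggested above.
p_X = 0.99               # P(my calculation is correct)
p_A_given_X = 1e-10      # calculated P(lottery is profitable), given a correct calculation
p_A_given_notX = 1e-4    # P(lottery is profitable) if the calculation is junk

p_A = p_X * p_A_given_X + (1 - p_X) * p_A_given_notX
print(p_A)   # ~1e-6: nowhere near the "close to unity" needed to justify always playing
```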
5Wei Dai
I think this doesn't work, or at least is incomplete, because what is needed (under standard decision theory) to decide whether or not to play is not the probability of the lottery having a positive expected value, but the expected utility of the lottery, which I don't see how to compute from your P(A) (assuming that utility is linear in dollars). ETA: In case the point isn't clear, suppose P(A)=1e-4, but the expected value of the lottery, conditional on A being true, is 1e5, then you should still play, right?
3RobinZ
You're right: recalculating... Let E(A) be the expected value of the lottery that you should use in determining your actions. Let E(a) be the expected value you calculate. Let p be your confidence in your calculation (a probability in the Bayesian sense). If we want to account for the possibility of calculating wrong, we are tempted to write something like E(A) = p * E(a) + (1-p) * x where x is what you would expect the lottery to be worth if your calculation was wrong. The naive calculation - the one which says, "play the lottery" - takes x as equal to the jackpot. This is not justified. The correct value for x is closer to your reference-class prediction. Setting x equal to "negative the cost of the ticket plus epsilon", then, it becomes abundantly clear that your ignorance does not make the lottery a good bet. Edit: This also explains why you check your math before betting when it looks like a lottery is a good bet, which is nice.
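(The same move in expected-value form, again with made-up figures for a $1 ticket; the last line shows the absurd conclusion that the unjustified choice of x produces.)

```python
# Made-up figures for a $1 ticket.
p = 0.99            # confidence in my calculation
E_a = -0.50         # calculated expected value of playing
x_modest = -1.0     # value if the calculation is wrong: roughly "lose the ticket price"
x_naive = 1e7       # the unjustified choice: "if I'm wrong, I win the jackpot"

print(p * E_a + (1 - p) * x_modest)   # ~ -0.505: still a bad bet
print(p * E_a + (1 - p) * x_naive)    # ~ +99999.5: the "always play the lottery" absurdity
```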
4Wei Dai
If we follow your suggestion and obtain E(A) < 0, then compute from that the probability of winning the lottery, we end up with P(will win lottery) < 1e-8. But what if we want to compute P(will win lottery) directly? Or, if you think we shouldn't try to compute it directly, but should do it in this roundabout way, then we need a method for deciding when this indirect method is necessary. (Meta point: I think you might be stopping at the first good answer.)
0RobinZ
The parallel calculation would be P(L) = p * P_calculated + (1-p) * P_typical I don't put P_typical very high. Okay, I'll grant you that one. I'm still promoting my original idea to a top-level post. Edit: ...in part because I would like more eyes to see it and provide feedback - I would love to know if it has some interesting faults. Edit: Here it is.
6Alicorn
There's a non-negligible chance that you've been misinformed or are mistaken about the odds of winning.

You propose to ignore the "odd" errors humans sometimes make while calculating a probability for some event. However, errors do occur, even when judging the very first case. And they (at least some of them) occur randomly. When you believe you have correctly calculated the probability, you just might have made an error anywhere in the calculation.

If you keep to the "socially accepted" levels of confidence, those errors average out pretty fast; but even if you make only one error in 10^5 calculations, you should not assign probabilities smaller than 1/10^5. Otherwise a bet at 10,000 to 1 between you and me (a fair game from your perspective) will give me an expected value larger than 0, due to the errors you could have made in your reasoning.

This is another advantage an AI might have over humans: if the hardware is good enough, probability assignments below 10^-5 might actually be reasonable.
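(A rough sketch of the structure of this argument, with made-up numbers -- they illustrate the general point rather than reproducing the 10,000-to-1 figure above.)

```python
# Illustrative only: a crude way to fold a hardware/reasoning error rate into a
# calculated probability. The error rate and the 0.5 fallback are assumptions.

def usable_probability(p_calculated: float, error_rate: float, p_if_error: float = 0.5) -> float:
    return (1 - error_rate) * p_calculated + error_rate * p_if_error

p = usable_probability(p_calculated=1e-9, error_rate=1e-5)
print(p)   # ~5e-6: the answer is pinned near error_rate/2, however extreme the calculation

# Accepting odds that match the *calculated* 1e-9 is then a disaster on average:
stake, payout = 1.0, 1e9
print(stake * (1 - p) - payout * p)   # ~ -5000 per bet
```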

0komponisto
I don't think I said any such thing. There is always some uncertainty; but a belief that the uncertainty is above some particular lower bound is a belief like any other, and no more exempt from the requirements of justification.

But Drahflow did just justify it. He said you're running on error-prone hardware. Now, there's still the question of how often the hardware makes errors, and there's the problem of privileging the hypothesis (thinking wrongly about the lottery can't make the probability of a ticket winning more than 10^-8, no matter how wrong you are), and there's the horrible LHC inconsistency, but the opposing position is not unjustified. It has justification that goes beyond just social modesty. It's a consistent trend in which people form confidence bounds that are too narrow on hard problems (and to a lesser extent, too wide on easy problems). If you went by the raw experiments then "99% probability" would translate into 40% surprises because (a) people are that stupid (b) people have no grasp of what the phrase "99% probability" means.

2komponisto
I agree, and I don't think this contradicts or undermines the argument of the post. These experiments should definitely shift physicists' probabilities by some nonzero amount; the question is how much. When they calculate that the probability of a marble statue waving is 10 to the minus gazillion, would you really want to argue that, based on surveys like this, they should adjust that to some mundane quantity like 0.01? That seems absurd to me. But if you grant this, then you have to concede that "epistemic bootstrapping" beyond ordinary human levels of confidence is possible. Then the question becomes: what's the limit, given our knowledge of physics (present and future)?
5Morendil
If you did see a marble statue wave, after making this calculation, you would resurrect a hypothesis at the one-in-a-million level maybe (someone played a hugely elaborate prank on you involving sawing off a duplicate statue's arm and switching that with the recently examined statue while you were briefly distracted by a phone ringing, say), not a hypothesis at the 10 to the minus whatever (e.g. you are being simulated by Omega for laughs). Perhaps I'm getting this wrong, but this seems similar in spirit to the "queer uses of probability" discussion in Jaynes, where he asks what kind of evidence you'd have to see to believe in ESP, and you can take the probability of that as an indication of your prior probability for ESP. Perhaps you're making too much of absolute probabilities, when in general what we're interested in is choosing between two or more competing hypotheses.
2komponisto
This comment reads as if you're disagreeing with me about something ("you're making too much..."), but I can't detect any actual disagreement.

Now, if it is the case that she didn't, then it follows that, given sufficient information about how-the-world-is, one's probability estimate could be made arbitrarily close to 0.

What, like 1/3^^^3? There isn't that much information in the universe, and come to think, I'm not sure I can conceive of any stream of evidence which would drive the probability that low in the Knox case, because there are complicated hypotheses much less complicated than that in which you're in a computer simulation expressly created for the purpose of deluding you about the Amanda Knox case.

I thought I was stating a mathematical tautology. I didn't say there was enough information in the universe to get below 1/3^^^3. The point was only that the information controls the probability.

4Sticky
But surely any statement one could make about Amanda Knox is only about the Amanda Knox in this world, whether she's a fully simulated human or something less. Perhaps only the places I actually go are fully simulated, and everywhere else is only simulated in its effects on the places I go, so that the light from distant stars is supplied without bothering to run their internal processes; in that case, the innocent Amanda Knox only exists insofar as the effects that an innocent Amanda Knox would have on my part of the world are implemented. Even so, my beliefs about the case can only be about the figure in my own world. It doesn't matter that there could be some other world where Amanda Knox is a murderess and Hitler was a great humanitarian.
5Tyrrell_McAllister
I'm not sure why this is being downvoted so much (to –3 when I saw it). It's a good point. If I'm in a simulation, and the "base reality" is sufficiently different from how things appear to me in the simulation, it stops making sense to say that I'm fooled into attributing false predicates to things in the base reality. I'm so cut off from the base reality that few of my beliefs can be said to be about it at all. It makes more sense to say that I have true beliefs about the things in the simulation. I just have one important false belief about them—namely, that they're not simulated. But that doesn't mean that my other beliefs about them are wrong. The situation is similar to that of the proverbial man who thinks that penguins are blind burrowing mammals who live in the Namib Desert. Such beliefs aren't really about penguins at all. More probably, the man has true beliefs about some variety of golden mole. He just has one important false belief about them—namely, that they're called "penguins".
1Sticky
Perhaps it's being downvoted because of my strange speculation that the stars are unreal -- but it seems to me that if this is a simulation with such a narrow purpose as fooling komponisto/me/us/somebody about the Knox case, it would be more thrifty to only simulate some narrow portion of the world, which need not include Knox herself. Even then, I think, it would make sense to say that my beliefs are about Knox as she is inside the simulation, not some other Knox I cannot have any knowledge of, even in principle.
3Zack_M_Davis
I downvoted the great-grandparent because it ignores the least convenient possible world where the simulators are implementing the entire Earth in detail such that the simulated Amanda Knox is a person, is guilty of the murder, and yet circumstances are such that she seems innocent given your state of knowledge. You're right that implementing the entire Earth is more expensive than just deluding you personally, but that's irrelevant to Eliezer's nitpick, which was only that 1/(3^^^3) really is just that small and yet nonzero.
3Tyrrell_McAllister
I think that you've only pushed it up (down?) a level. If I have gathered sufficiently strong evidence that the simulated Knox is not guilty, then the deception that you're suggesting would very probably amount to constructing a simulated simulated Knox, who is not guilty, and who, it would turn out, was the subject of my beliefs about Knox. My belief in her innocence would be a true belief about the simulated-squared Knox, rather than a false belief about the guilty simulated-to-the-first-power Knox. All deception amounts to an attempt to construct a simulation by controlling the evidence that the deceived person receives. The kind of deception that we see day-to-day is far too crude to really merit the term "simulation". But the difference is one of degree. If an epistemic agent were sufficiently powerful, then deceiving it would very probably require the sort of thing that we normally think of as a simulation. ETA: And the more powerful the agent, the more probable it is that whatever we induced it to believe is a true belief about the simulation, rather than a false belief about the "base reality" (except for its belief that it's not in a simulation, of course).
4Nick_Tarleton
This is a good point, but your input could also be the product of modeling you and computing "what inputs will make this person believe Knox is innocent?", not modeling Knox at all.
2Tyrrell_McAllister
How would this work in detail? When I try to think it through, it seems that, if I'm sufficiently good at gathering evidence, then the simulator would have to model Knox at some point while determining which inputs convince me that she's innocent. There are shades here of Eliezer's point about Giant Look-Up Tables modeling conscious minds. The GLUT itself might not be a conscious mind, but the process that built the GLUT probably had to contain the conscious mind that the GLUT models, and then some.
4pengvado
The process that builds the GLUT has to contain your mind, but nothing else. The deceiver tries all exponentially-many strings of sensory inputs, and sees what effects they have on your simulated internal state. Select the one that maximizes your belief in proposition X. No simulation of X involved, and the deceiver doesn't even need to know anything more about X than you think you know at the beginning.
1Sticky
If whoever controls the simulation knows that Tyrrell/me/komponisto/Eliezer/etc. are reasonably reasonable, there's little to be gained by modeling all the evidences that might persuade me. Just include the total lack of physical evidence tying the accused to the room where the murder happened, and I'm all yours. I'm sure I care more than I might have otherwise because she's pretty, and obviously (obviously to me, anyway) completely harmless and well-meaning, even now. Whereas, if we were talking about a gang member who's probably guilty of other horrible felonies, I'd still be more convinced of innocence than I am of some things I personally witnessed (since the physical evidence is more reliable than human memory), but I wouldn't feel so sorry for the wrongly convicted.
-1komponisto
But remember my original point here: level-of-belief is controlled by the amount of information. In order for me to reach certain extremely high levels of certainty about Knox's innocence, it may be necessary to effectively simulate a copy of Knox inside my mind. ETA: And that of course raises the question about whether in that case my beliefs are about the mind-external Knox ("simulated" or not) or the mind-internal simulated Knox. This is somewhat tricky, but the answer is the former -- for the same reason that the simple, non-conscious model of Amanda I have in my mind right now represents beliefs about the real, conscious Amanda in Capanne prison. Thus, a demon could theoretically create a conscious simulation of an innocent Amanda Knox in my mind, which could represent a "wrong" extremely-certain belief about a particular external reality. But in order to pull off a deception of this order, the demon would have to inhabit a world with a lot more information than even the large amount available to me in this scenario.
1Zack_M_Davis
That is a fascinating counterargument that I'm not sure what to make of yet.
-1komponisto
Here's how I see the whole issue, after some more reflection: Imagine a hypothetical universe with more than 3^^^3 total bits of information in it, which also contained a version of the Kercher murder. If you knew enough about the state of such a universe (e.g. if you were something like a Laplacian demon with respect to it), you could conceivably have on the order of 3^^^3 bits of evidence that the Amanda Knox of that universe was innocent of the crime. Now, the possibility would still exist that you were being deceived by a yet more powerful demon. But this possibility would only bound your probability away from 0 by an amount smaller than 1/3^^^3. In your (hypothesized) state of knowledge, you would be entitled to assert a probability of 1/3^^^3 that Knox killed Kercher. Furthermore, if a demon were deceiving you to the extent of feeding you 3^^^3 bits of "misleading" information, it would automatically be creating, within your mind, a model so complex as to almost certainly contain fully conscious versions of Knox, Kercher, and everyone else involved. In other words, it would effectively be creating an autonomous world in which Knox was innocent. Thus, while you might technically be "mistaken", in the sense that your highly complex model does not "correspond" to the external situation known to the demon, the moral force of that mistake would be undermined considerably, in view of the existence of a morally significant universe in which (the appropriate version of) Knox was indeed innocent. When we make probability estimates, what we're really doing is measuring the information content of our model. (The more detailed our model, the more extreme our estimates should be.) Positing additional layers of reality only adds information; it cannot take information away. A sufficiently complex model might be "wrong" as a model but yet morally significant as a universe in its own right.
1Sticky
What possible world would that be? If it should turn out that the Italian government is engaged in a vast experiment to see how many people it can convince of a true thing using only very inadequate evidence (and therefore falsified the evidence so as to destroy any reasonable case it had), we could, in principle, discover that. If the simulation simply deleted all of her hair, fiber, fingerprint, and DNA evidence left behind by the salacious ritual sex murder, then I can think of two objections. First, something like Tyrrell McAllister's second-order simulation, only this isn't so much a simulated Knox in my own head, I think, as it is a second-order simulation implemented in reality, by conforming all of reality (the crime scene, etc.) to what it would be if Knox were innocent. Second, while an unlawful simulation such as this might seem to undermine any possible belief I might form, I could still in principle acquire some knowledge of it. Suppose whoever is running the simulation decides to talk to me and I have good reason to think he's telling the truth. (This last is indistinguishable from "suppose I run into a prophet" -- but in an unlawful universe that stops being a vice.) ETA: I suppose if I'm entertaining the possibility that the simulator might start telling me truths I couldn't otherwise know then I could, in principle, find out that I live in a simulated reality and the "real" Knox is guilty (contrary to what I asserted above). I don't think I'd change my mind about her so much as I would begin thinking that there is a guilty Knox out there and an innocent Knox in here. After all, I think I'm pretty real, so why shouldn't the innocent Amanda Knox be real?
1Tyrrell_McAllister
There seems to be a deep idea here, but I don't yet see that the numbers really balance out. I would appreciate it if you made a top-level post elaborating on this.

The fundamental question of rationality is: why do you believe what you believe? As a rationalist, you can't just pull probabilities out of your rear end. And now here's the kicker: that includes the probability of your model being wrong. The latter must, paradoxically but necessarily, be part of your model itself. If you're uncertain, there has to be a reason you're uncertain; if you expect to change your mind later, you should go ahead and change your mind now.

You're just telling people to pull different probabilities out of their rear end. Framing th...

-2Unknowns
Not only does komponisto's model make poor predictions; he in fact wants it to do this. That's why he brings up the image of a computer calculating your posteriors, so that you can say the probability of such and such is 10^-50, even though even komponisto knows that you are not and cannot be calibrated in asserting this probability.

What are the odds that, given that I didn't make a mistake pressing the buttons, my electronic calculator (which appears to be in proper working order) will give a wrong answer on a basic arithmetic problem that it should be able to solve?

3RobinZ
With all the caveats, I'd guess somewhere south of one in ten thousand. I would expect the biggest terms by far in the error rate to be: 1. User error. 2. Design fault. 3. Mechanical failure (e.g. solder bump fracture, display damage). I'd like to know some estimates of probability that high-energy radiation can affect a calculation, but pretty much everything after 1 is highly unlikely.
1Paul Crowley
Presumably you're imagining something like a year-old calculator, solar powered and in bright light, reported in good working order and tested on a few problems with known answers, doing arithmetic on integers less than 10,000 in magnitude. Just to close as many of the doors as possible...
1Vladimir_Nesov
Shouldn't have to do that here.
4bogdanb
That's a technique useful when arguing against an idea. CronoDAS' comment contained just a question; it's not obvious to me what the “idea” is we should be arguing against.

Without, ultimately, trusting science more than intuition, there's no hope of making epistemic progress.

That's not really true. 10,000 hours of deliberate practice at making predictions about a given field will improve the intuition by a lot. Intuition isn't fixed.

4Waldheri
Isn't "intuition" in that case simply subconscious empirical knowledge?
2ChristianKl
Do you believe that intuition exists in some other form than subconscious empirical knowledge? Provided you don't believe in any paranormal stuff, I don't think there's anything else that you could call intuition. For me science is about having well-defined theories and then trying to falsify those theories. When you make decisions based on intuition, you aren't making decisions based on theory.

Summary: P(I'm right| my theory is good / I'm not having a brain fart) can be much more extreme than P(I'm right). Over time, we can become more confident in our theories.

"Generalizing from One Example" and "Reference Class of the Unreferenceclassable" links are both broken.

1komponisto
Thanks, fixed.

In accordance with a suggestion that turned out to be rather good, I've now deleted several paragraphs of unnecessary material from the post. (The contents of these paragraphs, having to do with previous posts of mine, were proving to be a distraction; luckily the post still flows without them.)

Perhaps too much is being made of the "arbitrarily close to zero" remark. Sure, there isn't enough information in the universe to get to one-over-number-made-with-up-arrow-notation. But there's certainly enough to get to, say, one in a thousand, or one in a million; and while this isn't arbitrarily close to zero in a mathematical sense, it's good enough for merely human purposes. And what's worse, the word 'arbitrary' is distracting from the actual argument being made, in what looks to me like a rather classic case of "Is that your real objection?"

A probability estimate is not a measure of "confidence" in some psychological sense.

This is one of the possible interpretations of probability. To say that this interpretation is wrong requires an argument, not simply you saying that your interpretation is the correct one.

0orthonormal
Here's one facet of the argument.

As we acquire more and more information about the world, our Bayesian probabilities should become more and more confident.

How do we know that we are acquiring more real information? The number of open questions in science grows. It doesn't shrink.

10RobinZ

How do we know that we are acquiring more real information?

Because Archimedes didn't have a microwave.

3Jayson_Virissimo
If by "know", ChristianKl means having belief that is universal, necessary, and certain, then we don't know that we have more real information. Nothing short of deductive proof will achieve this kind of knowledge. RobinZ seems to be (implicitly) using an argument similar to this:

If theory A is true, then technology B will work.
Technology B works.
∴ Theory A is true.

This argument, while plausible, commits the fallacy of affirming the consequent, and so isn't deductively valid. This means that it fails to achieve the kind of knowledge that is universal, necessary, and certain. If, on the other hand, you will settle for knowledge that is particular, contingent, and probable, then it is quite clear that we have made leaps and bounds in the amount of real information that we have access to. For instance, compare Wikipedia to the 1911 Encyclopedia Britannica.
0RobinZ
I'm afraid I don't see what you're driving at. There's nothing in your comment that I disagree with, and nothing in my comment that you do not address correctly, but I thought my reply to ChristianKl was sufficient. Do you believe that it was not? If so, what is the question I should be responding to?
1Jayson_Virissimo
I was trying to point out (perhaps badly) that your argument succeeds assuming one definition of knowledge, but fails assuming the other definition. It isn't clear to me which definition ChristianKl had in mind.
0RobinZ
...right, that makes sense. Thank you.
3loqi
Because our predictions are more accurate.

I think it's fair to compare the LHC with past scientific experiments, but if you do, you should remember that no past scientific experiment destroyed the world, and therefore you don't get a prior probability greater than 0 by that process.

The LHC didn't even work the first time around. You could say the predictions of how the LHC was supposed to work were wrong. There are, however, millions of different ways that the LHC can turn out results that aren't what anybody expects and that don't include the LHC blowing up the planet.

There have been a number of posts recently on the topic of beliefs, and how fragile they can be. They would benefit A LOT by a link to Making Beliefs Pay Rent

When you say Amanda Knox either killed her roommate, or she didn't, you've moved from a universe of rational beliefs to that of human-responsibility models. It's very unclear (to me) what experience you're predicting with "killed her roommate". This confusion, not any handling of evidence or Bayesian updates, explains a large divergence in estimates that people give. They're giving estim...

8Paul Crowley
This is a curious interpretation of "making beliefs pay rent". I hesitate to assert that a difference of belief about a prosaic historical fact, which you could in principle check with a "time camera", is not a real difference of belief unless you can set out specific, realistic predictions they differ in. If one person believes that Lee Harvey Oswald was in the book depository with a rifle and another believes he wasn't even in the building, I don't think they need to articulate the different predictions of their beliefs to believe that they're disagreeing.
0Dagon
The difference in expected experience is that some people think about the question given a time camera, while others think about the probability that additional evidence will come to their attention. I think the probability that I'll ever have a time camera is very low, and the chance that I'd use it to understand the details of this roommate and death relationship even lower, so there is no expected experience from this direction. Additionally, there are lots of ways for someone to have some responsibility for a death without having a hand on the weapon directly. To me, probability assignments of her guilt or innocence are primarily a matter of group consensus. There WAS an underlying physical reality, but the proposition given wasn't well enough defined for me to understand the wager.
2orthonormal
HTML links and tags don't work here. You can edit your comment and click the "Help" tag under the textbox to see how to do links and italics in this format.

How do you get from "uncertainty exists in the map, not the territory" to the following ?

given sufficient information about how-the-world-is, one's probability estimate could be made arbitrarily close to 0

One's uncertainty about the here-and-now, perhaps. In criminal cases we are dealing with backward inference, and information is getting erased all the time. Right about now perhaps the only way you could get "arbitrarily close to 0" is by scanning Knox's brain or the actual perpetrator's brain; if both should die, we would reach a ...

5Vive-ut-Vivas
"Your comparison between the Amanda Knox case and scientific knowledge leaves me cold. Science is concerned with regularities, situations where induction applies; the knowledge sought in a criminal case is of a completely different kind, by definition applying to a unique and hopefully irregular situation." I'm eerily reminded of creationists arguing that studying evolution isn't "science", because it happened in the past. I don't see how it follows that the knowledge sought in a criminal case is somehow "different" than the knowledge sought in otherwise "legitimate" scientific pursuits. At the risk of playing definition games, if science is simply the methodology used to arrive at correct answers, then science can be applied to the Amanda Knox case - resulting in "scientific" knowledge.
3komponisto
You seem to be replying to this post as if it were about the Knox case. It isn't. [ETA: Post now edited to make this clearer.] I'm not making any object-level arguments here about what the probabilities in that case should be. I only referred to it in order to introduce the point that one should think in terms of applying one's model to the data in computer-fashion to obtain probabilities, rather than imagining oneself judging a bunch of similar cases. (The two scenarios ought to be equivalent, but they feel different.) I don't buy this for a minute. You may as well say that cosmology isn't scientific, because the Big Bang isn't repeatable in a lab; or that evolutionary biology won't become scientific until we can recreate dinosaurs.
1Morendil
The post refers to your postings on the Knox case a lot. Perhaps you should consider that other readers will share my confusion on that point. Again, I tend to agree with your conclusions, but I find the tone of the writing a distraction from the good bits. In both cases science finds plenty of regularities to reason from, so it seems you're attacking straw men. My point is that there are some matters of fact about which we cannot reduce our uncertainty below a certain level. The details of historical facts tend to belong in that category. Consider an extreme form of chick sexing. Put a chick in a blender, and while there certainly is a "fact of the matter" as to its having been male or female, you can no longer tell; you have to live with 50:50. Advances in technology can catch up with that, and I'm deliberately choosing an example which is middle-of-the-road in the amount of information that gets randomized (imagine burning the remains). You could in principle recover that information, but only if you had previously observed some regularities (say, hormonal) about chicks. That pretty much captures the difference between science and investigation.
0komponisto
I've made some edits to (hopefully) prevent that. The references are to some extent inevitable, since the Knox writings were my only posts up to this point, and the resulting discussions did help to prompt the thoughts expressed here, as a matter of historical fact. Could you perhaps give some examples? (I think I automatically tend to write in the sort of tone that I would enjoy reading.) Yes; certainty (as any technology) is definitely limited by the physics of the universe. Those limits may be considerably beyond the human level, though.
2Morendil
Here's one - "scoffed and sneered, in capital letters" - and elsewhere you used "gasped" to refer to one of my own comments (this may make me oversensitive to this pattern, compared to other readers, but the effect is still there). That sounds dismissive of others' objections. A more subtle one is the profusion of hyperlinks, to comments, posts and wiki pages, not always necessary to the point being made. More generally the post advances too many distinct ideas; I'd try to say the same thing in fewer words. ("You should fly faster when your instruments are good" seems to be the thrust of the whole post.) Still more subtle, you are selective in the objections that you choose to respond to.
3komponisto
Hm...that comment did sound like a scoff or sneer to me ("I offer $50 to the AK defense fund..."), and capital letters were in fact used. What if I had used "balked" instead? This one surprises me. The use of hyperlinks to simultaneously provide convenient references and subtly convey conversational nuance has always been for me one of the more enjoyable aspects of Eliezer's writing; I probably learned it from him. Yikes. This is bad advice for me, since I already obsess about this, and as a result write very little. (I have a hard time allowing myself to just "write what's in my head".) If this is anything like a widespread view, I may have to seriously reconsider whatever plans I may have had of top-level posting in the future. I like this figure of speech; I wish I had come up with it. That's probably the case with everyone, though, isn't it? Given the constraints of time and attention, it seems hard to avoid this.
4Morendil
Doesn't work when you link to a discussion comment: you can't tell from the URL what the link points to, so you have to follow it (thank goodness for tabs), breaking the flow. No no no. Please keep 'em coming. Just, you know: spend more time revising, most of which effort should consist of deleting stuff. Case in point, if the thread post isn't about the Knox case, then just delete every para which is a reference to the Knox case. Most of the time ruthless deletion improves your writing to a surprising extent. Don't censor yourself in the writing phase, but do delete more in revising. For more on this see Peter Elbow's Writing With Power. You can do what I do: save the long version to a local text file, "in case you ever need those words again".
5komponisto
You know, you're right. I just realized that the whole section can be cut, and the post still flows. It hadn't occurred to me because the thoughts were linked in my mind -- but that doesn't mean they need to be linked in the post.
3Morendil
Welcome to the club. This is one of the things that makes writing hard; you can never read your own stuff quite as a reader sees it. The Knox reference in the para starting with "In the vanishingly unlikely event..." is now even more jarring. But the part of that para referencing "the model" continues from the previous para, so rather than delete it I'd try to reword it. Your "core" para is the one that contains the idea, "we are not stuck with the inferential powers of our ancestors" and goes on to discuss "epistemic technology". A typical good-writing suggestion is to find a way to move the key idea from where it is often found, buried in the middle of the article, to the very top. (Memorable quote which has helped me internalize this advice: "Your article is not a mystery novel. Don't keep the reader guessing until the punchline.") I wouldn't worry about "EDIT" marks, not in top level posts. Just accept that the discussion can reflect past versions, and make the post the best version you can.
2orthonormal
Agree with Morendil about the paragraph beginning "In the vanishingly unlikely event...". Without the earlier references, it's not good to have your example of something you're sure of be something that a newcomer or Googler could find so controversial. I'd suggest you either swap it for something else in which the very probably correct view is also the mainstream one within the pool of possible readers, or failing that, put your first link to your old post here instead of at the paragraph beginning with "Previously...".
0komponisto
Done. (Good catch.)
1Paul Crowley
A tricky point! But I think I would worry if I was ignoring a highly-scored argument.
1Nick_Tarleton
This may be true if you measure from inside the universe, but certainly isn't if you can measure from outside, including observing other quantum branches. (Hey, you did say "in principle.")
0[anonymous]
You'd have to examine a lot more, but certainly there would still in principle be some finite amount of information (though much of it unmeasurable within the universe, possibly including some in other quantum branches) that would suffice to run the physics backwards (with a finite computation) and figure out what happened.

I now suspect that misleading imagery may be at work here. A million statements -- that sounds like a lot, doesn't it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn't it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one -- in which case, poof!, there goes your calibration.

That would be misleading imagery. I don't...

2Paul Crowley
No, it's a description of a potential failing in the intuition pump that the imagery sets up.
-2Kevin

This is a side-point, perhaps, but something to take into account when assigning probabilities is that while Amanda Knox is not guilty, she is certainly a liar.

When confronting someone known to be lying during something as high stakes as a murder trial, people assign them a much higher probability of guilt, because someone that lies during a murder trial is actually more likely to have committed murder. That seems to be useful evidence when we are assigning numerical probabilities, but it was a horrific bias for the judge and jury of the case.

Edit: To orthonormal, yes, that is what I meant, thank you. I also agree that it's possible that her being a sociopath and/or not neurotypical confused the prosecutor.

5orthonormal
IAWYC (and don't understand the downvotes); the point in the last paragraph is a key one. Evidence that a suspect is lying should raise the probability of their guilt, but not nearly to the extent that it actually sways judges and juries (because people have the false idea that everyone but perpetrators will be telling the truth).
4billswift
People lie all the time, mostly to protect their self-image or their image in others' minds. Just because it was done during a trial does not mean they are more likely to have committed the crime. Just as often people misremember, forget things they said before, or remember things they didn't mention before.
1Kevin
I think if we compare the set of all accused murderers that lie during their trials to those that tell the truth, the whole truth, and nothing but the truth, a higher percentage of liars will be guilty. It's improper reasoning, however, to use that as the reason for convicting someone of murder. I think there is a significant chance she was in the house at the time of the murder or otherwise knew something that she didn't tell the police, and that major lie could have really confused the prosecutor, who was also the interrogator when she implicated Patrick Lumumba.
0[anonymous]
I'm not saying that is correct, I'm identifying a cognitive bias that helped to convict Knox and Solecito.
0komponisto
I've addressed the relationship between legal and Bayesian reasoning here. In general, I think we should keep discussion of the Knox case to the post dedicated to that subject. Here I'll just note that the meme about Knox being a "liar" derives from the allegation of "changing stories", which is an uninformed misconception.
2orthonormal
Sorry to put this here instead of the other thread, but I don't think this actually came up there: It can derive from other sources as well. I ran into the case on the Eyes for Lies blog, written by an experimentally identified "truth wizard" (boy do I hate that term) with a pretty impressive track record for judging liars from their media appearances. The author sees a number of telltale signs of lying and of sociopathy. Now this shouldn't be admissible in court, and it's not unassailable Bayesian evidence that Amanda Knox is a liar or a sociopath (even these truth wizards are wrong on the order of 5% of the time). But it is evidence of those. (Still, being a sociopath only moderately raises the odds of being involved in the murder, and those are very low given the other facts of the case.)
3komponisto
I am strongly tempted to defy the data here. In fact, looking at the blog, I didn't find much data. There was a link to an unimpressive article by a psychoanalyst, with some not-particularly-expert-sounding comments from the blog author -- who also admitted to not being able to tell whether Knox was lying during the testimony without hearing the questions. Furthermore, the author's understanding of the facts of the case left a lot to be desired, to put it mildly. But even if we grant that this person has a tested above-average ability to identify characteristic signs of lying/sociopathy, and has identified Knox as possessing some of these signs (an assertion I didn't actually find, though I could have missed it), I'd want to know a lot more: what sort of likelihood ratios are we talking about? (I.e. what fraction of non-sociopaths also exhibit these signs?) Exactly what is this person's error rate? What do other "wizards" say in independent testing with strict experimental protocols? Etc. Then there's also the theoretical question: if this evidence is truly worth paying attention to, why shouldn't it be admissible in court? (Presumably there's no danger of abuse of police power or similar, so the reason for exclusion must have to do with the evidentiary strength or lack thereof.)
3orthonormal
Hmm. I was going to say that it's really a form of private evidence, if these "truth wizards" can tell more accurately on a subconscious level than they can consciously explain the reasons for. But this basically puts them in the same boat as other expert witnesses, whose authority and probity basically have to be trusted (or countered by another expert of the same type). Like I said, the usual figure is 5% false positives, and this person did list a recent case where they offered an opinion on the blog and later found themselves mistaken. Their track record otherwise looks pretty good. Why? (Serious question.) It doesn't seem to me that there's strong evidence in the other direction, just a low prior of a random person being a sociopath. But given the way that this case has gone, it's worth considering the hypothesis that Amanda Knox is a sociopath who is innocent of this particular crime, but suspected nonetheless because of her atypical behavior during the investigation. The prosecutor does appear to be a hack with an affinity for farfetched conspiracies, but he didn't try that in every case he's touched -- it's reasonable to suspect that something in Knox's interrogation set him down that trail, and one plausible hypothesis is that she wasn't acting the way a neurotypical human being would act in that situation. Indeed, there are plenty of bits of evidence you mentioned to this effect, but you (rightly) treated them as mostly irrelevant to the question of whether she committed the crime. They are, however, good evidence that she's not neurotypical, and Eyes for Lies' analysis further supports that theory.
4komponisto
We may need to do some tabooing. My understanding is that "sociopath" is a much narrower category than "not neurotypical"; in particular, I was under the impression that sociopathy involved a lack of empathy. That doesn't appear to characterize Knox from anything else I have come across (there are perhaps one or two anecdotes that you could retrospectively regard as consistent with that assumption, but only if you didn't know anything else -- most information about Knox from her hometown points in the opposite direction). Start here, here, here, and here (4:50). But you may be right in the sense that I may be overestimating P(Guilty|Sociopath).
0Kevin
On applying the word liar, I wasn't intending to allude to an existing meme. First, she was found guilty of trying to implicate Patrick Lumumba in the murder. I understand she did it under duress. I'm not sure if "said under duress" changes when we can apply the word liar, but I agree that liar is a charged word. Second, I mean that I am positive she has told at least one lie while on the witness stand. There are many aspects of the defense's story that don't quite make sense. They, like the prosecution, are making up stories about what exactly happened to Meredith Kercher that night. Also, in Italian court, defendants are legally allowed to lie on the witness stand; she was not expected to tell nothing but the truth during the trial.
2wedrifid
Can we please keep discussion of this particular court case in the relevant thread? We really don't need the politics of near mode 'justice' spreading too much into loosely related topics.
1Kevin
I was actually going to post about this in the meta-thread until I saw your reply, but I think orthonormal's statement "I don't think this actually came up there" applies for the most part. Let's please not meta-discuss outside of the meta-thread. I would however be fine if a moderator could move this entire thread to the Amanda Knox post, but I don't think that's possible. Edit: Also, discussing why the prosecution and jury and judge believed Knox and Solecito guilty with absolute certainty seems relevant.
2wedrifid
A single request is just a polite alternative (and precursor) to systematic downvoting.
1komponisto
Reply here.