I'd like to recast the problem this way: we know we're running on error-prone hardware, but standard probability theory assumes that we're running on errorless hardware, and seems to fail, at least in some situations, when running on error-prone hardware. What is the right probability theory and/or decision theory for running on error-prone hardware?
ETA: Consider ciphergoth's example:
do you think you could make a million statements along the lines of "I will not win the lottery" and not be wrong once? If not, you can't justify not playing the lottery, can you?
This kind of reasoning can be derived from standard probability theory and would work fine on someone running errorless hardware. But it doesn't work for us.
We need to investigate this problem systematically, and not just make arguments about whether we're too confident or not confident enough, trying to push the public consensus back and forth. The right answer might look completely different: perhaps we need different kinds of confidence, or multiple levels of it, or upper and lower bounds on probability estimates.
One simple example: do you think you could make a million statements along the lines of "I will not win the lottery" and not be wrong once? If not, you can't justify not playing the lottery, can you?
You propose to ignore the "odd" errors humans sometimes make while calculating a probability for some event. However, errors do occur, even when judging the very first case. And they (at least some of them) occur randomly. When you believe you have correctly calculated the probability, you just might have made an error anywhere in the calculation.
If you stay at the "socially accepted" levels of confidence, those errors average out pretty fast; but if you make even one error per 10^5 calculations, you should not assign probabilities smaller than 1/10^5. Otherwise a 10000-to-1 bet between you and me (a fair game from your perspective) will have a positive expected value for me, purely because of the errors you could have made in your reasoning.
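A minimal back-of-the-envelope sketch of this point in Python (the per-calculation error rate and the 10000-to-1 odds come from the comment above; the probability-if-I-erred figure and everything else are my own illustrative assumptions):

```python
# Sketch: why a 10^-5 chance of having botched your own calculation caps
# how small a probability you can usefully assign.
# You calculate P(event) = 1/10001, so a 10000-to-1 bet looks exactly fair to you.

p_claimed  = 1 / 10001   # the probability you calculated for the event
error_rate = 1e-5        # chance your whole calculation is silently wrong (from the comment)
p_if_wrong = 0.5         # assumed probability of the event when you erred (a made-up stand-in)

# Fold your own fallibility into an effective probability:
p_effective = (1 - error_rate) * p_claimed + error_rate * p_if_wrong

# The bet: if the event happens you pay me 10000, otherwise I pay you 1.
my_expected_value = p_effective * 10000 - (1 - p_effective) * 1

print(f"effective probability: {p_effective:.6g}")            # ~1.05e-4 instead of ~1e-4
print(f"my expected value per bet: {my_expected_value:+.4f}")  # ~+0.05, i.e. the "fair" bet favors me
```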
This is another advantage an AI might have over humans: if the hardware is good enough, probability assignments below 10^-5 might actually be reasonable.
But Drahflow did just justify it. He said you're running on error-prone hardware. Now, there's still the question of how often the hardware makes errors, and there's the problem of privileging the hypothesis (thinking wrongly about the lottery can't make the probability of a ticket winning more than 10^-8, no matter how wrong you are), and there's the horrible LHC inconsistency, but the opposing position is not unjustified. It has justification that goes beyond just social modesty: a consistent trend in which people form confidence bounds that are too narrow on hard problems (and, to a lesser extent, too wide on easy problems). If you went by the raw experiments, then "99% probability" would translate into 40% surprises, because (a) people are that stupid and (b) people have no grasp of what the phrase "99% probability" means.
Now, if it is the case that she didn't, then it follows that, given sufficient information about how-the-world-is, one's probability estimate could be made arbitrarily close to 0.
What, like 1/3^^^3? There isn't that much information in the universe, and come to think of it, I'm not sure I can conceive of any stream of evidence which would drive the probability that low in the Knox case, because there are hypotheses much less complicated than that in which you're in a computer simulation expressly created for the purpose of deluding you about the Amanda Knox case.
I thought I was stating a mathematical tautology. I didn't say there was enough information in the universe to get below 1/3^^^3. The point was only that the information controls the probability.
The fundamental question of rationality is: why do you believe what you believe? As a rationalist, you can't just pull probabilities out of your rear end. And now here's the kicker: that includes the probability of your model being wrong. The latter must, paradoxically but necessarily, be part of your model itself. If you're uncertain, there has to be a reason you're uncertain; if you expect to change your mind later, you should go ahead and change your mind now.
You're just telling people to pull different probabilities out of their rear end. Framing th...
What are the odds that, given that I didn't make a mistake pressing the buttons, my electronic calculator (which appears to be in proper working order) will give a wrong answer on a basic arithmetic problem that it should be able to solve?
Without, ultimately, trusting science more than intuition, there's no hope of making epistemic progress.
That's not really true. 10,000 hours of deliberate practice at making predictions in a given field will improve your intuition a lot. Intuition isn't fixed.
Summary: P(I'm right | my theory is good / I'm not having a brain fart) can be much more extreme than P(I'm right). Over time, we can become more confident in our theories.
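A toy illustration of that summary, with made-up numbers (the 0.99 and 0.5 below are my own assumptions, chosen only to make the gap visible):

```python
# Law of total probability: the unconditional P(I'm right) mixes the
# "good theory" case with the "brain fart" case.

p_right_given_good = 0.999999  # P(I'm right | my theory is good) -- can be very extreme
p_right_given_fart = 0.5       # P(I'm right | I'm having a brain fart) -- assume a coin flip
p_good             = 0.99      # P(my theory is good / no brain fart) -- assumed

p_right = p_right_given_good * p_good + p_right_given_fart * (1 - p_good)
print(p_right)  # ~0.995: far less extreme than 0.999999
```

As p_good creeps toward 1 (that is, as we become more confident in our theories over time), the unconditional probability approaches the extreme conditional one.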
"Generalizing from One Example" and "Reference Class of the Unreferenceclassable" links are both broken.
In accordance with a suggestion that turned out to be rather good, I've now deleted several paragraphs of unnecessary material from the post. (The contents of these paragraphs, having to do with previous posts of mine, were proving to be a distraction; luckily the post still flows without them.)
Perhaps too much is being made of the "arbitrarily close to zero" remark. Sure, there isn't enough information in the universe to get to one-over-number-made-with-up-arrow-notation. But there's certainly enough to get to, say, one in a thousand, or one in a million; and while this isn't arbitrarily close to zero in a mathematical sense, it's good enough for merely human purposes. And what's worse, the word 'arbitrary' is distracting from the actual argument being made, in what looks to me like a rather classic case of "Is that your real objection?"
A probability estimate is not a measure of "confidence" in some psychological sense.
This is one of the possible interpretations of probability. To say that this interpretation is wrong requires an argument, not simply you saying that your interpretation is the correct one.
As we acquire more and more information about the world, our Bayesian probabilities should become more and more confident.
How do we know that we are acquiring more real information? The number of open questions in science grows. It doesn't shrink.
How do we know that we are acquiring more real information?
Because Archimedes didn't have a microwave.
I think it's fair to compare the LHC with past scientific experiments, but if you do, you should remember that no past scientific experiment destroyed the world, and therefore you don't get a prior probability greater than 0 by that process.
The LHC didn't even work the first time around. You could say the predictions of how the LHC was supposed to work were wrong. There are, however, millions of different ways the LHC could turn out results that nobody expects which don't involve the LHC blowing up the planet.
There have been a number of posts recently on the topic of beliefs, and how fragile they can be. They would benefit A LOT from a link to Making Beliefs Pay Rent.
When you say Amanda Knox either killed her roommate, or she didn't, you've moved from a universe of rational beliefs to that of human-responsibility models. It's very unclear (to me) what experience you're predicting with "killed her roommate". This confusion, not any handling of evidence or Bayesian updates, explains a large divergence in the estimates that people give. They're giving estim...
How do you get from "uncertainty exists in the map, not the territory" to the following ?
given sufficient information about how-the-world-is, one's probability estimate could be made arbitrarily close to 0
One's uncertainty about the here-and-now, perhaps. In criminal cases we are dealing with backward inference, and information is getting erased all the time. Right about now perhaps the only way you could get "arbitrarily close to 0" is by scanning Knox's brain or the actual perpetrator's brain; if both should die, we would reach a ...
I now suspect that misleading imagery may be at work here. A million statements -- that sounds like a lot, doesn't it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn't it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one -- in which case, poof!, there goes your calibration.
That would be misleading imagery. I don't...
This is a side point, perhaps, but something to take into account when assigning probabilities is that while Amanda Knox is not guilty, she is certainly a liar.
When confronting someone known to be lying during something as high-stakes as a murder trial, people assign them a much higher probability of guilt, because someone who lies during a murder trial is in fact more likely to have committed murder. That seems like useful evidence when we are assigning numerical probabilities, but it was a horrific bias for the judge and jury of the case.
Edit: To orthnormal, yes, that is what I meant, thank you. I also agree that it's possible that her being a sociopath and/or not neurotypical confused the prosecutor.
Related: Horrible LHC Inconsistency, The Proper Use of Humility
Overconfidence, I've noticed, is a big fear around these parts. Well, it is a known human bias, after all, and therefore something to be guarded against. But I am going to argue that, at least in aspiring-rationalist circles, people are too afraid of overconfidence, to the point of overcorrecting -- which, not surprisingly, causes problems. (Some may detect implications here for the long-standing Inside View vs. Outside View debate.)
Here's the typical worry, voiced by Eliezer among others: if you claim 99.9999% confidence, you're claiming you could make a million equally authoritative statements and be wrong, on average, at most once.
I now suspect that misleading imagery may be at work here. A million statements -- that sounds like a lot, doesn't it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn't it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one -- in which case, poof!, there goes your calibration.
No wonder, then, that people claim that we humans can't possibly hope to attain such levels of certainty. Look, they say, at all those times in the past when people -- even famous scientists! -- said they were 99.999% sure of something, and they turned out to be wrong. My own adolescent self would have assigned high confidence to the truth of Christianity; so where do I get the temerity, now, to say that the probability of this is 1-over-oogles-and-googols?
[EDIT: Unnecessary material removed.]
A probability estimate is not a measure of "confidence" in some psychological sense. Rather, it is a measure of the strength of the evidence: how much information you believe you have about reality. So, when judging calibration, it is not really appropriate to imagine oneself, say, judging thousands of criminal trials, and getting more than a few wrong here and there (because, after all, one is human and tends to make mistakes). Let me instead propose a less misleading image: picture yourself programming your model of the world (in technical terms, your prior probability distribution) into a computer, and then feeding all that data from those thousands of cases into the computer -- which then, when you run the program, rapidly spits out the corresponding thousands of posterior probability estimates. That is, visualize a few seconds or minutes of staring at a rapidly-scrolling computer screen, rather than a lifetime of exhausting judicial labor. When the program finishes, how many of those numerical verdicts on the screen are wrong?
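Here is a toy version of that picture, offered as a sketch only: every number in it (the prior on guilt, the evidence likelihoods, the number of cases) is an arbitrary assumption for illustration, not a claim about real trials. The point is simply that if the model matches the process generating the cases, the confident verdicts scrolling past are wrong at roughly the advertised rate, and no amount of "fatigue" enters into it:

```python
import random

random.seed(0)

PRIOR_GUILT = 0.3    # assumed prior probability of guilt
N_CASES = 10_000     # how many cases we feed the program

# Assumed likelihoods: how often a piece of incriminating evidence appears,
# given guilt vs. innocence.
P_EVIDENCE_GIVEN_GUILTY = 0.8
P_EVIDENCE_GIVEN_INNOCENT = 0.05

confident = 0
confident_and_wrong = 0

for _ in range(N_CASES):
    guilty = random.random() < PRIOR_GUILT
    # Three independent pieces of evidence per case, drawn from the same model.
    evidence = [random.random() < (P_EVIDENCE_GIVEN_GUILTY if guilty
                                   else P_EVIDENCE_GIVEN_INNOCENT)
                for _ in range(3)]

    # Bayesian update: posterior odds = prior odds * likelihood ratio per datum.
    odds = PRIOR_GUILT / (1 - PRIOR_GUILT)
    for e in evidence:
        if e:
            odds *= P_EVIDENCE_GIVEN_GUILTY / P_EVIDENCE_GIVEN_INNOCENT
        else:
            odds *= (1 - P_EVIDENCE_GIVEN_GUILTY) / (1 - P_EVIDENCE_GIVEN_INNOCENT)
    posterior = odds / (1 + odds)

    # Count the verdicts issued at better than 99% confidence, and how many are wrong.
    if posterior > 0.99 or posterior < 0.01:
        confident += 1
        if (posterior > 0.5) != guilty:
            confident_and_wrong += 1

print(f"{confident} verdicts at >99% confidence, {confident_and_wrong} of them wrong")
```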
I don't know about you, but modesty seems less tempting to me when I think about it in this way. I have a model of the world, and it makes predictions. For some reason, when it's just me in a room looking at a screen, I don't feel the need to tone down the strength of those predictions for fear of unpleasant social consequences. Nor do I need to worry about the computer getting tired from running all those numbers.
In the vanishingly unlikely event that Omega were to appear and tell me that, say, Amanda Knox was guilty, it wouldn't mean that I had been too arrogant, and that I had better not trust my estimates in the future. What it would mean is that my model of the world was severely stupid with respect to predicting reality. In which case, the thing to do would not be to humbly promise to be more modest henceforth, but rather, to find the problem and fix it. (I believe computer programmers call this "debugging".)
A "confidence level" is a numerical measure of how stupid your model is, if you turn out to be wrong.
The fundamental question of rationality is: why do you believe what you believe? As a rationalist, you can't just pull probabilities out of your rear end. And now here's the kicker: that includes the probability of your model being wrong. The latter must, paradoxically but necessarily, be part of your model itself. If you're uncertain, there has to be a reason you're uncertain; if you expect to change your mind later, you should go ahead and change your mind now.
This is the first thing to remember in setting out to dispose of what I call "quantitative Cartesian skepticism": the view that even though science tells us the probability of such-and-such is 10^-50, well, that's just too high of a confidence for mere mortals like us to assert; our model of the world could be wrong, after all -- conceivably, we might even be brains in vats.
Now, it could be the case that 10^-50 is too low of a probability for that event, despite the calculations; and it may even be that that particular level of certainty (about almost anything) is in fact beyond our current epistemic reach. But if we believe this, there have to be reasons we believe it, and those reasons have to be better than the reasons for believing the opposite.
I can't speak for Eliezer in particular, but I expect that if you probe the intuitions of people who worry about 10^-6 being too low of a probability that the Large Hadron Collider will destroy the world -- that is, if you ask them why they think they couldn't make a million statements of equal authority and be wrong on average once -- they will cite statistics about the previous track record of human predictions: their own youthful failures and/or things like Lord Kelvin calculating that evolution by natural selection was impossible.
To which my reply is: hindsight is 20/20 -- so how about taking advantage of this fact?
Previously, I used the phrase "epistemic technology" in reference to our ability to achieve greater certainty through some recently-invented methods of investigation than through others that are native unto us. This, I confess, was an almost deliberate foreshadowing of my thesis here: we are not stuck with the inferential powers of our ancestors. One implication of the Bayesian-Jaynesian-Yudkowskian view, which marries epistemology to physics, is that our knowledge-gathering ability is as subject to "technological" improvement as any other physical process. With effort applied over time, we should be able to increase not only our domain knowledge, but also our meta-knowledge. As we acquire more and more information about the world, our Bayesian probabilities should become more and more confident.
If we're smart, we will look back at Lord Kelvin's reasoning, find the mistakes, and avoid making those mistakes in the future. We will, so to speak, debug the code. Perhaps we couldn't have spotted the flaws at the time; but we can spot them now. Whatever other flaws may still be plaguing us, our score has improved.
In the face of precise scientific calculations, it doesn't do to say, "Well, science has been wrong before". If science was wrong before, it is our duty to understand why science was wrong, and remove known sources of stupidity from our model. Once we've done this, "past scientific predictions" is no longer an appropriate reference class for second-guessing the prediction at hand, because the science is now superior. (Or anyway, the strength of the evidence of previous failures is diminished.)
That is why, with respect to Eliezer's LHC dilemma -- which amounts to a conflict between avoiding overconfidence and avoiding hypothesis-privileging -- I come down squarely on the side of hypothesis-privileging as the greater danger. Psychologically, you may not "feel up to" making a million predictions, of which no more than one can be wrong; but if that's what your model instructs you to do, then that's what you have to do -- unless you think your model is wrong, for some better reason than a vague sense of uneasiness. Without, ultimately, trusting science more than intuition, there's no hope of making epistemic progress. At the end of the day, you have to shut up and multiply -- epistemically as well as instrumentally.