The one comes to you and loftily says: “Science doesn’t really know anything. All you have are theories—you can’t know for certain that you’re right. You scientists changed your minds about how gravity works—who’s to say that tomorrow you won’t change your minds about evolution?”

Behold the abyssal cultural gap. If you think you can cross it in a few sentences, you are bound to be sorely disappointed.

In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away. There are good sources of information and bad sources of information. If scientists have changed their stories ever in their history, then science cannot be a true Authority, and can never again be trusted—like a witness caught in a contradiction, or like an employee found stealing from the till.

Plus, the one takes for granted that a proponent of an idea is expected to defend it against every possible counterargument and confess nothing. All claims are discounted accordingly. If even the proponent of science admits that science is less than perfect, why, it must be pretty much worthless.

When someone has lived their life accustomed to certainty, you can’t just say to them, “Science is probabilistic, just like all other knowledge.” They will accept the first half of the statement as a confession of guilt; and dismiss the second half as a flailing attempt to accuse everyone else to avoid judgment.

You have admitted you are not trustworthy—so begone, Science, and trouble us no more!

One obvious source for this pattern of thought is religion, where the scriptures are alleged to come from God; therefore to confess any flaw in them would destroy their authority utterly; so any trace of doubt is a sin, and claiming certainty is mandatory whether you’re certain or not.1

But I suspect that the traditional school regimen also has something to do with it. The teacher tells you certain things, and you have to believe them, and you have to recite them back on the test. But when a student makes a suggestion in class, you don’t have to go along with it—you’re free to agree or disagree (it seems) and no one will punish you.

This experience, I fear, maps the domain of belief onto the social domains of authority, of command, of law. In the social domain, there is a qualitative difference between absolute laws and nonabsolute laws, between commands and suggestions, between authorities and unauthorities. There seems to be strict knowledge and unstrict knowledge, like a strict regulation and an unstrict regulation. Strict authorities must be yielded to, while unstrict suggestions can be obeyed or discarded as a matter of personal preference. And Science, since it confesses itself to have a possibility of error, must belong in the second class.

(I note in passing that I see a certain similarity to they who think that if you don’t get an Authoritative probability written on a piece of paper from the teacher in class, or handed down from some similar Unarguable Source, then your uncertainty is not a matter for Bayesian probability theory.2 Someone might—gasp!—argue with your estimate of the prior probability. It thus seems to the not-fully-enlightened ones that Bayesian priors belong to the class of beliefs proposed by students, and not the class of beliefs commanded you by teachers—it is not proper knowledge.)

The abyssal cultural gap between the Authoritative Way and the Quantitative Way is rather annoying to those of us staring across it from the rationalist side. Here is someone who believes they have knowledge more reliable than science’s mere probabilistic guesses—such as the guess that the Moon will rise in its appointed place and phase tomorrow, just like it has every observed night since the invention of astronomical record-keeping, and just as predicted by physical theories whose previous predictions have been successfully confirmed to fourteen decimal places. And what is this knowledge that the unenlightened ones set above ours, and why? It’s probably some musty old scroll that has been contradicted eleventeen ways from Sunday, and from Monday, and from every day of the week. Yet this is more reliable than Science (they say) because it never admits to error, never changes its mind, no matter how often it is contradicted. They toss around the word “certainty” like a tennis ball, using it as lightly as a feather—while scientists are weighed down by dutiful doubt, struggling to achieve even a modicum of probability. “I’m perfect,” they say without a care in the world, “I must be so far above you, who must still struggle to improve yourselves.”

There is nothing simple you can say to them—no fast crushing rebuttal. By thinking carefully, you may be able to win over the audience, if this is a public debate. Unfortunately you cannot just blurt out, “Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name ‘certain’ are less assured than the least of our mighty hypotheses.” It’s a difference of life-gestalt that isn’t easy to describe in words at all, let alone quickly.

What might you try, rhetorically, in front of an audience? Hard to say . . . maybe:

  • “The power of science comes from having the ability to change our minds and admit we’re wrong. If you’ve never admitted you’re wrong, it doesn’t mean you’ve made fewer mistakes.”
  • “Anyone can say they’re absolutely certain. It’s a bit harder to never, ever make any mistakes. Scientists understand the difference, so they don’t say they’re absolutely certain. That’s all. It doesn’t mean that they have any specific reason to doubt a theory—absolutely every scrap of evidence can be going the same way, all the stars and planets lined up like dominos in support of a single hypothesis, and the scientists still won’t say they’re absolutely sure, because they’ve just got higher standards. It doesn’t mean scientists are less entitled to certainty than, say, the politicians who always seem so sure of everything.”
  • “Scientists don’t use the phrase ‘not absolutely certain’ the way you’re used to from regular conversation. I mean, suppose you went to the doctor, and got a blood test, and the doctor came back and said, ‘We ran some tests, and it’s not absolutely certain that you’re not made out of cheese, and there’s a non-zero chance that twenty fairies made out of sentient chocolate are singing the “I love you” song from Barney inside your lower intestine.’ Run for the hills, your doctor needs a doctor. When a scientist says the same thing, it means that they think the probability is so tiny that you couldn’t see it with an electron microscope, but the scientist is willing to see the evidence in the extremely unlikely event that you have it.”
  • “Would you be willing to change your mind about the things you call ‘certain’ if you saw enough evidence? I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can’t say you’re absolutely certain of the Virgin Birth. For technical reasons of probability theory, if it’s theoretically possible for you to change your mind about something, it can’t have a probability exactly equal to one. The uncertainty might be smaller than a dust speck, but it has to be there. And if you wouldn’t change your mind even if God told you otherwise, then you have a problem with refusing to admit you’re wrong that transcends anything a mortal like me can say to you, I guess.”
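The “technical reasons of probability theory” in that last point can be made concrete. A minimal sketch of Bayes’s rule (the numbers are hypothetical, chosen only for illustration) shows that a belief held with probability exactly 1 can never be moved by any evidence, while even an extreme 0.999 still responds:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(H | E) from P(H), P(E | H), and P(E | ~H)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# A very confident belief can still be argued down by contrary evidence:
print(bayes_update(0.999, 0.01, 0.99))  # roughly 0.91

# But probability exactly 1 is frozen forever, no matter the evidence:
print(bayes_update(1.0, 0.000001, 0.999999))  # exactly 1.0
```

The division by `p_evidence` is safe here only because the prior is nonzero; the same formula shows why a prior of exactly 0 can never be argued up, either.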

But, in a way, the more interesting question is what you say to someone not in front of an audience. How do you begin the long process of teaching someone to live in a universe without certainty?

I think the first, beginning step should be understanding that you can live without certainty—that if, hypothetically speaking, you couldn’t be certain of anything, it would not deprive you of the ability to make moral or factual distinctions. To paraphrase Lois Bujold, “Don’t push harder, lower the resistance.”

One of the common defenses of Absolute Authority is something I call “The Argument from the Argument from Gray,” which runs like this:

  • Moral relativists say:
    • The world isn’t black and white, therefore:
    • Everything is gray, therefore:
    • No one is better than anyone else, therefore:
    • I can do whatever I want and you can’t stop me bwahahaha.
  • But we’ve got to be able to stop people from committing murder.
  • Therefore there has to be some way of being absolutely certain, or the moral relativists win.

Reversed stupidity is not intelligence. You can’t arrive at a correct answer by reversing every single line of an argument that ends with a bad conclusion—it gives the fool too much detailed control over you. Every single line must be correct for a mathematical argument to carry. And it doesn’t follow, from the fact that moral relativists say “The world isn’t black and white,” that this is false, any more than it follows, from Stalin’s belief that 2 + 2 = 4, that “2 + 2 = 4” is false. The error (and it only takes one) is in the leap from the two-color view to the single-color view, that all grays are the same shade.

It would concede far too much (indeed, concede the whole argument) to agree with the premise that you need absolute knowledge of absolutely good options and absolutely evil options in order to be moral. You can have uncertain knowledge of relatively better and relatively worse options, and still choose. It should be routine, in fact, not something to get all dramatic about.

I mean, yes, if you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B. It is not a necessary condition.

Oh, and: Logical fallacy: Appeal to consequences of belief.

Let’s see, what else do they need to know? Well, there’s the entire rationalist culture which says that doubt, questioning, and confession of error are not terrible shameful things.

There’s the whole notion of gaining information by looking at things, rather than being proselytized. When you look at things harder, sometimes you find out that they’re different from what you thought they were at first glance; but it doesn’t mean that Nature lied to you, or that you should give up on seeing.

Then there’s the concept of a calibrated confidence—that “probability” isn’t the same concept as the little progress bar in your head that measures your emotional commitment to an idea. It’s more like a measure of how often, pragmatically, in real life, people in a certain state of belief say things that are actually true. If you take one hundred people and ask them each to make a statement of which they are “absolutely certain,” how many of these statements will be correct? Not one hundred.

If anything, the statements that people are really fanatic about are far less likely to be correct than statements like “the Sun is larger than the Moon” that seem too obvious to get excited about. For every statement you can find of which someone is “absolutely certain,” you can probably find someone “absolutely certain” of its opposite, because such fanatic professions of belief do not arise in the absence of opposition. So the little progress bar in people’s heads that measures their emotional commitment to a belief does not translate well into a calibrated confidence—it doesn’t even behave monotonically.

As for “absolute certainty”—well, if you say that something is 99.9999% probable, it means you think you could make one million equally strong independent statements, one after the other, over the course of a solid year or so, and be wrong, on average, around once. This is incredible enough. (It’s amazing to realize we can actually get that level of confidence for “Thou shalt not win the lottery.”) So let us say nothing of probability 1.0. Once you realize you don’t need probabilities of 1.0 to get along in life, you’ll realize how absolutely ridiculous it is to think you could ever get to 1.0 with a human brain. A probability of 1.0 isn’t just certainty, it’s infinite certainty.
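The arithmetic behind that figure is worth a quick sanity check (a sketch only; the `log_odds_bits` helper is just the standard log-odds transform, not something from the text): at 99.9999% per statement, a million independent statements yield about one expected error, and as the probability approaches 1.0 the implied odds blow up to infinity, which is the sense in which 1.0 is infinite certainty.

```python
import math

def expected_errors(p_true, n_statements):
    """Expected number of false claims among n_statements independent
    statements, each asserted with probability p_true of being correct."""
    return n_statements * (1 - p_true)

def log_odds_bits(p):
    """Log-odds of probability p, measured in bits of evidence."""
    return math.log2(p / (1 - p))

print(expected_errors(0.999999, 1_000_000))  # about 1 error per million
print(log_odds_bits(0.999999))               # ~19.9 bits of evidence
# log_odds_bits(1.0) would divide by zero: infinite certainty.
```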

In fact, it seems to me that to prevent public misunderstanding, maybe scientists should go around saying “We are not infinitely certain” rather than “We are not certain.” The latter phrase, in ordinary discourse, suggests you know some specific reason for doubt.

1See “Professing and Cheering,” collected in Map and Territory and findable at rationalitybook.com and lesswrong.com/rationality.

2See “Focus Your Uncertainty” in Map and Territory.


For all your talk about The One, I'm going to start to call you Morpheus.

I wonder what your life must be like. The way you write, it sounds as if you spend a lot of your time trying to convince crazy people (by which I mean most of humanity, of course) to be less crazy and more rational, like us. Why not just ignore them?

Then I looked at your Wikipedia entry and noticed how young you are. Ah! When I was your age, I was also trying to convert everybody. My endless arguments about software development methods, circa 1994, are still in Google's Usenet archive. So, who am I to talk?

(Note: Mostly I write comments that complain about...

I really enjoy your deep analysis of topics, but might I suggest writing shorter entries a bit more often?

Sam, if I write shorter entries, I'll never get everything said.

James: Snort. One of these days I'll do a post on "maturity bias".

Oh Eliezer, why'd you have to toss that parenthetical in about priors? The rest of the post is so wonderful. But the priors thing... hell, for my part, the objection isn't to priors that aren't imposed by some Authority, it's priors that are completely pulled out of one's arse. Demanding something beyond the whim of some metaphorical marble bouncing about in one's brain before one gets to make a probability statement is hardly the same as demanding capital-A-Authority.

The main reason people think a probability of 100% is necessary is that they assume that any other probability implies a subjective feeling of doubt, and they are aware that it is impossible to go through life in a continuous state of subjective doubt about whether or not food is necessary to sustain one's life and the like.

Once someone has separated the probability from this subjective feeling, a person can see that a subjective feeling of certainty can be justified in many cases, even though the probability is less than 100%. Once this has been admitted, I think most people would not have a problem with admitting that 100% probabilities are not possible.

PetjaY:
I would rather say that for normal people certainty is ~90% probability; you can notice this by observing that people who say something is certain aren't willing to act in ways that would cause serious harm if they were wrong.

I once thought I had a fast, crushing argument against the existence of God. I would point to various objects around me and ask "What does that do?" e.g. point at a beach ball and they would say "bounce," point at a bird and they would say "sing." And I would triumphantly say, "See, God can't exist!" and they would look at me blankly.

In my mind, every object I had ever seen did its own peculiar thing - that is, it didn't do "just anything." Therefore the idea of omnipotence - the ability to make objects do...

Practically all words (e.g. "dead") actually cut across a continuum; maybe we should reclaim the word "certainty". We are certain that evolution is how life got to be what it is, because the level of doubt is so low you can pretty much forget about it. Any other meaning you could assign to the word "certain" makes it useless, because everything falls on one side.

Denis, you will definitely enjoy this one.

Thinking of science in religious terms makes the whole thing fall over, for everyone. The only way you can have 100% certainty in something is if it's not falsifiable. The only way something can be unfalsifiable is if it is mysterious, ethereal and makes no testable predictions.

My withering rejoinder? "Yes, you may have god. But do you have any knowledge?"

christopherj:
This is an excellent point, an implication that I ought to have deduced myself but totally didn't. This means not only that absolute certainty about reality is impossible to get, but more interestingly that absolute certainty about reality is entirely useless, as it can't make specific predictions. Even if it were something like "can't go faster than the speed of light", being absolutely certain of this would mean that "scientists measuring something going faster than the speed of light because of experimental error" would be a valid prediction, along with "it is an illusion/I am crazy". Since neither experimental result would disprove the certain thing, it must follow that the certain thing can't predict the experimental result.

In fact, I think we can claim that the probability that you're sane should be an upper bound on probabilities you're allowed to claim. Thus to claim arbitrarily high probabilities, you'd need an arbitrarily large group of probably sane people who agree (but then what are the odds that you just imagined the group of people who agree with you?). Since you can't be absolutely certain that you and all your group are perfectly sane (along with the possibility of a coincidentally matching mass hallucination), that would make for an upper bound on certainty.

In fact the whole group thing would be unnecessary if we admit the possibility that the person we're trying to convince might be insane. Next time someone claims absolute certainty about something, I'll ask them to prove that they're not insane. That should take them into neutral territory that they haven't had time to wall up, and if they did consider that they might be insane it would be an even better argument.

'Any other meaning you could assign to the word "certain" makes it useless because everything falls on one side.'

Yes, exactly. The concept of "certainty" as colloquially used has no referents. It is such a strict standard, the only things that could possibly be referents for it are statements made by an omniscient entity. A statement by any lesser entity could be wrong and therefore could not be a referent. We are beating ourselves up over a concept no more valid than "unicorn."

Ian, your God argument doesn't follow:

1) Objects behave in certain, predictable ways
2) God can make objects behave arbitrarily
4) No objects behave arbitrarily
5) There is no God

Hidden argumentation:

3) Therefore, God WILL make things behave arbitrarily

You can't assume that an omnipotent God will behave in any particular way.

You can't assume that an omnipotent God will behave in any particular way.

What happens when an immovable object meets an irresistible force?

I think you've mischaracterized Ian's argument. He seems to be arguing that because everything in his empirical experience behaves in particular ways and appears incapable of behaving arbitrarily, that this is strong evidence to suggest that no other being could exist which is capable of behaving arbitrarily.

I think the real weakness of this argument is that the characterization of things as behaving in particular ways is way too simplistic. Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other ...

LG - Your objection is only valid if you assume I am starting with the idea of omnipotence and trying to use the evidence to disprove it. In fact, I am starting with the evidence and showing that the idea of omnipotence can't be arrived at without contradiction.

1) Objects behave in certain, predictable ways
2) Therefore the suggestion that someone could make an object behave arbitrarily contradicts the evidence
3) Therefore the idea of "omnipotence" contradicts the evidence
4) Therefore the idea of God contradicts the evidence

It's a different style of reasoning: starting with reality vs. starting with imagination and then using reality only as a test.

Ian, are you arguing that the concept of omnipotence is incoherent, or merely (as Michael seems to have interpreted you:) that we have no reason to believe that any omnipotent entity actually exists?

If you really mean the latter, then I suspect most people here will agree with you: if one does not observe any evidence for omnipotence, and one accepts Occam's razor (as reasonable people do), then one concludes that no omnipotent entity exists, unless and until strong evidence to the contrary comes up.

But it remains the case that the idea of omnipotence is c...

bigjeff5:
When I was growing up in a baptist church, one of the primary arguments for all the evidence that suggests the earth is over four billion years old and that the universe is nearly fourteen billion years old was that God made it to look that way on purpose. That is, when he said "let there be light!" he didn't just make the stars and let the light take its course (which would take between thousands and billions of years, and some light we see now would never reach us at all), but made the stars with a past history and their light already hitting us. Same with all the geological evidence - God just made it look as though it were really old. So the universe was 6,000 years old, but it looked exactly like it would if it were 14 billion years old.

Ostensibly this was to test our faith. However, after thinking about it for a few years after I left high school, I realized that if any of this were the least bit true - if God really did exist, if he really designed a universe specifically to trick people into believing he didn't exist (it's the only valid reason I can think of for doing it - it's even what the preachers think, though they don't put it that way), and thereby send whole swaths of people to hell for no reason other than that they were trying to find the truth (which the Bible does admonish one to seek) - then he has to be the biggest douchebag in the universe.

That's not evidence against the position, though. Really there can never be any evidence against their position - it's theological phlogiston - but it does make it very easy to stop accepting God. Once you do that you realize that a god isn't necessary at all, so why would you believe in one? Especially one that is such a vile, evil, spiteful creature?

Here's an example: some time ago I was discussing evolution with a creationist, and was asked "Can you prove it?" I responded that "prove" isn't the appropriate word, but rather scientists gather and evaluate evidence to see what position the evidence most clearly supports. He crowed in jubilation. "Then you don't have any proof!" he exclaimed.

So my response in that situation has changed. I now respond, "Yes, we have the same level of proof that sends people to death row: We've got the DNA!" That's adapted from S...

Ian, your argument fails not merely because premise 1 isn't established apodictically. (Which is the flaw of inductive reasoning generally, but which, as Eliezer tries to point out to the religious, doesn't mean we don't have good reason to believe it.)

It also fails because we have counterexamples up the wazoo. Michael's point about sentient creatures is one of them. But we can generate a lot of others just by diddling around the space in which we define "objects." Balls bounce and roll, bowling balls just roll, spherical objects generally do...

Eliezer's use of "the one" is not an error or a Matrix reference, it's a deliberate echo of an ancient rabbinical trope. (Right, Eliezer?)

I think Ian makes an important point: people give their ability to imagine something the same weight as evidence. The most gratuitous example of this, relevant here because it's the impetus for inductive probabilism, is the so-called "problem of induction." Say we have two laws concerning the future evolution of some system, call them L1 and L2, such that at some future time t L2(t) gives a result that is defined only as being NOT the result given by L1(t). L1 is based on observation. L2 represents my ability to imagine that my observations will fail to hold at some future time t. The problem of induction is a result of giving MORE weight to L2 than L1.

Actually, I didn't realize "the one comes to us and says" was a rabbinical borrowing until it was pointed out to me. But it seems to have the right tone, and it's syntactical; I care not whether it is grammatical.

Poke, that's a really unhelpful way of thinking about the problem of induction. The problem of induction is a problem of logic in the first instance -- a description of the fact that we do have absolute knowledge of the truth of deductive arguments (conditional on the premises being true) but we don't have absolute knowledge of the truth of inductive arguments. And that's just because the conclusion of a deductive argument is (in some sense) contained in the premises, whereas the conclusion of a generalization isn't contained in the individual observatio... (read more)

Michael: "Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other things." Paul: "It also fails because we have counterexamples up the wazoo."

But even if an object behaves thousands of ways, it is still behaving in those ways and only those ways. If we want to work with it, we must follow cause and effect, we can't simply will it to do what we want. That is the case for all objects I know of, there are no counter-examples.

Z. M. Davis: "are you arguing that the concept of omni...

Paul Gowder,

I think your response is too general. How does the problem of induction being a deductive argument make the conclusion any less absurd? It's a deductive argument that takes as its premise my ability to imagine something being otherwise. That makes sense if you're an Empiricist philosopher, since you accept an Empiricist psychology a priori, but not a lot of sense if you're a scientist or committed to naturalism. Further, the difference you cite between deductive and inductive arguments (that the former is certain and the latter not) is the conclusion of the problem of induction; you can't use it to argue for the problem of induction.

Really like your article. Thanks

Poke: let's attack the problem a different way. You seem to want to cast doubt on the difference along the dimension of certainty between induction and deduction. ("the difference you cite between deductive and inductive arguments (that the former is certain and the latter not), is the conclusion of the problem of induction; you can't use it to argue for the problem of induction")

Either deduction and induction are different along the dimension of certainty, or they're not. So there are four possibilities. induction = certain, deduction = cert... (read more)

Ian C: What about a universal Turing machine?

Maybe you should try telling some parables about people who thought they had certain knowledge. Maybe some of them should include other people who did not think their knowledge was certain.

I cannot accept that Probability must be applied to everything. Which of course indirectly states that there are no absolutes, since probability has no 0 or 1.

If you discard absolutes, you must be willing to accept mysticism and contradictions.

I can create a long list of false or contradictory statements, and anyone who lives by probabilities must obediently tell me that every one of them is possible.

  • "Does God exist?" "Probably not, but it's possible."

  • "Can he create a boulder that he cannot lift?" "Probably not, but i

...
Vladimir_Nesov:
Not quite so. There are a lot of nearly-impossible things, and you are good to call them "impossible", even if technically they aren't. Likewise, some things are so certain that you are good to call them "absolutely certain" even if technically they aren't. See possibility, antiprediction, fallacy of gray, technical explanation, absolute certainty.
Amaroq:
Ah, but you see, I was arguing at the technical level, not on the "it's good to call it this" level. I believe that absolute certainty is required. Not in all, and probably not even in most things. But absolute certainty has to be possible, because without it, I must give technical possibility to self-contradicting statements like this one: "God exists, he is omniscient, infallible, and he can make a boulder that he cannot lift." Can you tell me that all the pieces of that statement are technically possible? P.S. I don't think I commit the fallacy of gray. I accept that there are varying shades of gray. But I believe that there must be a black and a white as well. I also apologize if I seem aggressive. I don't read or post much here. Only when I see something I believe is wrong because I, like you, want there to be less wrong. P.P.S. I am a beginner Objectivist, so my acceptance of black, white, and gray may be subject to change as I learn more.
bigjeff5:
You screwed that up; there is no contradiction there. God must too be omnipotent to make the argument you are looking for. And really, it's not a contradiction anyway. If he is all powerful, then certainly he has the ability to make a rock that he cannot lift if he so chooses. But, since he is all powerful, he can just as easily make that rock liftable again. When you are given an absurd premise, absurd outcomes are logical. Deductive conclusions are only absolute certainties relative to their premises. That is, the conclusion can never be more certain than the premise. In fact, it will be at least as uncertain as the uncertainty of both premises combined. Since the premises can never be certain, the result of deductive reasoning is never certain either, only valid or invalid.
bigjeff5:
I'm curious why this was voted down. If it was because my language was a little harsh, I assure you I did not mean to offend, I simply meant he made a mistake in the wording of the argument - omniscient means "all-knowing", omnipotent means "all-powerful". I'd be surprised to be voted down even though I was right on this matter. If there is a problem with my reasoning after that, please do point it out to me rather than just voting me down. I'm new to Bayes, and as the Amanda Knox Test demonstrated, I often fail at reasoning. If this is such a case I would very much like to know about it. I can't see where I made the mistake though. [Edit to change the Amanda Knox link to the original, instead of the spoiler]
wedrifid:
Your reasoning is correct. The below quote is not self-contradictory. You may consider substituting the 'too' with 'also' or moving the word order around to make the sentence flow better. When you are saying things forcefully, as with "you screwed that up", it pays to be extra careful with wording - higher standards are expected.
bigjeff5:
I was a little careless with "you screwed that up"; I honestly did not intend for it to sound mean, and I could have chosen better words. I simply meant he obviously intended to use the word omnipotent instead of omniscient. Regarding the word too, however, I completely disagree. That is a valid use of the word, unconventional sure, but valid. I've always enjoyed seeing it employed in such a manner. [Edit] Maybe putting "too" before "must" would sound a little nicer to some, but I liked the way "God must too" sounded in my head.
wedrifid:
You were curious as to why you were downvoted. That wording would, I predict, have been a contributing factor. Wording significantly influences tone. That wording came across as more petulant or crude as a follow up to 'screwed up' than an alternative would have.
0bigjeff5
I still don't see it as a very good reason for a down vote when nothing in the post is considered incorrect. I expect not to be up voted if I'm being rude and technically correct, but I don't expect to be down voted. Usually when I'm down voted it is because I'm either factually wrong or I've failed at reasoning. Getting down voted for a phrasing that someone considers a little rude seems odd on this particular website. And honestly, I was not intending to be rude in any way, it is a common phrase when someone makes a mistake. I did not intend to imply anything other than the fact that he used the wrong word in his paradox. In any case, the points aren't a big deal, and someone corrected it anyway. I was just curious if I had made a mistake, because I didn't see one even after looking over what I wrote a second and third time.
3thomblake
Downvotes for rudeness are pretty common. Especially after Defecting by Accident
2komponisto
The order affects the meaning: "must too" doesn't mean "must also"; it means "on the contrary, must!" (Cf. "did too!") I don't think that's the meaning you wanted here.
0bigjeff5
Just noticed this comment when I was looking through my messages for an old comment, and I wanted to respond.

It is the word "too" that is important there, and the usage you describe is only used as an affirmative for contradicting a negative statement (at least, that's proper grammar anyway). For example, if the original statement had been "God must not make a boulder he cannot lift!" and I had responded with "God must too make a boulder he cannot lift!" you would be right. But the original statement is an affirmative statement ("God can make a boulder he cannot lift."), and my own sentence before it is an affirmative (in the grammatical sense - not so much in the "uplifting" sense), so trying to contradict either with an affirmative doesn't make any sense.

Also, I did a Google search, and while using "too" between "must" and another verb is not common, using "must too" to mean "must also" is by far the most common usage I could find. I do admit that other verb-"too"-verb combinations seem to imply contradicting a negation even without the proper context, so that usage is definitely not as clear as I originally thought it would be. I still think it's pretty, though.
0HonoreDB
Are omnipotence and omniscience logically distinct? One can "know how to do something" or "be able to learn something."
0CuSithBell
Under most conceptions, omnipotence certainly entails at least the ability to become omniscient. It doesn't work the other way - knowing how to shoot a three-point shot in basketball doesn't help an omniscient cantaloupe.
0HonoreDB
You don't think it could think its way out of the box? Is causally discrete omniscience really omniscience?
2bigjeff5
If you are going to take the premise that information is the substance and causation of all that exists, then yes, an omniscient being must also be omnipotent. You need that premise first, though, or the omniscient is simply a know-it-all (literally). If no condition exists to change its lack of omnipotence given its current abilities, then no amount of knowledge will allow it to become omnipotent. Omnipotence does not necessarily imply the knowledge necessary to create omniscience, either. The ability is certainly there, but the knowledge may not be. I'm sure if the omnipotent being were clever it could figure out a way to make it happen, though. Usually when someone dreams up an all powerful being, they make it all knowing as a matter of course, and vice versa. At least they do these days, anyway. The Greeks liked their gods to have serious flaws, and I can appreciate that.
0wedrifid
Yes, they are distinct. One can "know it is impossible to do something", for example.
4Dorikka
Please tell us when you are posting a spoiler for a rationality exercise. I clicked through your link and didn't catch that it was a spoiler fast enough for the exercise itself not to be spoiled.
2bigjeff5
I'm very sorry, I didn't consider that. I actually got to the original Amanda Knox post through the spoiler, but I stopped reading at the mention of the original and went straight to that one first. I'll change the link so it doesn't trip anybody else up.
-2[anonymous]
Leave the math alone, redefine 'possible' to match your preferred meaning if you must.

In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away. There are good sources of information and bad sources of information.

This is pretty much the standard argument against Wikipedia. It fails to address the question of "what's it for?"

I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can't say you're absolutely certain of the Virgin Birth.

I think that latter statement is equivalent to this:

V = Virgin Birth
G = God appears and proclaims ~V

P(V|G) < 1
∴ P(V) < 1

But that argument is predicated on P(G) > 0. It is internally consistent to believe P(V|G) < 1 and yet P(V) = 1, as long as one also believes P(G) = 0, i.e. one is certain that God will not appear and proclaim ~V.
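This consistency claim can be checked with the law of total probability; a quick numerical sketch (the function name is mine):

```python
def total_prob(p_v_given_g, p_v_given_not_g, p_g):
    """Law of total probability: P(V) = P(V|G)P(G) + P(V|~G)P(~G)."""
    return p_v_given_g * p_g + p_v_given_not_g * (1 - p_g)

# With P(G) > 0 and P(V|G) < 1, P(V) falls below 1 even if P(V|~G) = 1:
assert total_prob(0.5, 1.0, 0.01) < 1.0   # ≈ 0.995

# With P(G) = 0, the conditional P(V|G) carries no weight, so P(V) = 1
# remains internally consistent, exactly as the comment above says:
assert total_prob(0.5, 1.0, 0.0) == 1.0
```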

0robertzk
Go a little farther. Let G(X) = "God appears and proclaims X." For religions that acknowledge divine revelation, which is all major religions, P(G(X)) has been non-zero for certain X (people have received revelation directly from God). Indeed, granting ultimate authority to God, again a feature of all major religions, means that 0 < P(G(X)) < 1 for all X: granting that there is a statement X such that humans know God will not appear and proclaim X is removing ultimate authority from God and assigning part of it to humans. (By the way, we can assume the space of X's is countable, so there is no problem with summing to 1.) So it is not internally consistent to assume, in particular, that P(G(~V)) = 0, without abandoning ultimate authority to God (or abandoning probability theory as a way of reasoning about this stuff, as most religions opt to do).

Of course, the more productive question is what evolutionary mechanisms allowed human brain architecture to get so off-par with reality yet remain productive from a Darwinian point of view. Some would argue that the potential to be so absurdly wrong is what gives brains their computational power in the first place! Bounded rationality under physical constraints is a very active area of research.

For technical reasons of probability theory, if it's theoretically possible for you to change your mind about something, it can't have a probability exactly equal to one.

This is supposed to be an argument against giving anything a 100% probability. I do agree with the conclusion, but this particular argument seems wrong. It's based on Conservation of Expected Evidence (if the "technical reasons of probability theory" refer to something else, let me know). However, Bayes' rule doesn't just imply that "having a chance of changing your mind...

2Jiro
Not everything that changes your mind is evidence within the meaning of Conservation of Expected Evidence. If there's a 50% chance you will believe X tomorrow, but that situation involves believing X because you're hypnotized, that's not evidence at all and you should not change your current beliefs based on that.
2ike
So then, moving on to the argument that "because I might believe 2+2=3 tomorrow (albeit very unlikely), I can't believe 2+2=4 at 100% today". If Omega tells you that tomorrow you will believe that 2+2=3, most of your probability mass is concentrated in the possibility that 2+2=4 but you'll somehow be fooled, perhaps by hypnosis or nano-editing of your brain. Very little, if any, probability mass goes to the theory that 2+2 really equals 3 and you'll have a major revelation tomorrow. In order to use this thought experiment to show that I don't have 100% confidence in 2+2=4, you need to assert that the second probability is nonzero; however, the thought experiment is also consistent with the first probability being high or even one and the second being zero (you can't assume I agree that zero is not a probability, or you're begging the question).
1Wes_W
Why do you think that is the correct thing to do in that situation? Here, in this real situation, yes you should trust your current counting abilities. But if you believe with 50% confidence that, within 24 hours, someone will be able to convince you that your ability to count is fundamentally compromised, you also don't place a high level of confidence on your ability to count things correctly - no more than 50%, in fact. "I can count correctly" and "[someone can demonstrate to me that] I'm counting incorrectly" are mutually exclusive hypotheses. Your confidence in the two ought not to add up to more than 1.
1ike
If I know that I'll actually experience that scenario tomorrow, where I wake up and all available evidence shows that 2+2=3, but right now I still visualize XX + XX = XXXX, then I trust my current vast mound of evidence over a future, smaller, weirder mound of evidence. I'm not evaluating "what will I think 2+2= tomorrow?" (as EY points out elsewhere, this kind of question is not too useful); I'm evaluating "what is 2+2?" For that, it seems irrational to trust future evidence when I might be in an unknown state of mind. The sentence EY has repeated, "Those who dream do not know they dream; but when you wake you know you are awake," seems appropriate here. Just knowing that I will be convinced, by whatever means, is not the same as actually convincing me.

What if they hack your mind and insert false memories? If you knew someone would do that tomorrow, would you think that the future memories actually happened in your past? If you're trying to make the argument that "since someone can fool me later, I could be fooled now and wouldn't notice," well, first of all, that doesn't seem to be the argument EY is making. Second, I might have to be in such a situation to say precisely, but I'd expect that the future in which I am being fooled would have to delete the memory of this sequence of posts (specifically the 2+2=3 post, and this series of comments). The fact that I remember seems to point to the editing/hacking not having happened yet.

After thinking of this, I see that an intruder would just change all the references from 2+2=4 to 2+2=3 and vice versa, leaving me with the same logic to justify my belief in 2+2=3. So that didn't work. How about this: once I have to consider my thought processes hacked, I can't unwind past that anyway, so to keep sane I'll have to assume my current thoughts are not corrupted.
0Wes_W
I think deception should be treated as a special case, here. Normally, P(X | a seemingly correct argument for X) is pretty high. When you specifically expect deception, this is no longer true. I'm not sure it's useful to consider "what if they hack your mind" in this kind of conversation. Getting hacked isn't a Bayesian update, and hallucinations do not constitute evidence.
0ike
If there were a way to differentiate hallucinations from real vision, then I'd agree, but there isn't. Anyway, I thought of a (seemingly) knockdown argument against believing future selves: what if you currently believe at 50% that tomorrow you'll be convinced that 2+2=3, the next day that 2+2=5, and the next day that 2+2=6? (And that it only has one answer.) If you just blindly took those as minimums, then your total probability mass would be at least 150%. Therefore, you can only trust your current self.
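The arithmetic behind this knockdown argument: the three future credences are for mutually exclusive answers, so treating each as a floor violates the requirement that such credences sum to at most 1. A trivial check, with the numbers as stated:

```python
# Mutually exclusive propositions: 2+2=3, 2+2=5, 2+2=6 (at most one can hold).
# Treating each future self's 50% conviction as a lower bound on today's credence:
credences = [0.5, 0.5, 0.5]
total = sum(credences)
assert total == 1.5   # 150%: exceeds the cap of 1 imposed by the probability axioms
assert total > 1.0    # so "adopt every future credence as a floor" is not a coherent rule
```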
3Wes_W
Sure, but that is a different problem than what I'm talking about. Expecting to hallucinate is different than expecting to receive evidence. If you expect to be actually convinced, you ought to update now. If you expect to be "convinced" by hallucination, I don't think any update is required. Framing the 2+2=3 thing as being about deception is, IMO, failing to engage with the premise of the argument. I would be very confused, and very worried about my ability to separate truth from untruth. In that state, I wouldn't feel very good about trusting my current self, either.
0CCC
Not entirely. It is possible that someone may be able to provide a convincing demonstration of an untrue fact; either due to deliberate deception, or due to an extremely unlikely series of coincidences, or due to the person giving the demonstration genuinely but incorrectly thinking that what they are demonstrating is true. So, there is some small possibility that I am counting correctly and someone can demonstrate to me that I am not counting correctly. The size of this possibility depends, among other things, on how easily I can be persuaded.
0Wes_W
By the way, separate from our conversation downthread, I don't think that is the technical reason being referred to. Or at least, it's a rather indirect way of proving that point. Bayes' Theorem is P(A|B) = P(B|A)P(A)/P(B). If P(A) = 0, then P(A|B) = P(B|A)*0/P(B) = 0 as well, no matter what P(B|A) and P(B) are. Or in words: if you start with credence exactly zero in some proposition, it is impossible for any piece of evidence to make you update away from that. By the contrapositive, if it is not impossible for you to update away from your original opinion ("change your mind"), your credence is nonzero. A similar argument holds for probability 1, which should be unsurprising, since P(A) = 1 is equivalent to P(~A) = 0.
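The zero-prior point above is mechanical enough to verify directly; a minimal sketch (the helper name is my own):

```python
def bayes_update(prior, likelihood, marginal):
    """Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / marginal

# A prior of exactly 0 is unmovable: no likelihood, however strong, lifts it.
assert bayes_update(0.0, 0.99, 0.5) == 0.0

# Any nonzero prior can move in response to evidence.
assert bayes_update(0.01, 0.99, 0.5) > 0.01   # ≈ 0.0198
```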
0ike
The problem with this argument is that it assumes that evidence is not altered. What I mean is that Bayesian updating implicitly assumes that all evidence previously used is included in the new calculation, and the new evidence is a strict superset of the old one. However, suppose I hypothetically assign 100% to any math fact "simple enough" that I can verify it mentally in under a minute (to choose an arbitrary time). So today, when I'm visualizing 2+2=4, I can say that I put a 100% confidence on the claim "2+2=4".

Now, is this contradicted by the fact that tomorrow I will see new evidence, causing me to conclude that 2+2=3? No. Aside from seeing new evidence later, my current evidence is being changed. Right now, the evidence consists of actual brain operations that visualize 2+2. Tomorrow, that evidence is in the form of memories of brain operations. If I live in a possible world where only memories can be edited and not actual running brain processes, then tomorrow I will conclude that today's memories were faked. That is not something I can conclude today, because I can repeat the visualization at any time. (One minute after, I might be relying on memories, but at the time, I'm not.)
3Wes_W
That isn't a valid operation. For one: assigning 100% confidence in your ability to correctly do something on which you do not have, historically, a 100% track record is quite unwise. Probably you aren't even 1-10^-6 reliable, historically, and that would still be infinitely far short of 100%. But it's a toy hypothetical, so realism isn't the primary objection. More importantly, we don't get to arbitrarily assign probabilities. Bayes' Theorem, as the name implies, is a theorem. It does not assume anything about evidence; it doesn't even mention evidence. It talks strictly about probabilities. All this "evidence" stuff is high-level natural-language abstraction about what the probabilities "mean" - the math itself is a reduction of the concept of evidence. It only assumes some axioms of probability; you may attempt to dispute those if you like, but that would be a very different conversation. And, because Bayes' Theorem is a theorem, assigning 100% confidence to any proposition of which you could in principle ever cease to have 100% confidence is strictly, provably an error. The special case of reasoning while unable to trust your own sanity requires lots of conditions that are usually negligible. For example, P(X happened | I remember that X happened) is usually pretty close to 1; for most purposes we can ignore it and pretend "X happened" and "I remember X happened" are the same thing. But if you suspect your memories have been altered, this is no longer true, so you'll have that extra factor in certain calculations. Nothing that you are describing is outside the domain of the relevant math. It's just weird corner cases.
0ike
Why can't "X happened" be infinite evidence for X, while "I remember that X happened" is only finite? Bayes' theorem applies, but it's not being applied accurately, because of these special cases. Define "you" and "ever". I argue that the "you" who changes their mind tomorrow is not the same observer that decides with 100% probability today, because the one today has information that the one tomorrow doesn't: namely, actual brain operations, versus memories for tomorrow's me. I could in principle be convinced that my 100% assessment is wrong: by removing or editing the evidence. That is not Bayesian updating; it's brain editing, and then a Bayesian update on other evidence. You're equating today's me with tomorrow's me, and you can't do that unless all my current evidence will still be there tomorrow. Why didn't EY use an example of a hypothetical other race (the 223ers), who think that everything is evidence for 2+2=3, as his example? Because we need the same person (or observer; is there a technical term for the thing-doing-the-assessing?) to change their mind. I assert that if memory can't be trusted, it won't count as the "same", and Bayes' theorem can't be applied straightforwardly.
3Wes_W
You could consider a proposition to be infinite evidence for itself, I guess. That seems like maybe a kinda defensible interpretation of P(A|A) = 1. I don't think it gets you anything useful, though. [∃ B: P(A|B) ∈ (0,1)] → [P(A) ∈ (0,1)]. Better? If, having made them, your own probability assessments are meaningless and unusable, who cares what values you assign? Set P(A) = 328+5i and P(B) = octopus, for all it matters. Additionally, I'm not sure it matters when the mind-changing actually occurs. At the instant of assignment, your mind as it is right that moment should already have a value for P(A|B) - how you would counterfactually update on the evidence is already a fact about you. If you would, counterfactually assuming your current mind operates without interference until it can see and process that evidence, update to some credence other than 1, it is already at that moment incorrect to assign a credence of 1. Whether that chain of events does in fact end up happening won't retroactively make you right or wrong; it was already the right or wrong choice when you made it. Or, if you get mind-hacked, your choice might be totally moot. But this is generally a poor excuse to deliberately make bad choices.
0ike
Yes, it makes it clearer what you're doing wrong. I'll do what I should have done earlier, and formalize my argument. Let's call:

A = "2+2=4"
B = "2+2=3"
C = "I can visualize 2+2=4"
D = "I can visualize 2+2=3"
E = "I can remember visualizing 2+2=4"
F = "I can remember visualizing 2+2=3"

So, my claim is that P(A|C) is 1, likewise P(B|D). (Remember, I don't think it's like this in real life; I'm trying to show that the argument put forward to prove that is not sufficient.) What is the Bayes formula for tomorrow's assessment? Not P(A|C,D), which (if <1) would indeed disprove P(A|C)=1, but P(A|E,D). This can be less than 1 while P(A|C)=1. I'll just make up some arbitrary numbers as priors to show that. I'm assuming A and B are mutually exclusive, as are C and D.

P(A) = .75
P(B) = .25 (just assume that it's either 2 or 3)
P(C) = .375
P(D) = .125
P(memory of X | X happened yesterday) = .95
P(memory of X | X didn't happen yesterday) = .001
P(E) = P(C)*.95 + P(~C)*.001 = 0.356875
P(F) = P(D)*.95 + P(~D)*.001 = 0.119625
P(C|A) = .50
P(C|B) = 0
P(D|A) = 0
P(D|B) = .50

P(A|C) = P(C|A)P(A)/P(C) = (.50*.75)/(.75*.50 + .25*0) = 1

P(A|C,D) is undefined, because C and D are mutually exclusive (which corresponds to not being able to visualize both 2+2=3 and 2+2=4 at the same time).

P(F,D) = P(D)*.95 = 0.11875
P(A|E,D) = P(E,D|A)P(A)/P(E,D) = 0 (because D|A is zero)

Using my numbers, you need to derive a mathematical contradiction if there are truly "technical reasons" for this being impossible. The mistake you (and EY) are making is that you're not comparing P(A) to P(A|B) for some A, B, but P(A|B) to P(A|C) for some A, B, C.

Added: I made two minor errors in definitions that have been corrected. E and F are not exclusive, and C and D shouldn't be defined as "current", but rather as having happened, which can only be confirmed definitely if they are current. However, they have the evidential power whenever they happened; it's just that if they didn't happen now, they're devalued because of fragile memory.
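ike's stated numbers can be transcribed directly into code, which reproduces the computation as given (variable names are mine; this is a transcription, not an endorsement of the assignments):

```python
# ike's priors and likelihoods, transcribed from the comment above
p_a, p_b = 0.75, 0.25                   # A: "2+2=4", B: "2+2=3"
p_c_given_a, p_c_given_b = 0.50, 0.0    # C: "I can visualize 2+2=4"
p_d_given_a, p_d_given_b = 0.0, 0.50    # D: "I can visualize 2+2=3"

p_c = p_c_given_a * p_a + p_c_given_b * p_b   # 0.375
p_d = p_d_given_a * p_a + p_d_given_b * p_b   # 0.125

# Memory reliability: P(memory of X | X) = .95, P(memory of X | ~X) = .001
p_e = p_c * 0.95 + (1 - p_c) * 0.001          # E: memory of C; ≈ 0.356875
p_f = p_d * 0.95 + (1 - p_d) * 0.001          # F: memory of D; ≈ 0.119625

# P(A|C) = P(C|A)P(A)/P(C) = 1 exactly, as claimed
p_a_given_c = p_c_given_a * p_a / p_c
assert p_a_given_c == 1.0

# P(A|E,D) = P(E,D|A)P(A)/P(E,D) = 0, since P(D|A) = 0
p_ed_given_a = p_d_given_a * 0.95             # 0.0
p_a_given_ed = p_ed_given_a * p_a / (p_d * 0.95)
assert p_a_given_ed == 0.0
```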
0CCC
While A and B being mutually exclusive seems reasonable, I don't think it holds for C and D. And I'm pretty sure it doesn't hold at all for E and F: if I remember visualising 2+2=3 yesterday and 2+2=4 the day before, then E and F are both simultaneously true.

These three statements, taken together, are impossible. Consider: over the 0.75 probability space where C is true (second statement), A is only true in half that space (third statement). Thus, A is false in the other half of that space; therefore, there is a probability space of at least 0.375 in which A is false. Yet A is only false over a probability space of size 0.25 (first statement). In your calculations further down, you use the value P(C) = (.75*.50 + .25*0) = 0.375; using that value for P(C) instead of 0.75 removes the contradiction.

Similarly, the following set of statements leads to a contradiction, considered together:
2ike
The first and third comments are correct. I made some errors in first typing it up that shouldn't take away from the argument that are now fixed. The third comment is an actual mistake that has also been fixed. This is wrong. P(C|A) is read as C given A, which is the chance of C, given that A is true. You're mixing it up with P(A|C). However, if you switch A and C in your paragraph, it becomes a valid critique, which I've fixed, substituting the correct values in. Thanks. (Did I mess anything else up?) I'm starting to appreciate mathematicians now :) You need to escape your * symbols so they output correctly.
0CCC
You're right, I had that backwards. Hmmm.... You have two different values for P(F). Similarly, the value P(E)=0.70 does not match up with P(C), P(D) and the following: None of which is going to affect your point, which seems to come down to the claim that there exist possible events A, B, C, D, E and F such that P(A|C) = 1.
3Wes_W
*blink* Well, huh. I suppose I ought to concede that point. There are probabilities of 0 and (implicitly) 1 in the problem setup. I'm not confident it's valid to start with that; I worry it just pushes the problem back a step. But clearly, it is at least possible for probabilities of 1 to propagate to other propositions which did not start at 1. I'll have to think about it for a while.
Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name ‘certain’ are less assured than the least of our mighty hypotheses.

Have you considered selling merch? I'm infinitely certain I'd buy a T-shirt with that quote.

The Dalai Lama stated that "If science proves some belief of Buddhism wrong, then Buddhism will have to change."

I like the guy :)

1papetoast
That quote doesn't come from the passage and it is not obvious to me how it relates to the passage. What are you trying to talk about?

Another problem with some people is that they don't consciously believe (or won't openly admit) that they have absolute certainty. In their speech, they say that they doubt this and that, that they "cannot know everything", but I suspect that's mostly a trick that lets them say "and neither do you." With them, one first needs to convince them that they are lying to themselves before having a talk about certainty vs. uncertainty.