
Absolute Authority

Post author: Eliezer_Yudkowsky 08 January 2008 03:33AM

Followup to: But There's Still A Chance, Right?, The Fallacy of Gray

The one comes to you and loftily says:  "Science doesn't really know anything.  All you have are theories—you can't know for certain that you're right.  You scientists changed your minds about how gravity works—who's to say that tomorrow you won't change your minds about evolution?"

Behold the abyssal cultural gap.  If you think you can cross it in a few sentences, you are bound to be sorely disappointed.

In the world of the unenlightened ones, there is authority and un-authority.  What can be trusted, can be trusted; what cannot be trusted, you may as well throw away.  There are good sources of information and bad sources of information.  If scientists have changed their stories ever in their history, then science cannot be a true Authority, and can never again be trusted—like a witness caught in a contradiction, or like an employee found stealing from the till.

Plus, the one takes for granted that a proponent of an idea is expected to defend it against every possible counterargument and confess nothing.  All claims are discounted accordingly.  If even the proponent of science admits that science is less than perfect, why, it must be pretty much worthless.

When someone has lived their life accustomed to certainty, you can't just say to them, "Science is probabilistic, just like all other knowledge."  They will accept the first half of the statement as a confession of guilt; and dismiss the second half as a flailing attempt to accuse everyone else to avoid judgment.

You have admitted you are not trustworthy—so begone, Science, and trouble us no more!

One obvious source for this pattern of thought is religion, where the scriptures are alleged to come from God; therefore to confess any flaw in them would destroy their authority utterly; so any trace of doubt is a sin, and claiming certainty is mandatory whether you're certain or not.

But I suspect that the traditional school regimen also has something to do with it.  The teacher tells you certain things, and you have to believe them, and you have to recite them back on the test.  But when a student makes a suggestion in class, you don't have to go along with it—you're free to agree or disagree (it seems) and no one will punish you.

This experience, I fear, maps the domain of belief onto the social domains of authority, of command, of law.  In the social domain, there is a qualitative difference between absolute laws and nonabsolute laws, between commands and suggestions, between authorities and unauthorities.  There seems to be strict knowledge and unstrict knowledge, like a strict regulation and an unstrict regulation.  Strict authorities must be yielded to, while unstrict suggestions can be obeyed or discarded as a matter of personal preference.  And Science, since it confesses itself to have a possibility of error, must belong in the second class.

(I note in passing that I see a certain similarity to they who think that if you don't get an Authoritative probability written on a piece of paper from the teacher in class, or handed down from some similar Unarguable Source, then your uncertainty is not a matter for Bayesian probability theory.  Someone might—gasp!—argue with your estimate of the prior probability.  It thus seems to the not-fully-enlightened ones that Bayesian priors belong to the class of beliefs proposed by students, and not the class of beliefs commanded you by teachers—it is not proper knowledge.)

The abyssal cultural gap between the Authoritative Way and the Quantitative Way is rather annoying to those of us staring across it from the rationalist side.  Here is someone who believes they have knowledge more reliable than science's mere probabilistic guesses—such as the guess that the moon will rise in its appointed place and phase tomorrow, just like it has every observed night since the invention of astronomical record-keeping, and just as predicted by physical theories whose previous predictions have been successfully confirmed to fourteen decimal places.  And what is this knowledge that the unenlightened ones set above ours, and why?  It's probably some musty old scroll that has been contradicted eleventeen ways from Sunday, and from Monday, and from every day of the week.  Yet this is more reliable than Science (they say) because it never admits to error, never changes its mind, no matter how often it is contradicted.  They toss around the word "certainty" like a tennis ball, using it as lightly as a feather—while scientists are weighed down by dutiful doubt, struggling to achieve even a modicum of probability.  "I'm perfect," they say without a care in the world, "I must be so far above you, who must still struggle to improve yourselves."

There is nothing simple you can say to them—no fast crushing rebuttal.  By thinking carefully, you may be able to win over the audience, if this is a public debate.  Unfortunately you cannot just blurt out, "Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name 'certain' are less assured than the least of our mighty hypotheses."  It's a difference of life-gestalt that isn't easy to describe in words at all, let alone quickly.

What might you try, rhetorically, in front of an audience?  Hard to say... maybe:

  • "The power of science comes from having the ability to change our minds and admit we're wrong.  If you've never admitted you're wrong, it doesn't mean you've made fewer mistakes."
  • "Anyone can say they're absolutely certain.  It's a bit harder to never, ever make any mistakes.  Scientists understand the difference, so they don't say they're absolutely certain.  That's all.  It doesn't mean that they have any specific reason to doubt a theory—absolutely every scrap of evidence can be going the same way, all the stars and planets lined up like dominos in support of a single hypothesis, and the scientists still won't say they're absolutely sure, because they've just got higher standards.  It doesn't mean scientists are less entitled to certainty than, say, the politicians who always seem so sure of everything."
  • "Scientists don't use the phrase 'not absolutely certain' the way you're used to from regular conversation.  I mean, suppose you went to the doctor, and got a blood test, and the doctor came back and said, 'We ran some tests, and it's not absolutely certain that you're not made out of cheese, and there's a non-zero chance that twenty fairies made out of sentient chocolate are singing the 'I love you' song from Barney inside your lower intestine.'  Run for the hills, your doctor needs a doctor.  When a scientist says the same thing, it means that he thinks the probability is so tiny that you couldn't see it with an electron microscope, but he's willing to see the evidence in the extremely unlikely event that you have it."
  • "Would you be willing to change your mind about the things you call 'certain' if you saw enough evidence?  I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth.  If that would change your mind, you can't say you're absolutely certain of the Virgin Birth.  For technical reasons of probability theory, if it's theoretically possible for you to change your mind about something, it can't have a probability exactly equal to one.  The uncertainty might be smaller than a dust speck, but it has to be there.  And if you wouldn't change your mind even if God told you otherwise, then you have a problem with refusing to admit you're wrong that transcends anything a mortal like me can say to you, I guess."

But, in a way, the more interesting question is what you say to someone not in front of an audience.  How do you begin the long process of teaching someone to live in a universe without certainty?

I think the first, beginning step should be understanding that you can live without certainty—that if, hypothetically speaking, you couldn't be certain of anything, it would not deprive you of the ability to make moral or factual distinctions.  To paraphrase Lois Bujold, "Don't push harder, lower the resistance."

One of the common defenses of Absolute Authority is something I call "The Argument From The Argument From Gray", which runs like this:

  • Moral relativists say:
    • The world isn't black and white, therefore:
    • Everything is gray, therefore:
    • No one is better than anyone else, therefore:
    • I can do whatever I want and you can't stop me bwahahaha.
  • But we've got to be able to stop people from committing murder.
  • Therefore there has to be some way of being absolutely certain, or the moral relativists win.

Reversed stupidity is not intelligence.  You can't arrive at a correct answer by reversing every single line of an argument that ends with a bad conclusion—it gives the fool too much detailed control over you.  Every single line must be correct for a mathematical argument to carry.  And it doesn't follow, from the fact that moral relativists say "The world isn't black and white", that this is false, any more than it follows from Stalin's belief that 2 + 2 = 4 that "2 + 2 = 4" is false.  The error (and it only takes one) is in the leap from the two-color view to the single-color view, that all grays are the same shade.

It would concede far too much (indeed, concede the whole argument) to agree with the premise that you need absolute knowledge of absolutely good options and absolutely evil options in order to be moral.  You can have uncertain knowledge of relatively better and relatively worse options, and still choose.  It should be routine, in fact, not something to get all dramatic about.

I mean, yes, if you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B.  It is not a necessary condition.

Oh, and:  Logical fallacy:  Appeal to consequences of belief.

Let's see, what else do they need to know?  Well, there's the entire rationalist culture which says that doubt, questioning, and confession of error are not terrible shameful things.

There's the whole notion of gaining information by looking at things, rather than being proselytized.  When you look at things harder, sometimes you find out that they're different from what you thought they were at first glance; but it doesn't mean that Nature lied to you, or that you should give up on seeing.

Then there's the concept of a calibrated confidence—that "probability" isn't the same concept as the little progress bar in your head that measures your emotional commitment to an idea.  It's more like a measure of how often, pragmatically, in real life, people in a certain state of belief say things that are actually true.  If you take one hundred people and ask them to list one hundred statements of which they are "absolutely certain", how many will be correct?  Not one hundred.

If anything, the statements that people are really fanatic about are far less likely to be correct than statements like "the Sun is larger than the Moon" that seem too obvious to get excited about.  For every statement you can find of which someone is "absolutely certain", you can probably find someone "absolutely certain" of its opposite, because such fanatic professions of belief do not arise in the absence of opposition.  So the little progress bar in people's heads that measures their emotional commitment to a belief does not translate well into a calibrated confidence—it doesn't even behave monotonically.

As for "absolute certainty"—well, if you say that something is 99.9999% probable, it means you think you could make one million equally strong independent statements, one after the other, over the course of a solid year or so, and be wrong, on average, around once.  This is incredible enough.  (It's amazing to realize we can actually get that level of confidence for "Thou shalt not win the lottery.")  So let us say nothing of probability 1.0.  Once you realize you don't need probabilities of 1.0 to get along in life, you'll realize how absolutely ridiculous it is to think you could ever get to 1.0 with a human brain.  A probability of 1.0 isn't just certainty, it's infinite certainty.

In fact, it seems to me that to prevent public misunderstanding, maybe scientists should go around saying "We are not INFINITELY certain" rather than "We are not certain".  For the latter case, in ordinary discourse, suggests you know some specific reason for doubt.

 

Part of the Overly Convenient Excuses subsequence of How To Actually Change Your Mind

Next post: "How to Convince Me That 2 + 2 = 3"

Previous post: "The Fallacy of Gray"

Comments (71)

Comment author: denis_bider 08 January 2008 04:45:04AM 4 points [-]

For all your talk about The One, I'm going to start to call you Morpheus.

Comment author: James_Bach 08 January 2008 05:08:40AM 3 points [-]

I wonder what your life must be like. The way you write, it sounds as if you spend a lot of your time trying to convince crazy people (by which I mean most of humanity, of course) to be less crazy and more rational, like us. Why not just ignore them?

Then I looked at your Wikipedia entry and noticed how young you are. Ah! When I was your age, I was also trying to convert everybody. My endless arguments about software development methods, circa 1994, are still in Google's Usenet archive. So, who am I to talk?

(Note: Mostly I write comments that complain about something you say, but please understand that there's a selection bias here. Even though I often find myself thinking "What an interesting way to think about that. Great idea, Eliezer!" I would rather write comments that have some kind of content, and those tend to be the critical ones.)

Comment author: Sam5 08 January 2008 05:22:17AM 0 points [-]

I really enjoy your deep analysis of topics, but might I suggest writing shorter entries a bit more often?

Comment author: Eliezer_Yudkowsky 08 January 2008 05:45:53AM 10 points [-]

Sam, if I write shorter entries, I'll never get everything said.

James: Snort. One of these days I'll do a post on "maturity bias".

Comment author: Paul_Gowder 08 January 2008 05:52:01AM 2 points [-]

Oh Eliezer, why'd you have to toss that parenthetical in about priors? The rest of the post is so wonderful. But the priors thing... hell, for my part, the objection isn't to priors that aren't imposed by some Authority, it's priors that are completely pulled out of one's arse. Demanding something beyond the whim of some metaphorical marble bouncing about in one's brain before one gets to make a probability statement is hardly the same as demanding capital-A-Authority.

Comment author: Unknown 08 January 2008 06:12:40AM 5 points [-]

The main reason people think a probability of 100% is necessary is that they assume that any other probability implies a subjective feeling of doubt, and they are aware that it is impossible to go through life in a continuous state of subjective doubt about whether or not food is necessary to sustain one's life and the like.

Once someone has separated the probability from this subjective feeling, a person can see that a subjective feeling of certainty can be justified in many cases, even though the probability is less than 100%. Once this has been admitted, I think most people would not have a problem with admitting that 100% probabilities are not possible.

Comment author: Grammarian 08 January 2008 06:18:58AM -2 points [-]

Eliezer, every time you say "the one" you mean to say "one".

Comment author: Ian_C. 08 January 2008 06:32:39AM 3 points [-]

I once thought I had a fast, crushing argument against the existence of God. I would point to various objects around me and ask "What does that do?" e.g. point at a beach ball and they would say "bounce," point at a bird and they would say "sing." And I would triumphantly say, "See, God can't exist!" and they would look at me blankly.

In my mind, every object I had ever seen did its own peculiar thing - that is, it didn't do "just anything." Therefore the idea of omnipotence - the ability to make objects do whatever you please - is contradicted by all the evidence, and therefore God (a supposedly omnipotent being) is too.

What I didn't grasp, was that all the evidence of every object they had seen in their entire life wasn't convincing to them(!), as long as they could still *imagine* a counter-example. They gave their own imagination the same weight as real evidence. So it wasn't a quick argument after all, it would require explaining evidence vs. imagination and that would lead to another thing, and another, and before you know it, it's an entire website full of articles. So I have to agree with Eliezer, there's no simple way to convey an entire mental framework in short order.

Comment author: Paul_Crowley2 08 January 2008 09:00:31AM 2 points [-]

Practically all words (e.g. "dead") actually cut across a continuum; maybe we should reclaim the word "certainty". We are certain that evolution is how life got to be what it is, because the level of doubt is so low you can pretty much forget about it. Any other meaning you could assign to the word "certain" makes it useless, because everything falls on one side.

Comment author: Ben_Jones 08 January 2008 09:45:21AM 2 points [-]

Denis, you will definitely enjoy this one.

Thinking of science in religious terms makes the whole thing fall over, for everyone. The only way you can have 100% certainty in something is if it's not falsifiable. The only way something can be unfalsifiable is if it is mysterious, ethereal and makes no testable predictions.

My withering rejoinder? "Yes, you may have god. But do you have any knowledge?"

Comment author: christopherj 12 October 2013 10:42:51PM 0 points [-]

This is an excellent point, an implication that I ought to have deduced myself but totally didn't. This means not only that absolute certainty about reality is impossible to get, but more interestingly that absolute certainty about reality is entirely useless as it can't make specific predictions. Even if it were something like "can't go faster than the speed of light", being absolutely certain of this would mean that "scientists measuring something going faster than the speed of light because of experimental error" would be a valid prediction, along with "it is an illusion/I am crazy". Since neither experimental result would disprove the certain thing, it must follow that the certain thing can't predict the experimental result.

In fact, I think we can claim that the probability that you're sane should be an upper bound on probabilities you're allowed to claim. Thus to claim arbitrarily high probabilities, you'd need an arbitrarily large group of probably-sane people who agree (but then what are the odds that you just imagined the group of people who agree with you?). Since you can't be absolutely certain that you and all your group are perfectly sane (along with the possibility of a coincidentally matching mass hallucination), that would make for an upper bound on certainty. In fact the whole group thing would be unnecessary if we admit the possibility that the person we're trying to convince might be insane.

Next time someone claims absolute certainty about something, I'll ask them to prove that they're not insane. That should take them into neutral territory that they haven't had time to wall up, and if they did consider that they might be insane it would be an even better argument.

Comment author: Ian_C. 08 January 2008 12:09:35PM 0 points [-]

'Any other meaning you could assign to the word "certain" makes it useless because everything falls on one side.'

Yes, exactly. The concept of "certainty" as colloquially used has no referents. It is such a strict standard, the only things that could possibly be referents for it are statements made by an omniscient entity. A statement by any lesser entity could be wrong and therefore could not be a referent. We are beating ourselves up over a concept no more valid than "unicorn."

Comment author: LG 08 January 2008 02:12:04PM 1 point [-]

Ian, your God argument doesn't follow:

1) Objects behave in certain, predictable ways
2) God can make objects behave arbitrarily
4) No objects behave arbitrarily
5) There is no God

Hidden argumentation:

3) Therefore, God WILL make things behave arbitrarily

You can't assume that an omnipotent God will behave in any particular way.

Comment author: Caledonian2 08 January 2008 02:24:35PM 1 point [-]

You can't assume that an omnipotent God will behave in any particular way.

What happens when an immovable object meets an irresistible force?

Comment author: Michael_Sullivan 08 January 2008 03:48:50PM 0 points [-]

I think you've mischaracterized Ian's argument. He seems to be arguing that because everything in his empirical experience behaves in particular ways and appears incapable of behaving arbitrarily, that this is strong evidence to suggest that no other being could exist which is capable of behaving arbitrarily.

I think the real weakness of this argument is that the characterization of things as behaving in particular ways is way too simplistic. Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or do any of a thousand other things. As you get more complex than balls, the range of options gets wider and wider. For semi-intelligent animals the range is already spectacularly wide, and for sentient creatures, the array of possibility is literally terrifying to behold.

We see such a vast range in our experience of things, and of the behaviors and powers that they have, that it seems doubtful we can circumscribe too closely what some unknown being would be able to do. Now, complete omnipotence poses huge philosophical and mathematical problems, not unlike infinite sets or probabilities of 1. Intuitively I can see that the same arguments rendering probabilities of 1 impossible (or at least impossible to prove) would seem to work equally well against total omnipotence.

But what if omnipotence, like the normal use of "certainty", doesn't have to mean the absolute ability to do anything at all, but merely so much power and range of use of power that it can do anything we could practically conceive for it to do. This is probably the sense in which early writers mean to claim that God is all-powerful, but the lack of precision in language tripped them up.

I suggest we don't have any strong evidence to suggest that such a being could never exist. In fact, anyone who doesn't consider interest in a potential singularity a complete load of horse manure must agree with me that it's entirely possible that some of us will either become or create such beings.

In my mind, either this is no argument against religions with omnipotent gods or it's a damning argument against the singularity. Which is it?

Comment author: Ian_C. 08 January 2008 05:30:48PM 0 points [-]

LG - Your objection is only valid if you assume I am starting with the idea of omnipotence and trying to use the evidence to disprove it. In fact, I am starting with the evidence and showing that the idea of omnipotence can't be arrived at without contradiction.

1) Objects behave in certain, predictable ways
2) Therefore the suggestion that someone could make an object behave arbitrarily contradicts the evidence
3) Therefore the idea of "omnipotence" contradicts the evidence
4) Therefore the idea of God contradicts the evidence

It's a different style of reasoning: starting with reality vs. starting with imagination and then using reality only as a test.

Comment author: Z._M._Davis 08 January 2008 08:14:06PM 0 points [-]

Ian, are you arguing that the concept of omnipotence is incoherent, or merely (as Michael seems to have interpreted you:) that we have no reason to believe that any omnipotent entity actually exists?

If you really mean the latter, then I suspect most people here will agree with you: if one does not observe any evidence for omnipotence, and one accepts Occam's razor (as reasonable people do), then one concludes that no omnipotent entity exists, unless and until strong evidence to the contrary comes up.

But it remains the case that the idea of omnipotence is compatible with the evidence. The religious can, without logical self-contradiction, claim that God-in-Her-Infinite-Wisdom chooses to make created objects behave in predictable ways. It's true that one would be silly to believe this story: that would be violating Occam's razor, "starting with imagination, and then using reality only as a test"--however you want to phrase it--but it's not contradictory.

If you want to show that an omnipotent entity cannot exist (that P(God-exists) is closer to, say, P(1+1=9) than P(there's-an-invisible-unicorn-following-you)), you have to do a little more work. Fortunately, it's already been done (see Caledonian's comment).

Comment author: bigjeff5 03 March 2011 05:04:46PM 2 points [-]

When I was growing up in a baptist church one of the primary arguments for all the evidence that suggests the earth is over four billion years old and that the universe is nearly fourteen billion years old, was that God made it to look that way on purpose. That is, when he said "let there be light!" he didn't just make the stars and let the light take its course (which would take between thousands and billions of years, and some light we see now would never reach us at all), but made the stars with a past history and its light already hitting us. Same with all the geological evidence - God just made it look as though it were really old. So the universe was 6,000 years old, but it looked exactly like it would if it were 14 billion years old.

Ostensibly this was to test our faith. However, after thinking about it for a few years after I left high school, I realized that if any of this were the least bit true - if God really did exist, if he really designed a universe specifically to trick people into believing he didn't exist (it's the only valid reason I can think of for doing it - it's even what the preachers think, though they don't put it that way), and thereby send whole swaths of people to hell for no reason other than that they were trying to find the truth (which the Bible does admonish one to seek) - then he has to be the biggest douchebag in the universe.

That's not evidence against the position though. Really there can never be any evidence against their position - it's theological phlogiston, but it does make it very easy to stop accepting God. Once you do that you realize that a god isn't necessary at all, so why would you believe in one? Especially one that is such a vile, evil, spiteful creature?

Comment author: RBH2 08 January 2008 08:18:26PM 1 point [-]

Here's an example: some time ago I was discussing evolution with a creationist, and was asked "Can you prove it?" I responded that "prove" isn't the appropriate word, but rather scientists gather and evaluate evidence to see what position the evidence most clearly supports. He crowed in jubilation. "Then you don't have any proof!" he exclaimed.

So my response in that situation has changed. I now respond, "Yes, we have the same level of proof that sends people to death row: We've got the DNA!" That's adapted from Sean B. Carroll, author of Making of the Fittest.

With respect to the first potential response you identify,

"The power of science comes from having the ability to change our minds and admit we're wrong. If you've never admitted you're wrong, it doesn't mean you've made fewer mistakes."

I tend to simplify that to "Yup, that even has a technical name: It's called learning. I commend it to your attention." :)

RBH

Comment author: Paul_Gowder 08 January 2008 10:21:40PM 0 points [-]

Ian, your argument fails not merely because premise 1 isn't established apodictically. (Which is the flaw of inductive reasoning generally, but which, as Eliezer tries to point out to the religious, doesn't mean we don't have good reason to believe it.)

It also fails because we have counterexamples up the wazoo. Michael's point about sentient creatures is one of them. But we can generate a lot of others just by diddling around the space in which we define "objects." Balls bounce and roll, bowling balls just roll, spherical objects generally do all sorts of crazy things. So the "spherical things" case is a counterexample too, just so far as you define the class of objects in such a way that spherical things count as objects.

You get a one-to-one mapping of object to function only by defining the objects on the functions, by picking as your object a uni-function (or few-function) idea like "ball." So your argument is actually circular in a sense.

Comment author: g 08 January 2008 11:47:38PM 1 point [-]

Eliezer's use of "the one" is not an error or a Matrix reference, it's a deliberate echo of an ancient rabbinical trope. (Right, Eliezer?)

Comment author: poke 09 January 2008 12:49:47AM 1 point [-]

I think Ian makes an important point: people give their ability to imagine something the same weight as evidence. The most gratuitous example of this, relevant here because it's the impetus for inductive probabilism, is the so-called "problem of induction." Say we have two laws concerning the future evolution of some system, call them L1 and L2, such that at some future time t L2(t) gives a result that is defined only as being NOT the result given by L1(t). L1 is based on observation. L2 represents my ability to imagine that my observations will fail to hold at some future time t. The problem of induction is a result of giving MORE weight to L2 than L1.

Comment author: Eliezer_Yudkowsky 09 January 2008 02:47:11AM 4 points [-]

Actually, I didn't realize "the one comes to us and says" was a rabbinical borrowing until it was pointed out to me. But it seems to have the right tone, and it's syntactical; I care not whether it is grammatical.

Comment author: Paul_Gowder 09 January 2008 02:58:14AM 0 points [-]

Poke, that's a really unhelpful way of thinking about the problem of induction. The problem of induction is a problem of logic in the first instance -- a description of the fact that we *do* have absolute knowledge of the truth of deductive arguments (conditional on the premises being true) but we don't have absolute knowledge of the truth of inductive arguments. And that's just because the conclusion of a deductive argument is (in some sense) contained in the premises, whereas the conclusion of a generalization isn't contained in the individual observations. What's contained in the individual observations (putting on social scientist hat here) is a probability, given one's underlying distribution, of finding data like what you found if the world is a certain way.

That's a real distinction -- it doesn't come from somehow giving weight to imaginary possibilities, it reflects the difference between logical truth (which IS absolute) and empirical truth (which is not).

Comment author: Ian_C. 09 January 2008 07:58:40AM 0 points [-]

Michael: "Balls may roll as well as bounce. They can deflate or inflate, or crumple or explode, or any of a thousand other things." Paul: "It also fails because we have counterexamples up the wazoo."

But even if an object behaves thousands of ways, it is still behaving in those ways and only those ways. If we want to work with it, we must follow cause and effect, we can't simply *will* it to do what we want. That is the case for all objects I know of, there are no counter-examples.

Z. M. Davis: "are you arguing that the concept of omnipotence is incoherent, or merely [...] that we have no reason to believe that any omnipotent entity actually exists?"

I am arguing that observation proves that omnipotence is impossible. If object behavior is determined by the kind of thing it is (which appears to be the case, since the same kinds of things act the same), then it is not determined by anything else, such as the will of an external agent.

"The religious can, without logical self-contradiction, claim that God-in-Her-Infinite-Wisdom chooses to make created objects behave in predictable ways."

Their claim that object behavior is determined by God contradicts the observation that it is determined by what *kind* of thing an object is.

p.s. I think this discussion has gotten a little OT (sorry Eliezer)

Comment author: poke 09 January 2008 11:33:41PM 0 points [-]

Paul Gowder,

I think your response is too general. How does the problem of induction being a deductive argument make the conclusion any less absurd? It's a deductive argument that takes as its premise my ability to imagine something being otherwise. That makes sense if you're an Empiricist philosopher, since you accept an Empiricist psychology a priori, but not a lot of sense if you're a scientist or committed to naturalism. Further, the difference you cite between deductive and inductive arguments (that the former is certain and the latter not) is the conclusion of the problem of induction; you can't use it to argue for the problem of induction.

Comment author: gutzperson 10 January 2008 08:40:30AM 0 points [-]

Really like your article. Thanks

Comment author: Paul_Gowder 10 January 2008 09:07:13AM 0 points [-]

Poke: let's attack the problem a different way. You seem to want to cast doubt on the difference along the dimension of certainty between induction and deduction. ("the difference you cite between deductive and inductive arguments (that the former is certain and the latter not), is the conclusion of the problem of induction; you can't use it to argue for the problem of induction")

Either deduction and induction are different along the dimension of certainty, or they're not. So there are four possibilities: induction = certain, deduction = certain (IC, DC); InC, DnC; IC, DnC; and InC, DC.

Surely, you don't agree that induction gives us certain knowledge. The "imagination-based" story: the fact that the coin came up heads the last three million times gives us a very high probability that the coin is loaded, but not certainty. But you've rejected the "imagination-based" story. I'm fine with that. Because there are real stories. Countless real stories. Every time one scientist repeats another scientist's experiment and gets a different result, it's a demonstration of the fact that inductive knowledge isn't certain: the first scientist validly drew a conclusion from induction as a result of his/her experiments (do you disagree with that??), and the second scientist showed that the conclusion was wrong or at least incomplete. Ergo, induction doesn't give us certain knowledge.

That eliminates two possibilities, leaving us with InC, DnC and InC, DC. The following is a deductive argument. "1. A. 2. A-->B. 3. B." Assume 1 and 2 are true. Do you think we thereby have certain knowledge that B? If so, you seem to be committed to DC, and thereby to a difference between induction and deduction on the domain of certainty.

(Heavens... the things I do rather than sleep.)
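The coin example above can be made concrete with a quick calculation. This is only a sketch: the 50/50 prior and the two-headed alternative hypothesis are hypothetical choices, not anything from the original comment.

```python
from fractions import Fraction

# A sketch of the coin example with hypothetical priors: 50/50 between a
# fair coin and a two-headed coin. After n consecutive heads, the
# posterior for "fair" shrinks rapidly but never reaches exactly zero.
def posterior_fair(n, prior_fair=Fraction(1, 2)):
    """P(fair | n heads) via Bayes' rule against a two-headed alternative."""
    likelihood_fair = Fraction(1, 2) ** n  # P(n heads | fair)
    likelihood_loaded = Fraction(1)        # P(n heads | two-headed)
    evidence = (likelihood_fair * prior_fair
                + likelihood_loaded * (1 - prior_fair))
    return likelihood_fair * prior_fair / evidence

print(posterior_fair(10))      # 1/1025: tiny, but strictly positive
print(posterior_fair(10) > 0)  # True: induction never delivers certainty
```

Exact fractions are used so the "positive but not 1" conclusion isn't an artifact of floating-point rounding.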

Comment author: Manon_de_Gaillande2 13 January 2008 10:32:55AM 0 points [-]

Ian C: What about a universal Turing machine?

Comment author: Kragen_Javier_Sitaker 14 February 2008 01:14:02PM 0 points [-]

Maybe you should try telling some parables about people who thought they had certain knowledge. Maybe some of them should include other people who did not think their knowledge was certain.

Comment author: David_Gerard 19 January 2011 12:37:28PM 1 point [-]

In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away. There are good sources of information and bad sources of information.

This is pretty much the standard argument against Wikipedia. It fails to address the question of "what's it for?"

Comment author: kaz 18 August 2011 09:52:55PM *  3 points [-]

I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can't say you're absolutely certain of the Virgin Birth.

I think that latter statement is equivalent to this:

V = Virgin Birth
G = God appears and proclaims ~V

P(V|G) < 1
∴ P(V) < 1

But that argument is predicated on P(G) > 0. It is internally consistent to believe P(V|G) < 1 and yet P(V) = 1, as long as one also believes P(G) = 0, i.e. one is certain that God will not appear and proclaim ~V.
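kaz's point can be checked with the law of total probability. A sketch, with hypothetical numbers; P(V|~G) is set to 1 to favor the believer as much as possible:

```python
# By total probability: P(V) = P(V|G)P(G) + P(V|~G)P(~G),
# so P(V|G) < 1 only forces P(V) < 1 when P(G) > 0.
def p_v(p_g, p_v_given_g, p_v_given_not_g=1.0):
    return p_v_given_g * p_g + p_v_given_not_g * (1 - p_g)

print(p_v(0.1, 0.0))  # 0.9: a nonzero P(G) drags P(V) below 1
print(p_v(0.0, 0.0))  # 1.0: with P(G) = 0, P(V) = 1 stays consistent
```

(Strictly, P(V|G) is undefined when P(G) = 0; the second line just shows the decomposition no longer constrains P(V).)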

Comment author: Technoguyrob 18 December 2011 08:07:15AM *  0 points [-]

Go a little farther. Let G(X) = God appears and proclaims X. For religions that acknowledge divine revelation, which is all major religions, P(G(X)) has been non-zero for certain X (people have received revelation directly from God). Indeed, granting ultimate authority to God, again a feature of all major religions, means that 0 < P(G(X)) < 1 for all X: claiming there is a statement X such that humans know God will not appear and proclaim X removes ultimate authority from God and assigns part of it to humans. (By the way, we can assume the space of X's is countable, so there is no problem with summing to 1.) So it is not internally consistent to assume, in particular, that P(G(~V)) = 0, without taking ultimate authority away from God (or abandoning probability theory as a way of reasoning about this stuff, as most religions opt to do).

Of course the more productive question is what evolutionary mechanisms allowed human brain architecture to drift so far out of step with reality while remaining productive from a Darwinian point of view. Some would argue that the potential to be so absurdly wrong is what gives brains their computational power in the first place! Bounded rationality under physical constraints is a very active area of research.

Comment author: ike 03 August 2014 04:53:44AM 0 points [-]

For technical reasons of probability theory, if it's theoretically possible for you to change your mind about something, it can't have a probability exactly equal to one.

This is supposed to be an argument against giving anything a 100% probability. I do agree with the concept, but this particular argument seems wrong. It's based on Conservation of Expected Evidence (if the "technical reasons of probability theory" refer to something else, let me know). However, Bayes' rule doesn't just imply that "having a chance of changing your mind" -> "you are not 100% certain"; it also gives us bounds on what posteriors we can have. If we evaluate a 5% chance of changing our minds about something, that would seem to imply that we cannot put more than 95% confidence in our original claim.

So, the reason I reject this is as follows:

EY lays out possible evidence for 2+2=3 here. Imagine you believe at the 50% level that someone will cause you to view that evidence tomorrow. Hypnosis, or some other method. Applying Bayes' rule as EY seems to be applying it here, you should evaluate right now at most a 50% chance that 2+2=4. I think the rational thing to do in that situation (where putting the earplugs together does in fact show 2+2 equaling 4) is to believe that 2+2=4, with around as much confidence as you do now. Therefore, there is something wrong with this line of reasoning.

If anyone can point to what I'm doing wrong, or thinks that in the situation I outlined, the rational thing to do is to evaluate a 50% or lower chance of 2+2=4, I'd like to hear about it.
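The bound described above can be sketched numerically. The 5% figure and the assumption that the mind-changing evidence B would drive credence in A all the way to 0 are hypothetical:

```python
# By total probability:
#   P(A) = P(A|B)P(B) + P(A|~B)P(~B) <= P(A|B)P(B) + (1 - P(B))
# so the chance of changing your mind caps your current credence.
def max_credence(p_change, credence_if_changed=0.0):
    return credence_if_changed * p_change + 1.0 * (1 - p_change)

print(max_credence(0.05))  # 0.95: a 5% chance of changing your mind caps P(A)
```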

Comment author: Jiro 03 August 2014 02:45:56PM 1 point [-]

Not everything that changes your mind is evidence within the meaning of Conservation of Expected Evidence. If there's a 50% chance you will believe X tomorrow, but that situation involves believing X because you're hypnotized, that's not evidence at all and you should not change your current beliefs based on that.

Comment author: ike 03 August 2014 08:50:08PM 1 point [-]

So then, moving on to the argument that "because I might believe 2+2=3 tomorrow (albeit very unlikely), I can't believe 2+2=4 100% today".

If Omega tells you that tomorrow you will believe that 2+2=3, most of your probability mass is concentrated in the possibility that 2+2=4, but you'll be somehow fooled, perhaps by hypnosis or nano-editing of your brain. Very little if any probability mass is for the theory that 2+2 really equals 3, and you'll have the major revelation tomorrow. In order to use this thought experiment to show that I don't have 100% confidence in 2+2=4, you need to assert that the second probability exists, however the thought experiment is also consistent with the first probability being high or one and the second being zero (you can't assume I agree that zero is not a probability, or you're begging the question).

Comment author: Wes_W 03 August 2014 07:12:07PM 1 point [-]

EY lays out possible evidence for 2+2=3 here. Imagine you believe at the 50% level that someone will cause you to view that evidence tomorrow. Hypnosis, or some other method. Applying Bayes' rule as EY seems to be applying it here, you should evaluate right now at most a 50% chance that 2+2=4. I think the rational thing to do in that situation (where putting the earplugs together does in fact show 2+2 equaling 4) is to believe that 2+2=4, with around as much confidence as you do now.

Why do you think that is the correct thing to do in that situation?

Here, in this real situation, yes you should trust your current counting abilities. But if you believe with 50% confidence that, within 24 hours, someone will be able to convince you that your ability to count is fundamentally compromised, you also don't place a high level of confidence on your ability to count things correctly - no more than 50%, in fact.

"I can count correctly" and "[someone can demonstrate to me that] I'm counting incorrectly" are mutually exclusive hypotheses. Your confidence in the two ought not to add up to more than 1.

Comment author: ike 03 August 2014 09:23:08PM 1 point [-]

If I know that I'll actually experience that scenario tomorrow where I wake up and have all available evidence showing that 2+2=3, but now I still visualize XX+XX=XXXX, then I trust my current vast mound of evidence over a future smaller weird mound of evidence. I'm not evaluating "what will I think 2+2= tomorrow?" (as EY points out elsewhere, this kind of question is not too useful). I'm evaluating "what is 2+2?" For that, it seems irrational to trust future evidence when I might be in an unknown state of mind. The sentence EY has repeated, "Those who dream do not know they dream; but when you wake you know you are awake", seems appropriate here. Just knowing that I will be convinced, by whatever means, is not the same as actually convincing me. What if they hack your mind and insert false memories? If you knew someone would do that tomorrow, would you think that the future memories actually happened in your past?

If you're trying to make the argument that "since someone can fool me later, I can be fooled now and wouldn't notice", well, first of all, that doesn't seem to be the argument EY is making. Second, I might have to be in such a situation to be precise, but I'd expect the future that I am being fooled in would have to delete the memory of this sequence of posts (specifically the 2+2=3 post, and this series of comments). The fact that I remember seems to point to the editing/hacking not happening yet.

After thinking of this I see that an intruder would just change all the references from 2+2=4 to 2+2=3 and vice versa, leaving me with the same logic to justify my belief in 2+2=3. So that didn't work.

How about this: once I have to consider my thought processes hacked, I can't unwind past that anyway, so to keep sane I'll have to assume my current thoughts are not corrupted.

Comment author: Wes_W 04 August 2014 08:25:16AM 0 points [-]

I think deception should be treated as a special case, here. Normally, P(X | a seemingly correct argument for X) is pretty high. When you specifically expect deception, this is no longer true.

I'm not sure it's useful to consider "what if they hack your mind" in this kind of conversation. Getting hacked isn't a Bayesian update, and hallucinations do not constitute evidence.

Comment author: ike 04 August 2014 10:43:26AM 0 points [-]

If there was a way to differentiate hallucinations from real vision, then I'd agree, but there isn't.

Anyway, I thought of a (seemingly) knockdown argument for not believing future selves: what if you currently believe at 50% that tomorrow you'll be convinced of 2+2=3, the next day 2+2=5, and the next day 2+2=6? (And that it only has one answer.) If you just blindly took those as minimums, then your total probability mass would be at least 150%. Therefore, you can only trust your current self.

Comment author: Wes_W 04 August 2014 03:31:41PM 1 point [-]

If there was a way to differentiate hallucinations from real vision, then I'd agree, but there isn't.

Sure, but that is a different problem than what I'm talking about. Expecting to hallucinate is different than expecting to receive evidence. If you expect to be actually convinced, you ought to update now. If you expect to be "convinced" by hallucination, I don't think any update is required.

Framing the 2+2=3 thing as being about deception is, IMO, failing to engage with the premise of the argument.

Anyway, I thought of a (seemingly) knockdown argument for not believing future selves: what if you currently believe at 50% that tomorrow you'll be convinced of 2+2=3, the next day 2+2=5, and the next day 2+2=6?

I would be very confused, and very worried about my ability to separate truth from untruth. In that state, I wouldn't feel very good about trusting my current self, either.

Comment author: CCC 04 August 2014 10:10:11AM 0 points [-]

"I can count correctly" and "[someone can demonstrate to me that] I'm counting incorrectly" are mutually exclusive hypotheses. Your confidence in the two ought not to add up to more than 1.

Not entirely. It is possible that someone may be able to provide a convincing demonstration of an untrue fact; either due to deliberate deception, or due to an extremely unlikely series of coincidences, or due to the person giving the demonstration genuinely but incorrectly thinking that what they are demonstrating is true.

So, there is some small possibility that I am counting correctly and someone can demonstrate to me that I am not counting correctly. The size of this possibility depends, among other things, on how easily I can be persuaded.

Comment author: Wes_W 05 August 2014 04:13:59AM *  0 points [-]

It's based on Conservation of Expected Evidence (if the "technical reasons of probability theory" refer to something else, let me know).

By the way, separate from our conversation downthread, I don't think that is the technical reason being referred to. Or at least, it's a rather indirect way of proving that point.

Bayes' Theorem is P(A|B) = P(B|A)P(A)/P(B).

If P(A) = 0, then P(A|B) = P(B|A)*0/P(B) = 0 as well, no matter what P(B|A) and P(B) are. Or in words: if you start with credence exactly zero in some proposition, it is impossible for any piece of evidence to make you update away from that. By the contrapositive, if it is not impossible for you to update away from your original opinion ("change your mind"), your credence is nonzero.

A similar argument holds for probability 1, which should be unsurprising, since P(A) = 1 is equivalent to P(~A) = 0.
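The zero-prior point above can be checked directly. A minimal sketch; the likelihood values are arbitrary:

```python
# With prior exactly 0, Bayes' rule returns posterior 0 no matter how
# strong the evidence; with any nonzero prior, the same evidence moves it.
def bayes_update(prior, p_b_given_a, p_b_given_not_a):
    p_b = p_b_given_a * prior + p_b_given_not_a * (1 - prior)
    return p_b_given_a * prior / p_b

print(bayes_update(0.0, 0.99, 0.01))  # 0.0: no evidence moves a zero prior
print(bayes_update(0.5, 0.99, 0.01))  # 0.99: a nonzero prior updates normally
```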

Comment author: ike 07 August 2014 03:00:22PM 0 points [-]

The problem with this argument is that it assumes that evidence is not altered. What I mean is that Bayesian updating implicitly assumes that all evidence previously used is included in the new calculation, and the new evidence is a strict superset of the old one. However, suppose I hypothetically assign 100% to any math fact "simple enough" that I can verify it mentally in under a minute (to choose an arbitrary time). So today, when I'm visualizing 2+2=4, I can say that I put a 100% confidence on the claim "2+2=4".

Now, is this contradicted by the fact that tomorrow I will see new evidence, causing me to conclude that 2+2=3? No. Aside from seeing new evidence later, my current evidence is being changed. Right now, the evidence consists of actual brain operations that visualize 2+2. Tomorrow, that evidence is in the form of memories of brain operations. If I live in a possible world where only memories can be edited and not actual running brain processes, then tomorrow I will conclude that today's memories were faked. That is not something I can conclude today, because I can repeat the visualization at any time. (One minute after, I might be relying on memories, but at the time, I'm not.)

Comment author: Wes_W 07 August 2014 07:14:25PM *  1 point [-]

However, suppose I hypothetically assign 100% to any math fact "simple enough" that I can verify it mentally in under a minute (to choose an arbitrary time).

That isn't a valid operation.

For one: assigning 100% confidence in your ability to correctly do something on which you do not have, historically, a 100% track record is quite unwise. Probably you aren't even 1-10^-6 reliable, historically, and that would still be infinitely far short of 100%. But it's a toy hypothetical, so realism isn't the primary objection.

More importantly, we don't get to arbitrarily assign probabilities.

The problem with this argument is that it assumes that evidence is not altered. What I mean is that Bayesian updating implicitly assumes that all evidence previously used is included in the new calculation, and the new evidence is a strict superset of the old one.

Bayes' Theorem, as the name implies, is a theorem. It does not assume anything about evidence; it doesn't even mention evidence. It talks strictly about probabilities. All this "evidence" stuff is high-level natural-language abstraction about what the probabilities "mean" - the math itself is a reduction of the concept of evidence. It only assumes some axioms of probability; you may attempt to dispute those if you like, but that would be a very different conversation.

And, because Bayes' Theorem is a theorem, assigning 100% confidence to any proposition of which you could in principle ever cease to have 100% confidence is strictly, provably an error.

The special case of reasoning while unable to trust your own sanity requires lots of conditions that are usually negligible. For example, P(X happened | I remember that X happened) is usually pretty close to 1; for most purposes we can ignore it and pretend "X happened" and "I remember X happened" are the same thing. But if you suspect your memories have been altered, this is no longer true, so you'll have that extra factor in certain calculations.

Nothing that you are describing is outside the domain of the relevant math. It's just weird corner cases.

Comment author: ike 07 August 2014 08:13:46PM *  0 points [-]

Why can't "X happened" be infinite evidence for X, while "I remember that X happened" only finite?

Bayes' theorem applies, but it's not being applied accurately, because of these special cases.

And, because Bayes' Theorem is a theorem, assigning 100% confidence to any proposition of which you could in principle ever cease to have 100% confidence is strictly, provably an error.

Define "you" and "ever". I argue that the "you" who changes their mind tomorrow is not the same observer as the one who decides with 100% probability today, because the one today has information that the one tomorrow doesn't: namely, actual brain ops for today's you, versus memories for tomorrow's you.

I could in principle be convinced that my 100% assessment is wrong: by removing or editing the evidence. That is not Bayesian updating; it's brain editing, followed by a Bayesian update on other evidence.

You're equating today me with tomorrow me, and you can't do that unless all my current evidence will still be there tomorrow.

Why didn't EY use a hypothetical other race (the 223ers), who think that everything is evidence for 2+2=3, as his example? Because we need the same person (or observer; is there a technical term for the thing-doing-the-assessing?) to change their mind. I assert that if memory can't be trusted, it won't count as the "same", so Bayes' theorem can't be applied straightforwardly.

Comment author: Wes_W 08 August 2014 08:06:01AM *  1 point [-]

You could consider a proposition to be infinite evidence for itself, I guess. That seems like maybe a kinda defensible interpretation of P(A|A) = 1. I don't think it gets you anything useful, though.

Define "you" and "ever". I argue that the "you" who changes there mind tomorrow is not the same observer that decides with 100% probability today, because the one today has information that the one tomorrow doesn't; namely, actual brain ops, versus memories for tomorrow you.

[∃ B: P(A|B) ∈ (0,1)] → [P(A) ∈ (0,1)]. Better?

If, having made them, your own probability assessments are meaningless and unusable, who cares what values you assign? Set P(A) = 328+5i and P(B) = octopus, for all it matters.

Additionally, I'm not sure it matters when the mind-changing actually occurs. At the instant of assignment, your mind as it is right that moment should already have a value for P(A|B) - how you would counterfactually update on the evidence is already a fact about you. If you would, counterfactually assuming your current mind operates without interference until it can see and process that evidence, update to some credence other than 1, it is already at that moment incorrect to assign a credence of 1. Whether that chain of events does in fact end up happening won't retroactively make you right or wrong; it was already the right or wrong choice when you made it.

Or, if you get mind-hacked, your choice might be totally moot. But this is generally a poor excuse to deliberately make bad choices.

Comment author: ike 08 August 2014 11:49:52AM *  0 points [-]

[∃ B: P(A|B) ∈ (0,1)] → [P(A) ∈ (0,1)]. Better?

Yes, it makes it clearer what you're doing wrong. I'll do what I should have done earlier, and formalize my argument:

Let's call "2+2=4" A, "2+2=3" B, "I can visualize 2+2=4" C, "I can visualize 2+2=3" D, "I can remember visualizing 2+2=4" E, "I can remember visualizing 2+2=3" F.

So, my claim is that P(A|C) is 1, likewise P(B|D). (Remember, I don't think it's like this in real life, I'm trying to show that the argument put forward to prove that is not sufficient.)

What is the Bayes formula for tomorrow's assessment?

Not, P(A|C,D), which (if <1) would indeed disprove P(A|C)=1.

But, instead, P(A|E,D). This can be less than 1 while P(A|C)=1. I'll just make up some arbitrary numbers as priors to show that.

I'm assuming A and B are mutually exclusive, as are C and D.

P(A)=.75

P(B)=.25 (just assume that it's either 2 or 3)

P(C)=.375

P(D)=.125

P(memory of X | X happened yesterday)=.95

P(memory of X | X didn't happen yesterday)=.001

P(E)=P(C)*.95+P(~C)*.001=0.356875

P(F)= P(D)*.95+P(~D)*.001=0.119625

P(C|A)=.50

P(C|B)=0

P(D|A)=0

P(D|B)=.50

P(A|C) = P(C|A)P(A)/P(C)=(.50*.75)/(.75*.50+.25*0)=1

P(A|C,D) is undefined, because C and D are mutually exclusive (which corresponds to not being able to visualize both 2+2=3 and 2+2=4 at the same time)

P(F,D)=P(D)*.95=0.11875

P(A|E,D) = P(E,D|A)P(A)/P(E,D) = 0 (because P(D|A) is zero).

Using my numbers, you need to derive a mathematical contradiction if there are, truly "technical reasons" for this being impossible.

The mistake you (and EY) are making is that you're not comparing P(A) to P(A|B) for some A,B, but P(A|B) to P(A|C) for some A,B,C.

Added: I made two minor errors in definitions that have been corrected. E and F are not exclusive, and C and D shouldn't be defined as "current", but rather as having happened, which can only be confirmed definitely if they are current. However, they have the same evidential power whenever they happened; it's just that if they didn't happen now, they're devalued because of fragile memory.

Added: Fixed numerical error and F where it was supposed to be E. (And errors with evaluating E and F. I really should not have assumed any values that I could have calculated from values I already assumed. I have less degrees of freedom than I thought.)
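The numbers above can be recomputed with exact arithmetic. A sketch; the variable names are mine, and the memory-reliability figures (.95 and .001) are taken from the comment:

```python
from fractions import Fraction as F

# Recomputing the toy numbers with exact fractions. A = "2+2=4",
# B = "2+2=3"; C and D are the visualizations; E is a memory of C.
pA, pB = F(3, 4), F(1, 4)
pC_given_A, pC_given_B = F(1, 2), F(0)
pD_given_A, pD_given_B = F(0), F(1, 2)

pC = pC_given_A * pA + pC_given_B * pB        # total probability: 3/8
pD = pD_given_A * pA + pD_given_B * pB        # 1/8
pE = pC * F(95, 100) + (1 - pC) * F(1, 1000)  # memory model: 571/1600

pA_given_C = pC_given_A * pA / pC             # Bayes' rule

print(pA_given_C)  # 1: P(A|C) = 1 exactly, as claimed
# Meanwhile P(A|E,D) = P(E,D|A)P(A)/P(E,D) = 0, since P(D|A) = 0 makes
# the joint event (E, D) impossible under A.
```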

Comment author: CCC 08 August 2014 12:23:08PM *  0 points [-]

I'm assuming A and B are mutually exclusive, as are C and D, and E and F.

While A and B being mutually exclusive seems reasonable, I don't think it holds for C and D. And I'm pretty sure that it doesn't hold at all for E and F.

If I remember visualising 2+2=3 yesterday and 2+2=4 the day before, then E and F are both simultaneously true.

P(A)=.75

P(C)=.75

P(C|A)=.50

These three statements, taken together, are impossible. Consider:

Over the 0.75 probability space where C is true (second statement), A is only true in half that space (third statement). Thus, A is false in the other half of that space; therefore, there is a probability space of at least 0.375 in which A is false. Yet A is only false over a probability space of size 0.25 (first statement).

In your calculations further down, you use the value P(C) = (.75*.50+.25*0) = 0.375; using that value for P(C) instead of 0.75 removes the contradiction.

Similarly, the following set of statements lead to a contradiction, considered together:

P(A)=.75

P(B)=.25 (just assume that it's either 2 or 3)

P(D)=.25

P(D|A)=0

P(D|B)=.50

Comment author: ike 08 August 2014 04:12:32PM 1 point [-]

The first and third comments are correct. I made some errors in first typing it up that shouldn't take away from the argument that are now fixed. The third comment is an actual mistake that has also been fixed.

Over the 0.75 probability space where C is true (second statement), A is only true in half that space (third statement).

This is wrong. P(C|A) is read as C given A, which is the chance of C, given that A is true. You're mixing it up with P(A|C). However, if you switch A and C in your paragraph, it becomes a valid critique, which I've fixed, substituting the correct values in. Thanks. (Did I mess anything else up?)

I'm starting to appreciate mathematicians now :)

You need to escape your * symbols so they output correctly.

Comment author: CCC 09 August 2014 05:39:51AM 0 points [-]

You're mixing it up with P(A|C). However, if you switch A and C in your paragraph, it becomes a valid critique, which I've fixed, substituting the correct values in. Thanks.

You're right, I had that backwards.

(Did I mess anything else up?)

Hmmm....

P(F)=.20

P(F)= P(D)*.95+P(C)*.001=0.119125

You have two different values for P(F). Similarly, the value P(E)=0.70 does not match up with P(C), P(D) and the following:

P(memory of X | X happened yesterday)=.95

P(memory of X | X didn't happen yesterday)=.001

None of which is going to affect your point, which seems to come down to the claim that there exist possible events A, B, C, D, E and F such that P(A|C) = 1.

Comment author: Wes_W 08 August 2014 09:37:45PM 0 points [-]

P(A|C) = P(C|A)P(A)/P(C) = (.50*.75)/(.75*.50+.25*0) = 1

blink

Well huh. I suppose I ought to concede that point.

There are probabilities of 0 and (implicitly) 1 in the problem setup. I'm not confident it's valid to start with that; I worry it just pushes the problem back a step. But clearly, it is at least possible for probabilities of 1 to propagate to other propositions which did not start at 1. I'll have to think about it for a while.