It may come as a surprise to some readers of this blog that I do not always advocate using probabilities.

    Or rather, I don't always advocate that human beings, trying to solve their problems, should try to make up verbal probabilities, and then apply the laws of probability theory or decision theory to whatever number they just made up, and then use the result as their final belief or decision.

    The laws of probability are laws, not suggestions, but often the true Law is too difficult for us humans to compute.  If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute - even though the probabilities would be quite well-defined, if we could afford to calculate them.

    So sometimes you don't apply probability theory.  Especially if you're human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning that don't involve verbal probability assignments.

    Not sure where a flying ball will land?  I don't advise trying to formulate a probability distribution over its landing spots, performing deliberate Bayesian updates on your glances at the ball, and calculating the expected utility of all possible strings of motor instructions to your muscles.

    Trying to catch a flying ball, you're probably better off with your brain's built-in mechanisms, than using deliberative verbal reasoning to invent or manipulate probabilities.

    But this doesn't mean you're going beyond probability theory or above probability theory.

    The Dutch Book arguments still apply.  If I offer you a choice of gambles ($10,000 if the ball lands in this square, versus $10,000 if I roll a die and it comes up 6), and you answer in a way that does not allow consistent probabilities to be assigned, then you will accept combinations of gambles that are certain losses, or reject gambles that are certain gains...
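
    To make the Dutch Book point concrete, here is a minimal sketch with made-up dollar figures: if your answers to such gambles imply probabilities for "lands in the square" and "doesn't land in the square" that sum to more than 1, anyone can sell you both tickets and collect a sure profit.

        # Minimal sketch (illustrative prices only): incoherent betting prices
        # on an event and its complement guarantee a loss no matter what happens.
        PAYOUT = 10_000
        price_in_square = 6_000      # implies P(lands in square)  = 0.6
        price_not_in_square = 6_000  # implies P(doesn't land)     = 0.6, sum = 1.2

        for lands_in_square in (True, False):
            winnings = PAYOUT  # exactly one of the two tickets pays out
            net = winnings - (price_in_square + price_not_in_square)
            print(lands_in_square, net)  # net = -2000 either way: a sure loss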

    Which still doesn't mean that you should try to use deliberative verbal reasoning.  I would expect that for professional baseball players, at least, it's more important to catch the ball than to assign consistent probabilities.  Indeed, if you tried to make up probabilities, the verbal probabilities might not even be very good ones, compared to some gut-level feeling - some wordless representation of uncertainty in the back of your mind.

    There is nothing privileged about uncertainty that is expressed in words, unless the verbal parts of your brain do, in fact, happen to work better on the problem.

    And while accurate maps of the same territory will necessarily be consistent among themselves, not all consistent maps are accurate.  It is more important to be accurate than to be consistent, and more important to catch the ball than to be consistent.

    In fact, I generally advise against making up probabilities unless it seems like you have some decent basis for them.  Making them up without such a basis only fools you into believing that you are more Bayesian than you actually are.

    To be specific, I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities.  Numbers should come from numbers.

    Now there are benefits from trying to translate your gut feelings of uncertainty into verbal probabilities.  It may help you spot problems like the conjunction fallacy.  It may help you spot internal inconsistencies - though it may not show you any way to remedy them.

    But you shouldn't go around thinking that, if you translate your gut feeling into "one in a thousand", then, on occasions when you emit these verbal words, the corresponding event will happen around one in a thousand times.  Your brain is not so well-calibrated.  If instead you do something nonverbal with your gut feeling of uncertainty, you may be better off, because at least you'll be using the gut feeling the way it was meant to be used.

    This specific topic came up recently in the context of the Large Hadron Collider, and an argument given at the Global Catastrophic Risks conference:

    That we couldn't be sure that there was no error in the papers which showed from multiple angles that the LHC couldn't possibly destroy the world.  And moreover, the theory used in the papers might be wrong.  And in either case, there was still a chance the LHC could destroy the world.  And therefore, it ought not to be turned on.

    Now if the argument had been given in just this way, I would not have objected to its epistemology.

    But the speaker actually purported to assign a probability of at least 1 in 1000 that the theory, model, or calculations in the LHC paper were wrong; and a probability of at least 1 in 1000 that, if the theory or model or calculations were wrong, the LHC would destroy the world.

    After all, it's surely not so improbable that future generations will reject the theory used in the LHC paper, or reject the model, or maybe just find an error.  And if the LHC paper is wrong, then who knows what might happen as a result?

    So that is an argument - but to assign numbers to it?

    I object to the air of authority given these numbers pulled out of thin air.  I generally feel that if you can't use probabilistic tools to shape your feelings of uncertainty, you ought not to dignify them by calling them probabilities.

    The alternative I would propose, in this particular case, is to debate the general rule of banning physics experiments because you cannot be absolutely certain of the arguments that say they are safe.

    I hold that if you phrase it this way, then your mind, by considering frequencies of events, is likely to bring in more consequences of the decision, and remember more relevant historical cases.

    If you debate just the one case of the LHC, and assign specific probabilities, it (1) gives very shaky reasoning an undue air of authority, (2) obscures the general consequences of applying similar rules, and even (3) creates the illusion that we might come to a different decision if someone else published a new physics paper that decreased the probabilities.

    The authors at the Global Catastrophic Risks conference seemed to be suggesting that we could just do a bit more analysis of the LHC and then switch it on.  This struck me as the most disingenuous part of the argument.  Once you admit the argument "Maybe the analysis could be wrong, and who knows what happens then," there is no possible physics paper that can ever get rid of it.

    No matter what other physics papers had been published previously, the authors would have used the same argument and made up the same numerical probabilities at the Global Catastrophic Risks conference.  I cannot be sure of this statement, of course, but it has a probability of 75%.

    In general, a rationalist tries to make their mind function at the best achievable power output; sometimes this involves talking about verbal probabilities, and sometimes it does not, but always the laws of probability theory govern.

    If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.

    Now it may be that, by reasoning in this way, I will find myself inconsistent.  For example, I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider being switched on.  If I could prevent only one of these events, I would prevent the lottery.

    On the other hand, if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.
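
    Spelling out the arithmetic behind that test: claiming a probability of 1 - 10^-6 for each of a million independent statements means the expected number of errors is 10^6 × 10^-6 = 1.  Refusing to expect only about one error across a million such statements is just refusing to assign each individual statement a probability as extreme as one in a million.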

    What should I do about this inconsistency?  I'm not sure, but I'm certainly not going to wave a magic wand to make it go away.  That's like finding an inconsistency in a pair of maps you own, and quickly scribbling some alterations to make sure they're consistent.

    I would also, by the way, be substantially more worried about a lottery device with a 1 in 1,000,000,000 chance of destroying the world, than a device which destroyed the world if the Judeo-Christian God existed.  But I would not suppose that I could make one billion statements, one after the other, fully independent and equally fraught as "There is no God", and be wrong on average around once.

    I can't say I'm happy with this state of epistemic affairs, but I'm not going to modify it until I can see myself moving in the direction of greater accuracy and real-world effectiveness, not just moving in the direction of greater self-consistency.  The goal is to win, after all.  If I make up a probability that is not shaped by probabilistic tools, if I make up a number that is not created by numerical methods, then maybe I am just defeating my built-in algorithms that would do better by reasoning in their native modes of uncertainty.

    Of course this is not a license to ignore probabilities that are well-founded.  Any numerical founding at all is likely to be better than a vague feeling of uncertainty; humans are terrible statisticians.  But pulling a number entirely out of your butt, that is, using a non-numerical procedure to produce a number, is nearly no foundation at all; and in that case you probably are better off sticking with the vague feelings of uncertainty.

    Which is why my Overcoming Bias posts generally use words like "maybe" and "probably" and "surely" instead of assigning made-up numerical probabilities like "40%" and "70%" and "95%".  Think of how silly that would look.  I think it actually would be silly; I think I would do worse thereby.

    I am not the kind of straw Bayesian who says that you should make up probabilities to avoid being subject to Dutch Books.  I am the sort of Bayesian who says that in practice, humans end up subject to Dutch Books because they aren't powerful enough to avoid them; and moreover it's more important to catch the ball than to avoid Dutch Books.  The math is like underlying physics, inescapably governing, but too expensive to calculate.  Nor is there any point in a ritual of cognition which mimics the surface forms of the math, but fails to produce systematically better decision-making.  That would be a lost purpose; this is not the true art of living under the law.

    46 comments

    Hardly the most profound addendum, I know, but dummy numbers can be useful for illustrative purposes - for instance, to show how steeply probabilities decline as claims are conjoined.

    For instance, suppose you have a certain level of gut feeling X that the papers saying the LHC will not destroy the world have missed something, a gut feeling Y that, if something has been missed, the LHC would destroy the world, and a third gut feeling Z that the LHC will destroy the world when switched on. Since humans lack multiplication hardware, we can expect that Z ≠ X·Y (and probably Z > X·Y, which might help explain why a girl committed suicide over LHC fears). Should we trust Z directly instead of computing X·Y? I think not. It is better to pull numbers out of your butt and do the math, than pull the result of the math out of your butt directly.
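
    As an illustration with numbers that are themselves pulled out of the air: with X = 1/1,000 and Y = 1/1,000, the conjunction works out to X·Y = 1/1,000,000, whereas an unaided gut estimate of Z could easily land somewhere near 1/10,000 - a hundredfold overestimate produced simply by skipping the multiplication.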

    If I could prevent only one of these events, I would prevent the lottery.

    I'm assuming that this is in a world where there are no payoffs to the LHC; we could imagine a world in which it's decided that switching the LHC on is too risky, but before it is mothballed a group of rogue physicists try to do the riskiest experiment they can think of on it out of sheer ennui.

    In the sentence "Trying to catch a flying ball, you're probably better off with your brain's built-in mechanisms, then using deliberative verbal reasoning to invent or manipulate probabilities," I think you meant "than" rather than "then"?

    This is mostly what economists refer to as the difference between implicit and explicit knowledge. The difference between skills and verbal knowledge. I strongly recommend Thomas Sowell's "Knowledge and Decisions".

    If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.

    I would like to add something to this. Your gut feeling is of course the sum of experience you have had in this life plus your evolutionary heritage. This may not be verbalized because your gut feeling (as an example) also includes single neurons firing which don't necessarily contribute to the stability of a concept in your mind.

    But I warn against then simply following one's gut feeling; of course, if you have to decide immediately (in an emergency), there is no alternative. Do it! You can't get better than the sum of your experience in that moment.

    But usually only having a gut feeling and not being able to verbalize should mean one thing for you: Go out and gather more information! (Read books to stabilize or create concepts in your mind; do experiments; etc etc)

    You will find that gut feelings can change quite dramatically after reading a good book on a subject. So why should you trust them if you have the time to do something about them, viz. transfer them into the symbol space of your mind so the concepts are available for higher-order reasoning?

    This is excellent advice.

    I'd like to add though, that the original phrase was "algorithms that make use of gut feelings... ". This isn't the same as saying "a policy of always submitting to your gut feelings".

    I'm picturing a decision tree here: something that tells you how to behave when your gut feeling is "I'm utterly convinced" {Act on the feeling immediately}, vs how you might act if you had feelings of "vague unease" {continue cautiously, delay taking any steps that constitute a major commitment, while you try to identify the source of the unease}. Your algorithm might also involve assessing the reliability of your gut feeling; experience and reason might allow you to know that your gut is very reliable in certain matters, and much less reliable in others.

    The details of the algorithm are up for debate, of course. For the purposes of this discussion, I place no importance on the details of the algorithm I described. The point is just that these procedures are helpful for rational thinking, they aren't numerical procedures, and a numerical procedure wouldn't automatically be better just because it's numerical.

    Calculating probabilities about nearly any real world event is extremely complex. Someone who accepts the logic of your post shouldn't believe there is much value to Bayesian analysis other than allowing you to determine whether new information should cause you to increase or decrease your estimate of the probability of some event occurring.

    It should be possible for someone to answer the following question: Is the probability of X occurring greater or less than Y? And if you answer enough of these questions you can basically determine the probability of X.
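
    Read that way, the procedure is a bisection search over thresholds.  A hypothetical sketch, where the function standing in for the person being questioned answers only "greater" or "less" at each threshold:

        def elicit_probability(judges_greater_than, iterations=20):
            """Narrow down P(X) by repeatedly asking "is P(X) greater than y?".

            judges_greater_than(y) stands in for the person answering: it
            returns True if they judge P(X) > y, False otherwise.
            """
            lo, hi = 0.0, 1.0
            for _ in range(iterations):
                mid = (lo + hi) / 2
                if judges_greater_than(mid):
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2

        # Example: a respondent whose unverbalized estimate sits near 0.003.
        print(elicit_probability(lambda y: 0.003 > y))  # converges toward 0.003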

    What's the probability the LHC will save the world? That either some side effect of running it, or some knowledge gained from it, will prevent a future catastrophe? At least of the same order of fuzzy small non-zeroness as the doomsday scenario.

    I think that's the larger fault here. You don't just have to show that X has some chance of being bad in order to justify being against it, you also have to show that it's predictably worse than not-X. If you can't, then the uncertain badness is better read as noise at the straining limit of your ability to predict - and that to me adds back up to normality.

    For example, I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider being switched on. If I could prevent only one of these events, I would prevent the lottery.

    On the other hand, if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

    Hmm... might this be the heuristic that makes people prefer a 1% chance of 1000 deaths to a definite death for 5? The lottery would definitely destroy worlds, with as many deaths as killing over six thousand people in each Everett branch. Running the LHC means a higher expected number of dead worlds by your own estimates, but it's all or nothing across universes. It will most probably just be safe.

    If you had a definite number for both P(Doomsday Lottery Device Win) and P(Doomsday LHC) you'd shut up and multiply, but you haven't so you don't. But you still should, because you're pretty sure P(DLHC) >> P(DLDW) even if you don't know a figure for P(DLHC).

    This assumes Paul's assumption, above.

    Recently I did some probability calculations, starting with "made-up" numbers, and updating using Bayes' Rule, and the result was that something would likely happen which my gut said most firmly would absolutely not, never, ever, happen.

    I told myself that my probability assignments must have been way off, or I must have made an error somewhere. After all, my gut couldn't possibly be so mistaken.

    The thing happened, by the way.

    This is one reason why I agree with RI, and disagree with Eliezer.

    Could you give more details, at least about the domain in which you were reasoning? Intuitions vary wildly in calibration with topic.

    "The lottery would definately destroy worlds, with as many deaths as killing over six thousand people in each Everett branch."

    We speak so casually about interpreting probabilities as frequencies across the many worlds, but I would suggest we need a rigorous treatment of what those other worlds are proportionally like before confidently doing so. (Cf. mine and Hal's comments in the June Open Thread.)

    Unknown: I would REALLY like to know details.

    Gunther Greindl: In my gut, I STRONGLY agree. My revealed preferences also match it. However, Philip Tetlock's "Expert Political Judgment" tells me that among political experts, who have much better predictive powers than educated lay-people, specialists in X don't outperform specialists in Y in making predictions about X. This worries me A LOT. Another thing that worries me is that decomposing events exhaustively into their subcomponents makes the aggregate event seem more likely and it seems to me that by becoming an expert you come to automatically decompose events into their subcomponents.

    Eliezer: I am pretty confident that it would be possible in principle, though not in practice due to time constraints, to make a billion statements and get none wrong while keeping correlations fairly low.

    I would not be comfortable with the inconsistency you describe about the lottery. I'm not sure how you can let it stand. I guess the problem is that you don't know which instinct to fix, and just reversing one belief at random is not going to improve accuracy on average.

    Still, wouldn't careful introspection be likely to expose either some more fundamental set of inconsistent beliefs, that you can fix; or at least, to lead you to decide that one of the two beliefs is in fact stronger than the other, in which case you should reverse the weaker one? It seems unlikely that the two beliefs are exactly balanced in your degree of credence.

    For the reactor, I'd say that the reasoning about one in a thousand odds is in fact a good way to go about analyzing the problem. It's how I approach other, similar issues. If I'm considering one of two routes through heavy traffic, I do roughly estimate the odds of running into a traffic jam. These are very crude estimates but they are better than nothing.

    The biggest criticism I would give to such reasoning in this case is that as we go out the probability scale, we have much less experience, and our estimates are going to be far less accurate and calibrated. Furthermore, often in these situations we end up comparing or dividing probabilities, and error percentages go up astronomically in such calculations. So while the final figure may represent a mean, the deviation is so large that even slight differences in approach could have led to a dramatically different answer.

    I would give substantially higher estimates that our theories are wrong - indeed by some measures, we know for sure our theories are wrong since they are inconsistent and none of the unifications work. However I'd give much lower estimates that the theories are wrong in just such a way that would lead to us destroying the earth.

    I assume you were being facetious when you gave 75% odds that the authors would have maintained their opinion in different circumstances. Yet to me, it is a useful figure to read, and does offer insight into how strongly you believe. Without that number, I'd have guessed that you felt more strongly than that.

    If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute - even though the probabilities would be quite well-defined, if we could afford to calculate them.

    ...

    Trying to catch a flying ball, you're probably better off with your brain's built-in mechanisms, then [than?] using deliberative verbal reasoning to invent or manipulate probabilities.

    There's more than just P != NP that defeats trying to catch a flying ball by predicting where it will land and going there. Or, for that matter, trying to go there by computing a series of muscular actions and then doing them. You can't sense where the ball is or what your body is doing accurately enough to plan, then execute actions with the precision required. A probability cloud perfectly calculated from all the available information isn't good enough, if it's bigger than your hand.

    This is how to catch a ball: move so as to keep its apparent direction (both azimuth and elevation) constant.
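
    A rough sketch of that rule as a control loop, in a simplified two-dimensional setting (the gain constant and the geometry are made up for illustration; this is not a claim about what the brain computes):

        import math

        def step_toward_ball(fielder_x, ball_x, ball_height, target_angle, gain=0.5):
            """One step of "keep the ball's apparent elevation angle constant".

            Assumes the ball is ahead of the fielder (ball_x > fielder_x).
            target_angle is the elevation angle (radians) being held constant.
            """
            apparent_angle = math.atan2(ball_height, ball_x - fielder_x)
            error = apparent_angle - target_angle
            # Ball appears too high: back up.  Ball appears too low: run in.
            return fielder_x - gain * error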

    But this doesn't mean you're going beyond probability theory or above probability theory.

    It doesn't mean you're doing probability theory either, even when you reliably win. The rule "move so as to keep the apparent direction constant" says nothing about probabilities. If anyone wants to try at a probability-theoretic account of its effectiveness, I would be interested, but sceptical in advance.

    There's more than just P != NP that defeats trying to catch a flying ball by predicting where it will land and going there. Or, for that matter, trying to go there by computing a series of muscular actions and then doing them.
    You DO realize that some humans are perfectly capable of accomplishing precisely that action, right?

    Eliezer, the correct way to resolve your inconsistency seems to be to be less approving of novel experiments, especially when they aren't yet necessary or probably very useful, and when a bit later we will likely have more expertise with regard to them. I refer to a comment I just made in another thread.

    Can't give details, there would be a risk of revealing my identity.

    I have come up with a hypothesis to explain the inconsistency. Eliezer's verbal estimate of how many similar claims he can make, while being wrong on average only once, is actually his best estimate of his subjective uncertainty. How he would act in relation to the lottery is his estimate influenced by the overconfidence bias. This is an interesting hypothesis because it would provide a measurement of his overconfidence. For example, which would he stop: The "Destroy the earth if God exists" lottery, or "Destroy the earth at odds of one in a trillion"? How about a quadrillion? A quintillion? A googolplex? One in Graham's number? At some point Eliezer will have to prefer to turn off the God lottery, and comparing this to something like one in a billion, his verbal estimate, would tell us exactly how overconfident he is.

    Since the inconsistency would allow Eliezer to become a money-pump, Eliezer has to admit that some irrationality must be responsible for it. I assign at least a 1% chance to the possibility that the above hypothesis is true. Given even such a chance, and given Eliezer's work, he should come up with methods to test the hypothesis, and if it is confirmed, he should change his way of acting in order to conform with his actual best estimate of reality, rather than his overconfident estimate of reality.

    Unfortunately, if the hypothesis is true, by that very fact, Eliezer is unlikely to take these steps. Determining why can be left as an exercise to the reader.

    Unknown, describe the money pump. Also, are you the guy who converted to Christianity due to Pascal's Wager or am I thinking of someone else?

    The tug-of-war in "How extreme a low probability to assign?" is driven, on the one hand, by the need for our probabilities to sum to 1 - so if you assign probabilities >> 10^-6 to unjustified statements of such complexity that more than a million of them could be produced, you will be inconsistent and Dutch-bookable. On the other hand, it's extremely hard to be right about anything a million times in a row.
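
    Written out, with the million-plus statements read as mutually exclusive alternatives: coherence requires P(S_1) + P(S_2) + ... + P(S_N) ≤ 1, so if N > 10^6 and each P(S_i) is much greater than 10^-6, the sum exceeds 1 and a Dutch Book can be constructed against you.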

    My instinct is to look for a deontish human strategy for handling this class of problem, one that takes into account both human overconfidence and the desire-to-dismiss, and also the temptation for humans to make up silly things with huge consequences and claim "but you can't know I'm wrong".

    Eliezer, you are thinking of Utilitarian (also begins with U, which may explain the confusion.) See http://utilitarian-essays.com/pascal.html

    I'll get back to the other things later (including the money pump.) Unfortunately I will be busy for a while.

    Was this speaker a believer in Discworldian probability theory? Which states, of course, that million-to-one chances come up 100% of the time, but thousand-to-one chances never. Maybe those numbers weren't plucked out of the air.

    All we have to do is operate the LHC while standing on one foot, and the probability of the universe exploding will be nudged away from million-to-one (doesn't matter which direction - whoever heard of a 999,999-1 chance coming up?) and the universe will be saved.

    Someone actually bought Pascal's wager? Oh boy. That essay looks to me like a perfect example of someone pulling oh-so-convenient numbers out of their fundament and then updating on them. See, it's math, I'm not delusional. sigh

    Do you know what you get when you mix high energy colliders with Professor Otto Rossler's charged micro black hole theory?

    Answer: a golf ball (in 50 months to 50 years...)

    http://translate.google.com/translate?u=http%3A%2F%2Fwww.20min.ch%2Fnews%2Fwissen%2Fstory%2F24668213&hl=en&ie=UTF8&sl=de&tl=en

    Eliezer, the money pump results from circular preferences, which should exist according to your description of the inconsistency. Suppose we have a million statements, each of which you believe to be true with equal confidence, one of which is "The LHC will not destroy the earth."

    Suppose I am about to pick a random statement from the list of a million, and I will destroy the earth if I happen to pick a false statement. By your own admission, you estimate that there is more than one false statement in the list. You will therefore prefer that I play a lottery with odds of 1 in a million, destroying the earth only if I win.

    It makes no difference if I pick a number randomly between one and a million, and then play the lottery mentioned (ignoring the number picked.)

    But now if I pick a number randomly between one and a million, and then play the lottery mentioned only if I didn't pick the number 500,000, while if I do pick the number 500,000, I destroy the earth only if the LHC would destroy the earth, then you would prefer this state of affairs, since you prefer "destroy the earth if the LHC would destroy the earth" to "destroy the earth with odds of one in a million."

    But now I can also substitute the number 499,999 with some other statement that you hold with equal confidence, so that if I pick 499,999, instead of playing the lottery, I destroy the earth if this statement is false. You will also prefer this state of affairs for the same reason, since you hold this statement with equal confidence to "The LHC will not destroy the earth."

    And so on. It follows that you prefer to go back to the original state of affairs, which constitutes circular preferences and implies a money pump.
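
    The general pattern being invoked, sketched abstractly with placeholder gambles and an arbitrary fee: circular strict preferences let someone charge you for each step around the cycle and hand you back your original position strictly poorer.

        # Abstract money-pump sketch: circular preferences A > B > C > A.
        # You pay a small fee for every swap to a gamble you strictly prefer;
        # after three swaps you hold your original gamble, minus three fees.
        prefers = {("A", "B"), ("B", "C"), ("C", "A")}
        fee = 1.0

        holding, wealth = "B", 0.0
        for offered in ["A", "C", "B"]:        # the pump offers each trade in turn
            if (offered, holding) in prefers:  # you prefer the offered gamble
                holding, wealth = offered, wealth - fee
        print(holding, wealth)                 # back to "B", but three fees poorer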

    I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities. Numbers should come from numbers.

    I very much disagree with this quote, and much of the rest of the post. Most of our reasoning about social stuff does not start from concrete numbers, so this rule would forbid my giving numbers to most of what I reason about. I say go ahead and pick a number out of the air, but then be very willing to revise it upon the slightest evidence that it doesn't fit well with your other numbers. It is anchoring that is the biggest problem. Being forced to pick numbers can be a great and powerful discipline to help you find and eliminate errors in your reasoning.

    Unknown: "God exists" is not well specified. For something like "Zeus Exists" (not exactly that, some guy named Zeus does exist, and in some quantum branch there's probably an AGI that creates the world of Greek Myth in simulation) I would say that my confidence in its falsehood is greater than my confidence in the alleged probability of winning a lottery could be.

    I say go ahead and pick a number out of the air,

    A somewhat arbitrary starting number is also useful as a seed for a process of iterative approximation to a true value.

    I strongly agree with Robin here. Thanks Robin for making the point so clearly. I have to admit that not using numbers may be a better rule for a larger number of people than what they are currently using, as is majoritarianism, but neither is a good rule for people who are trying to reach the best available beliefs.

    It is anchoring that is the biggest problem.

    In the strongest form, that points directly opposite your advice.

    I think that Robin was saying that anchoring, not the arbitrariness of starting points, is the big problem for transitioning from qualitative to quantitative thinking. You can make up numbers and so long as you update them well you get to the right place, but if you anchor too much against movement from your starting point but not against movement towards it you never get to an accurate destination.

    I suspect my statement is the one that needed clarification. I was measuring the size of a problem by the psychological difficulty of overcoming it. If anchoring is too big to overcome, it is better to avoid situations where it applies. And identifying the bias is not (necessarily) much of a step towards overcoming it.

    Numbers are not needed for anchoring. We could arrange the probabilities of the truth of statements into partially ordered sets. This poset can even include statements about the probabilistic relation between statements.

    Well, we should be careful to avoid the barber's paradox though... things like x = {x is more likely than y} are a bad idea

    I think it would be better to avoid just making up numbers until we absolutely have to - until we actually find ourselves playing a lottery for the continued existence of Earth - or until there is some numerical process grounded in statistics that provides the numbers, resting on some assumptions. However, by anchoring probabilities in posets we might get bounds on things for which we cannot compute probabilities.
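
    A hypothetical sketch of getting bounds out of ordering constraints alone (the statement names and the single anchor number are invented for illustration): commit to inequalities between probabilities, anchor only the ones you can actually ground, and propagate upper bounds down the order.

        # Each pair (a, b) asserts P(a) <= P(b); only one number is committed to.
        constraints = [("LHC_doom", "analysis_error"),
                       ("analysis_error", "some_error_somewhere")]
        upper = {"some_error_somewhere": 0.1}  # the only made-up anchor

        # Propagate: a statement inherits the upper bound of anything above it.
        changed = True
        while changed:
            changed = False
            for low, high in constraints:
                if high in upper and upper[high] < upper.get(low, 1.0):
                    upper[low] = upper[high]
                    changed = True

        print(upper)  # every statement below the anchor inherits the 0.1 bound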

    Me: There's more than just P != NP that defeats trying to catch a flying ball by predicting where it will land and going there. Or, for that matter, trying to go there by computing a series of muscular actions and then doing them.

    Caledonian: You DO realize that some humans are perfectly capable of accomplishing precisely that action, right?

    People can catch balls. Nobody can do it by the mechanism described. Fielders in ball games will turn away from the ball and sprint towards where they think it will come down, if they can't run fast enough while keeping it in sight, but they still have to look at the ball again to stand any chance of catching it. The initial sense data itself doesn't determine the answer, however well processed.

    When what you need is a smaller probability cloud, calculating the same cloud more precisely doesn't help. Precision about your ignorance is not knowledge.

    athmwiji, yes, numbers are not necessary for anchoring. I think that they make the anchoring worse, but it would be very bad to avoid numbers just because they make it easy to see anchoring.

    I'll also disagree with the argument Eliezer gives here. See Robin's post. In addition to coming up with a probability with which we think an event will occur, we should also quantify how sure we are that that is the best possible estimate of the probability.

    e.g. I can calculate the odds I'll win a lottery, and if someone thinks their estimate of the odds is much better, then (if we lack time or capital constraints) we can arrange bets about whose predictions will prove more accurate over many lotteries.

    Re: Someone actually bought Pascal's wager? Oh boy.

    E.g. see: Dinesh D'Souza, 8 minutes in.

    if I make up a number that is not created by numerical methods, then maybe I am just defeating my built-in algorithms that would do better by reasoning in their native modes of uncertainty.

    I must remember this post. I argue along those lines from time to time, though I'm pretty sure I think humans are much worse at math (and better at judgement) than you do, so I recommend against talking in probabilities more often.

    One of my favorite lessons from Bayesianism is that the task of calculating the probability of an event can be broken down into simpler calculations, so that even if you have no basis for assigning a number to P(H) you might still have success estimating the likelihood ratio.

    How is that information by itself useful?

    Good question. I didn't have an answer right away. I think it's useful because it gives structure to the act of updating beliefs. When I encounter evidence for some H I immediately know to estimate P(E|H) and P(E|~H) and I know that this ratio alone determines the direction and degree of the update. Even if the numbers are vague and ad hoc this structure precludes a lot of clever arguing I could be doing, leads to productive lines of inquiry, and is immensely helpful for modeling my disagreement with others. Before reading LW I could have told you, if asked, that P(H), P(E|H), and P(E|~H) were worth considering; but becoming acutely aware that these are THE three quantities I need, no more and no less, has made a huge difference in my thinking for the better (not to sound dogmatic; I'll use different paradigms when I think they're more appropriate e.g. when doing math).
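
    For what it's worth, a small sketch of that structure in the odds form of Bayes' theorem (the numbers are placeholders): posterior odds = prior odds × likelihood ratio, which is exactly why P(E|H) and P(E|~H) are the only extra quantities needed.

        def update_odds(prior_odds, p_e_given_h, p_e_given_not_h):
            """Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio."""
            return prior_odds * (p_e_given_h / p_e_given_not_h)

        # Placeholder numbers: a hypothesis at 1:9 odds, evidence 3x likelier under H.
        posterior_odds = update_odds(1 / 9, 0.6, 0.2)
        print(posterior_odds)                         # 0.333..., i.e. 1:3 odds
        print(posterior_odds / (1 + posterior_odds))  # as a probability: 0.25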

    The alternative I would propose, in this particular case, is to debate the general rule of banning physics experiments because you cannot be absolutely certain of the arguments that say they are safe.

    Giving up on debating the probability of a particular proposition, and shifting to debating the merits of a particular rule, is I feel one of the ideas behind frequentist statistics. Like, I'm not going to say anything about whether the true mean is in my confidence interval in this particular case. But note that using this confidence interval formula works pretty well on average.

    This reminds me of your comparison of vague vs precise theories in A Technical Explanation of Technical Explanation - if both are correct, then the precise theory is more accurate than the vague one. But if the precise theory is incorrect and the vague is correct, the vague theory is more accurate. Preciseness is worthless without correctness.

    While the distinction there was about granularity, I think the lesson that preciseness is necessary but not sufficient for accuracy applies here as well. Using numbers makes your argument seem more mathematical, but unless they are the correct numbers - or at least a close enough estimate of the correct numbers - they can't make your argument more accurate.

    If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute 

    What a strange thing for my past self to say.  This has nothing to do with P!=NP and I really feel like I knew enough math to know that in 2008; and I don't remember saying this or what I was thinking.

    To execute an exact update on the evidence, you've got to be able to figure out the likelihood of that evidence given every hypothesis; if you allow all computable Cartesian environments as possible explanations, exact updates aren't computable.  All exact updates take place inside restricted hypothesis classes and they've often got to be pretty restrictive.  Even if every individual hypothesis fits inside your computer, the whole set probably won't.
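
    Spelled out, the expensive part is the normalizing denominator of Bayes' theorem: P(H_k|E) = P(E|H_k)·P(H_k) / Σ_i P(E|H_i)·P(H_i), where the sum runs over every hypothesis in the class.  Admit all computable Cartesian environments and that sum ranges over infinitely many programs whose individual likelihood terms you cannot, in general, evaluate - which is why exact updates only happen inside restricted hypothesis classes.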

    I think a large part of the reason why a lottery with a 1 in 1 billion chance of destroying the world alarms me is that it implies someone built a world-destroying machine to connect to the lottery. Machines turn on accidentally fairly often. I've seen it happen more than once.