
Science Isn't Strict Enough

Post author: Eliezer_Yudkowsky 16 May 2008 06:51AM

Followup to: When Science Can't Help

Once upon a time, a younger Eliezer had a stupid theory.  Eliezer18 was careful to follow the precepts of Traditional Rationality that he had been taught; he made sure his stupid theory had experimental consequences.  Eliezer18 professed, in accordance with the virtues of a scientist he had been taught, that he wished to test his stupid theory.

This was all that was required to be virtuous, according to what Eliezer18 had been taught was virtue in the way of science.

It was not even remotely the order of effort that would have been required to get it right.

The traditional ideals of Science too readily give out gold stars. Negative experimental results are also knowledge, so everyone who plays gets an award.  So long as you can think of some kind of experiment that tests your theory, and you do the experiment, and you accept the results, you've played by the rules; you're a good scientist.

You didn't necessarily get it right, but you're a nice science-abiding citizen.

(I note at this point that I am speaking of Science, not the social process of science as it actually works in practice, for two reasons.  First, I went astray in trying to follow the ideal of Science—it's not like I was shot down by a journal editor with a grudge, and it's not like I was trying to imitate the flaws of academia.  Second, if I point out a problem with the ideal as it is traditionally preached, real-world scientists are not forced to likewise go astray!)

Science began as a rebellion against grand philosophical schemas and armchair reasoning.  So Science doesn't include a rule as to what kinds of hypotheses you are and aren't allowed to test; that is left up to the individual scientist.  Trying to guess that a priori would require some kind of grand philosophical schema, and reasoning in advance of the evidence.  As a social ideal, Science doesn't judge you as a bad person for coming up with heretical hypotheses; honest experiments, and acceptance of the results, is virtue unto a scientist.

As long as most scientists can manage to accept definite, unmistakable, unambiguous experimental evidence, science can progress.  It may happen too slowly—it may take longer than it should—you may have to wait for a generation of elders to die out—but eventually, the ratchet of knowledge clicks forward another notch.  Year by year, decade by decade, the wheel turns forward.  It's enough to support a civilization.

So that's all that Science really asks of you—the ability to accept reality when you're beat over the head with it.  It's not much, but it's enough to sustain a scientific culture.

Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment.  If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer?  7.5%.  You cannot say, "I believe she doesn't have breast cancer, because the experiment isn't definite enough."  You cannot say, "I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate."  7.5% is the rational estimate given this evidence, not 7.4% or 7.6%.  The laws of probability are laws.
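The 7.5% figure follows mechanically from Bayes' theorem; a minimal sketch of the arithmetic, using only the numbers given above:

```python
# P(cancer | positive mammography) via Bayes' theorem,
# using the numbers from the paragraph above.
p_cancer = 0.01            # prior: 1% of routinely screened women
p_pos_given_cancer = 0.80  # 80% of women with cancer test positive
p_pos_given_healthy = 0.10 # 10% false positive rate

# Total probability of a positive result.
p_positive = (p_cancer * p_pos_given_cancer
              + (1 - p_cancer) * p_pos_given_healthy)

# Posterior probability of cancer given a positive result.
posterior = p_cancer * p_pos_given_cancer / p_positive

print(round(posterior, 3))  # 0.075, i.e. about 7.5%
```

The exact value is 0.008/0.107 ≈ 7.48%, which is the "7.5%" in the text; no choice in the matter remains once the three input numbers are fixed.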

It is written in the Twelve Virtues, of the third virtue, lightness:

If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims.  For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse.  You must walk through the city and draw lines on paper that correspond to what you see.  If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

In Science, when it comes to deciding which hypotheses to test, the morality of Science gives you personal freedom of what to believe, so long as it isn't already ruled out by experiment, and so long as you move to test your hypothesis.  Science wouldn't try to give an official verdict on the best hypothesis to test, in advance of the experiment.  That's left up to the conscience of the individual scientist.

Where definite experimental evidence exists, Science tells you to bow your stubborn neck and accept it.  Otherwise, Science leaves it up to you.  Science gives you room to wander around within the boundaries of the experimental evidence, according to your whims.

And this is not easily reconciled with Bayesianism's notion of an exactly right probability estimate, one with no flex or room for whims, that exists both before and after the experiment.  It doesn't match well with the ancient and traditional reason for Science—the distrust of grand schemas, the presumption that people aren't rational enough to get things right without definite and unmistakable experimental evidence.  If we were all perfect Bayesians, we wouldn't need a social process of science.

Nonetheless, around the time I realized my big mistake, I had also been studying Kahneman and Tversky and Jaynes.  I was learning a new Way, stricter than Science.  A Way that could criticize my folly, in a way that Science never could.  A Way that could have told me, what Science would never have said in advance:  "You picked the wrong hypothesis to test, dunderhead."

But the Way of Bayes is also much harder to use than Science.  It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.

In Science you can make a mistake or two, and another experiment will come by and correct you; at worst you waste a couple of decades.

But if you try to use Bayes even qualitatively—if you try to do the thing that Science doesn't trust you to do, and reason rationally in the absence of overwhelming evidence—it is like math, in that a single error in a hundred steps can carry you anywhere.  It demands lightness, evenness, precision, perfectionism.

There's a good reason why Science doesn't trust scientists to do this sort of thing, and asks for further experimental proof even after someone claims they've worked out the right answer based on hints and logic.

But if you would rather not waste ten years trying to prove the wrong theory, you'll need to essay the vastly more difficult problem: listening to evidence that doesn't shout in your ear.

(For the benefit of those in the audience who have not been following along this whole time:  Even if you can't look up the priors for a problem in the Handbook of Chemistry and Physics—even if there's no Authoritative Source telling you what the priors are—that doesn't mean you get a free, personal choice of making the priors whatever you want.  It means you have a new guessing problem which you must carry out to the best of your ability.

If the mind, as a cognitive engine, could generate correct estimates by fiddling with priors according to whims, you could know things without looking at them, or even alter them without touching them.  But the mind is not magic.  The rational probability estimate has no room for any decision based on whim, even when it seems that you don't know the priors.

Similarly, if the Bayesian answer is difficult to compute, that doesn't mean that Bayes is inapplicable; it means you don't know what the Bayesian answer is.  Bayesian probability theory is not a toolbox of statistical methods, it's the law that governs any tool you use, whether or not you know it, whether or not you can calculate it.

As for using Bayesian methods on huge, highly general hypothesis spaces—like, "Here's the data from every physics experiment ever; now, what would be a good Theory of Everything?"—if you knew how to do that in practice, you wouldn't be a statistician, you would be an Artificial General Intelligence programmer.  But that doesn't mean that human beings, in modeling the universe using human intelligence, are violating the laws of physics / Bayesianism by generating correct guesses without evidence.)

Added:  Nick Tarleton says:

The problem is encouraging a private, epistemic standard as lax as the social one.

which pinpoints the problem I was trying to indicate much better than I did.


Part of The Quantum Physics Sequence

Next post: "Do Scientists Already Know This Stuff?"

Previous post: "When Science Can't Help"

Comments (58)

Comment author: Hopefully_Anonymous 16 May 2008 07:01:20AM 0 points [-]

getting colder, in my opinion. I think you were more on track back when you posited that the scientific method, etc. were subsumed by Bayes, than this current contrasting of "the ideals of science" and bayesian reasoning/probability.

Comment author: Will_Pearson 16 May 2008 07:36:20AM -2 points [-]

"Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment. If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer? 7.5%. You cannot say, "I believe she doesn't have breast cancer, because the experiment isn't definite enough." You cannot say, "I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate." 7.5% is the rational estimate given this evidence, not 7.4% or 7.6%. The laws of probability are laws."

What is the probability that whatever methods you use to determine whether someone has breast cancer in this case are 100% correct?

We do not live in a maths problem, data can be bad.

Comment author: Eliezer_Yudkowsky 16 May 2008 07:52:12AM 2 points [-]

HA, please include justifications with your Olympian judgments. Thanks.

Will, probabilities are states of partial information, not objective properties of the problem. Unless you have reason to believe the data are wrong in a particular direction, your corrected estimate including meta-uncertainty is no different from the original as betting odds.

Comment author: Ben_Jones 16 May 2008 10:29:54AM 1 point [-]

Science Isn't Strict Enough

Terrible title for a post that talks sense.

The scientific method is there to test theories. It does that perfectly. You're talking about how we formulate those theories in the first place, making best use of priors, current knowledge etc. If that's not Science's job, why the criticism?

If you want to tell us how to formulate better theories, using Bayes, just do that.

Comment author: RobinHanson 16 May 2008 11:43:19AM 2 points [-]

Science only demands that you notice an anvil dropped on your head.

This isn't fair - the Science ideal has higher and more difficult standards than this. Recall this anecdote:

To keep himself honest, Riley would give a colleague the exact value of a key parameter, and himself use this value with noise added in. Only when he had done all he could to reduce errors would he ask for the exact value.

When a hundred mistakes can screw up your experiment, it is damn hard to try to fix them all without using them as excuses to get the result you "expect."

Comment author: Caledonian2 16 May 2008 12:58:35PM -2 points [-]

Knowing an estimated probability of a thing is not the same as knowing the reality of the thing. Whether the estimation is precise or not is irrelevant. The woman either has breast cancer, or she does not have breast cancer - there is no 'probability' about it.

Statistical analysis suggests that there is a 7.5% chance the woman has breast cancer. What does that mean? It means that whether she has breast cancer or not is unknown. You do not believe that she does not have breast cancer. You do not believe that she does have breast cancer. You do not have enough evidence to justify either conclusion.

Comment author: DanielLC 08 May 2012 06:39:37AM 1 point [-]

It means that whether she has breast cancer or not is unknown.

It's always unknown. There is nothing that could possibly happen that will make you absolutely certain. If you want to be sane, you must learn to make decisions with incomplete information. If you want to do this in a sane manner, you must follow the laws of probability.

Comment author: army1987 08 May 2012 10:19:49AM 3 points [-]

Fallacy of Gray. Just because probabilities can't exactly equal 0 or 1 doesn't mean you shouldn't say “I know she has cancer” if the probability is 99.999%. (And, answering to the grandparent, if I say “I believe that she does not have cancer” I just mean that the posterior probability of her not having cancer given everything I know is greater than 50%.)

Comment author: DanielLC 08 May 2012 06:22:35PM 2 points [-]

That's my point. If you treat all shades of gray the same, the result is insanity. If you treat all shades of gray in any manner that doesn't follow the laws of probability, the result is insanity.

And, answering to the grandparent, if I say “I believe that she does not have cancer” I just mean that the posterior probability of her not having cancer given everything I know is greater than 50%.

You can use "believe" that way, but you can't act like everything is true iff it has a higher than 50% chance. You wouldn't want to leave someone untreated on the basis that they only have a 49% chance of having cancer.

Comment author: army1987 08 May 2012 07:23:28PM 0 points [-]

you can't act like everything is true iff it has a higher than 50% chance

I said "believe" not "assume"...

Comment author: Stuart_Armstrong 16 May 2008 01:28:53PM 7 points [-]

But the Way of Bayes is also much harder to use than Science. It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.[...]

But if you try to use Bayes, it is math, and a single error in a hundred steps can carry you anywhere.

Hum... so basically Science works by coarsening the space of outcomes into three states (proven - falsified - uncertain), and then making "proven" and "falsified" into attractors for the scientific method. Since these are attractors, errors are correctable.

While Bayescraft keeps the full space of outcomes and does not create attractors, thus allowing greater precision (useful for hypothesis formulation), but allowing errors to affect the result.

Comment author: cousin_it 09 May 2011 09:42:49PM 2 points [-]

Well, a Bayesian learner should eventually converge on the truth if the prior supports it, so that can be viewed as an "attractor" too...

Comment author: Stuart_Armstrong 12 May 2011 10:27:08PM 6 points [-]

Yes, a perfect Bayesian making perfect updates is perfect, we all know that :-)

My point is that I can remember easily that things are false, or that they are true. But to remember that they are somewhere in between is much harder, unless it's things I really care about. You have to keep track of the data, and compare it with new results.

Comment author: Luke_A_Somers 08 May 2012 02:04:54PM 0 points [-]

But to remember that they are somewhere in between is much harder, unless it's things I really care about.

It isn't in between. Your knowledge of the question is in between. You would like it to be closer to one end or the other. You can apply a whole lot of heuristics without messing this part up.

Comment author: Stuart_Armstrong 08 May 2012 04:09:16PM *  2 points [-]

Your knowledge of the question is in between.

Yes. And that's what's harder to remember. I "know" that Lincoln was assassinated, and I "know" that Charles de Gaulle didn't die in Burma. But trying to remember what my estimate is as to whether it's good or bad for overweight people to go on a diet... that's a lot harder.

Comment author: Stuart_Armstrong 16 May 2008 01:45:39PM 1 point [-]

But if you would rather not waste ten years trying to prove the wrong theory, you'll need to essay the vastly more difficult problem: listening to evidence that doesn't shout in your ear.

I'd say that, in practice, Science has an edge over Bayescraft in some areas of hypothesis formulation (mainly in the "hard" sciences). The laws of gravity are not formulated in a Bayesian fashion, nor are most of the laws of physics. The ability to say "electrons exist, they all behave identically, and they are different from muons" is very useful to creating reasonable hypotheses about their behaviour. The corresponding Bayesian statement, with its probabilistic formulation of the same statement, would be more of a barrier to efficient hypothesis formulation.

Similarly, Newton's laws were formulated with incredible precision and simplicity, based on frankly little experimental evidence. The equivalent Bayesian formulation would have been messy and complicated, and would probably have obscured the essential simplicity of what was going on.

Comment author: ME3 16 May 2008 02:00:17PM 1 point [-]

Similarly, if the Bayesian answer is difficult to compute, that doesn't mean that Bayes is inapplicable; it means you don't know what the Bayesian answer is.

So then what good is this Bayes stuff to us exactly, us of the world where the vast majority of things can't be computed?

Comment author: Caledonian2 16 May 2008 02:03:52PM 1 point [-]

Look, science already has standards for what constitutes valuable hypotheses. It simply doesn't force people to apply those standards in order to be practicing the scientific method.

Maybe you should think more about why those standards aren't seen as a necessary requirement, before you insist that such unenforcement is a weakness and a laxity.

Comment author: billswift 16 May 2008 02:33:58PM 0 points [-]

Given the complications of the calculations and the necessity for evidence in advance to calculate Bayesian probabilities, I suspect coming up with hypotheses, experiments to test them, and running the experiments might take less time than doing the calculations to develop a hypothesis and running an experiment to test it. Of course, if you are advocating doing away with the final experiment to make sure you didn't make mistakes, I don't see how this is much different from Medieval Scholasticism, except you call your answers probabilities rather than Truth.

Comment author: bambi 16 May 2008 03:30:48PM 0 points [-]

Where do we get sufficient self-confidence to pull probabilities for ill-defined and under-measured quantities out of our butts so we can use them in The Formula?

Is there any actually interesting intellectual task that rests on nice justifiable grounded probabilities?

Comment author: Kevin_Dick 16 May 2008 05:05:17PM 1 point [-]

Eliezer. I've been a Believer for 20 years now, so I'm with you. But it seems like you're losing people a little bit on Bayes v Science. You've probably already thought of this, but it might make sense to take smaller pedagogical steps here to cover the inferential distance.

One candidate step I thought of was to first describe where Bayes can _supplement_ Science. You've already identified choosing which hypotheses to test. But it might help to list them all out. Off the top of my head, there's also obviously what to do in the face of conflicting experimental evidence, what to do when the experimental evidence is partially but not exactly on point, what to do when faced with weird (i.e., highly unexpected) experimental evidence, and how to allocate funds to different experiments (e.g., was funding the LHC rational?). I'm certain that you have even more in mind.

Then you can perhaps spiral out from these areas of supplementation to convince people of your larger point. Just a thought.

Comment author: Nick_Tarleton 16 May 2008 06:02:45PM 13 points [-]

People, Bayes-structure doesn't require Bayes-math! Thinking about the math allows us, among other things, to pick computationally efficient approximations that are closer to normative reasoning, as exemplified by past posts like (picks at random) this one. I don't need to have any idea what numerical probabilities I should be assigning to know that P(A&B)<=P(A), P(A|B)>=P(A), if seeing A increases P(B) then seeing ~A must decrease it, and so on. It's a little like how (would a physicist please stop me if this is inaccurate?) a knowledge of quantum mechanics allows you to create semiclassical approximations that make better predictions than classical models but are more tractable than the genuine QM math.
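Nick's qualitative constraints can be checked without assigning any "real" priors at all; a sketch on an arbitrary toy joint distribution (illustrative numbers only, and with the "|" in P(A|B) read as "or", per Nick's own correction further down the thread):

```python
# A toy joint distribution over two binary events A and B,
# used only to illustrate Nick's qualitative Bayes-structure claims.
p = {(True, True): 0.2, (True, False): 0.3,
     (False, True): 0.1, (False, False): 0.4}

p_a = sum(v for (a, b), v in p.items() if a)   # P(A)   = 0.5
p_b = sum(v for (a, b), v in p.items() if b)   # P(B)   = 0.3
p_a_and_b = p[(True, True)]                    # P(A&B) = 0.2
p_a_or_b = p_a + p_b - p_a_and_b               # P(A or B) = 0.6

assert p_a_and_b <= p_a   # P(A & B)  <= P(A)
assert p_a_or_b >= p_a    # P(A or B) >= P(A)

# If seeing A increases P(B), then seeing ~A must decrease it:
p_b_given_a = p_a_and_b / p_a                  # 0.4 > P(B)
p_b_given_not_a = p[(False, True)] / (1 - p_a) # 0.2 < P(B)
assert (p_b_given_a > p_b) == (p_b_given_not_a < p_b)
```

The specific numbers are arbitrary; the point is that these inequalities hold for any joint distribution, which is why the qualitative structure is usable even when the numerical probabilities are unknown.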

As a social ideal, Science doesn't judge you as a bad person for coming up with heretical hypotheses

Surely this is a good thing, else fear of being thought stupid would overly discourage people from raising novel hypotheses. The problem is encouraging a private, epistemic standard as lax as the social one.

Comment author: Unknown 16 May 2008 06:25:44PM 1 point [-]

This is a great post, despite the comments of those like Caledonian who like to be critical just for the sake of being critical.

Comment author: Nick_Tarleton 16 May 2008 07:00:27PM 0 points [-]

P(A|B)>=P(A)

By | here I mean "or", not "given". Sorry.

Comment author: Caledonian2 16 May 2008 07:07:13PM 0 points [-]

Surely this is a good thing, else fear of being thought stupid would overly discourage people from raising novel hypotheses. The problem is encouraging a private, epistemic standard as lax as the social one.

1) How exactly do you intend to enforce the more rigorous private, epistemic standard?

2) How do you expect novel hypotheses to be raised in public if people will not consider them in private?

Comment author: poke 16 May 2008 08:00:02PM 2 points [-]

The problem is that when you talk about "ideal Science" it sounds like you mean something scientific practice attempts to achieve but falls short of but what you're actually discussing is a second-hand imprecise (idealized) description of science. This sort of "science as hypothesis-testing" is a philosophical model. Historians often use it to interpret the history of science (although this has thankfully changed in recent years) and even scientists will resort to it when pressed for a description of their methods. But it's not used (or aimed for) within science; I didn't get any classes on general scientific method (or logic or inductive probabilism), I just learned a set of practical (including mathematical) skills. Science itself is an institutional and social practice and like all institutional and social practices we don't presently understand how it works.

To expand on what I said about your other essay: Being able to create relevant hypotheses is an important skill and one a scientist spends a great deal of his or her time developing. It may not be part of the traditional description of science but that doesn't mean it's not included in the actual social institution of science that produces actual real science here in the real world; it's your description and not science that is faulty. Think of science as movement along a trajectory. The period of apprenticeship in the scientific community that every practicing scientist goes through exists in order to calibrate the budding scientist and set him or her on the right course; it's to get us all moving in the same direction. That this can't be encapsulated into a set of neat rules isn't a failure of science but a failure of descriptions of science.

This isn't unique to science; it's an issue in most institutions. When developing countries try to create a simulacrum of industrial practice from theory and description the result is usually a failure. When developing countries open themselves to foreign industry the newly established facilities, run by foreign experts who have causal ties through history to the very site of origin of their practices, impart a skillset on the local population who often then manage to combine that skillset and their unique understanding of their own culture to create their own businesses that can out-compete foreign industry. This is necessary because we don't have a general understanding of institutions and therefore any description or theory designed to encapsulate what we need to do in order to copy their practices is necessarily incomplete or wrong.

Now, if you're just saying the problem is that you, Eliezer, had a crappy understanding of Science and therefore went astray then what I'm saying supports your thesis. But you seem to be going further than that and making a claim about scientific practice. (It's ambiguous though so I apologize if I have misinterpreted your intent.) I still, however, would reject the notion that Bayesianism is the hidden structure behind the success of science. What you would perhaps say is that when scientists learn to develop worthy hypotheses they are secretly learning how to become good Bayesians or learning cognitive practices that approximate what Bayes would tell us to do. But inasmuch as Bayes can be made to fit any scientific inferences it's being used to address pseudo-problems (i.e., problems of justification) that the inferences did not need to be defended against to begin with; it's in this respect that I think it's unnecessary.

The difference between a scientist and a theologian is not a difference of rationality or a difference between how their cognitive processes approximate Bayesian insights. The difference is simply that one studied science and trained as a scientist and now works in a laboratory while the other studied theology and trained as a theologian and now works in the theology department. The scientist avoids coming to theological conclusions about his scientific studies as a matter of socialization. It is not necessary, however, that this socialization involve a general method for coming to the right conclusions. Science doesn't need any such thing.

The Great Secret of Science, the reason scientists more often than not are the ones who produce science, is that science has all the science. Science begets science. What you learn from rolling balls down an inclined plane allows you to predict the trajectories of projectiles, which allows you to discover that motion is parabolic or analyze the periodic motion of pendulums and eventually you, or one of your colleagues, develops the calculus, and so on. This doesn't all happen in one institution because of some general methodology or some universal recipe for getting the truth; it happens because that institution has all the experts. It's always going to be the guy who understands the science who uses it to create new science because you need to understand the old science to create the new science. Beyond that there's really nothing more left to explain; we have a complete causal explanation of science. If we wanted a philosophical justification of why we should accept science in the face of philosophical skepticism, then we would need to invoke Bayes (or whatever), but I'm not sure you think we need one of those. You seem to apply Bayes as the hidden cause of scientific success rather than the philosophical justification.

Comment author: Caledonian2 16 May 2008 08:24:02PM 1 point [-]

It is not necessary, however, that this socialization involve a general method for coming to the right conclusions. Science doesn't need any such thing.

What science has is a general method for getting rid of incorrect conclusions. One of the many differences between science and theology is that theology does not conform itself to an objective reality. Science has a demonstrated capacity to detect deviation between itself and this reality, a property that no system of thought before it possessed, and none since has developed.

Comment author: Eliezer_Yudkowsky 16 May 2008 08:26:49PM 6 points [-]

Poke: The difference between a scientist and a theologian is not a difference of rationality or a difference between how their cognitive processes approximate Bayesian insights. The difference is simply that one studied science and trained as a scientist and now works in a laboratory while the other studied theology and trained as a theologian and now works in the theology department.

A... fascinating... perspective. But presumably even you admit that they're doing something different, or what does the scientist even learn from their mentor?

Perhaps you would have a different perspective on these matters if you were trying to build a scientist.

Comment author: Tim_Tyler 16 May 2008 08:30:15PM 0 points [-]

Re: "Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment. [...] The laws of probability are laws."

Those who believe this sort of stuff should read up on Hume's problem of induction.

Comment author: ME3 16 May 2008 08:33:28PM 0 points [-]

P(A&B)<=P(A), P(A|B)>=P(A)

Isn't this just ordinary logic? It doesn't really require all of probability theory. I believe that logic is a fairly uncontroversial element of scientific thought, though of course occasionally misapplied.

Comment author: TGGP4 16 May 2008 09:35:54PM 0 points [-]

I might have linked to Hypotheses are overrated before, but I figured I'd do so again.

Comment author: DaveInNYC 16 May 2008 09:50:02PM 2 points [-]

"ME" - I've noticed that people on this forum seem to label ANYTHING that has to do with conditional probability "Bayesian". I'm not quite sure why this is; I have a hard enough time figuring out the real difference between a "frequentist" and a "Bayesian", but reading some of these posts I get the feeling that "Bayesian" around here means "someone who knows basic logic".

Comment author: Richard_Hollerith2 16 May 2008 09:57:49PM 0 points [-]

poke, in my humble opinion, you are not saying anything useful or worthwhile that Eliezer does not already know, and if you would not try so hard to inform Eliezer, it will probably be easier for you to learn useful and worthwhile things from Eliezer. I do not know how to say this more gently.

Comment author: Psy-Kosh 16 May 2008 10:22:43PM 1 point [-]

Tim Tyler: um... maybe I'm completely and utterly not understanding the point, but doesn't the knowledge that the laws of probability are the proper way to represent subjective uncertainty, itself, solve the problem of induction?

Or am I utterly missing some concept here?

Comment author: Eliezer_Yudkowsky 16 May 2008 11:01:50PM 3 points [-]

I disagree pretty strongly with poke's last comment. There's no difference between a scientist and a theologian except that they happen to work in different fields? Maybe that's true of some scientists and some theologians, people who got pushed through the PhD grinder the way that extra cow parts are compressed into sausages. But not the scientists who lead their field - or for that matter, the religious who really care.

Comment author: JulianMorrison 16 May 2008 11:03:24PM -1 points [-]

Poke: I think you have some explaining to do, given that theology had a ten thousand year head start, and sometimes a monopoly on experts. Science came from behind!

Eliezer: here's something that occurred to me, that might amuse you. Po science was the first AI, and the third optimization process.

Comment author: Caledonian2 16 May 2008 11:51:11PM 0 points [-]

Science explicitly rejects the idea of revelatory truth. Theology is pretty much okay with the concept. In fact, that's pretty much all they have, since theologians can't even demonstrate that the supposed object of their study even exists.

So even ignoring all the other possible arguments, I'm going to have to reject your foray into Feyerabendianism on those grounds, poke.

Comment author: RobinHanson 17 May 2008 12:55:34AM 1 point [-]

"What exactly is 'science'?" is a difficult complex question, which can profitably be studied for many years. It is the sort of question best avoided unless one wants to tackled it head on.

Comment author: Will_Pearson 17 May 2008 06:30:49AM 0 points [-]

"Will, probabilities are states of partial information, not objective properties of the problem. Unless you have reason to believe the data are wrong in a particular direction, your corrected estimate including meta-uncertainty is no different from the original as betting odds."

This is not quite what I meant; I was more pointing out that you didn't leave us the opportunity to deny your data, which we would always have in the real world.

I'm also curious about the math for this. Any math I have tried that assumes some error in the data seems to push the probabilities closer to 50%, although by an unknown amount.

E.g. suppose you are trying to estimate the bias in a (quantum) coin flip experiment and you have a faulty detector of when things are heads and tails. Say you observe 70 heads and 30 tails. Assume false negatives (it is heads when you see tails) and false positives happen at the same rate and are independent. If the unknown error rate is E, you then expect about 70E false positives among the observed heads and 30E false negatives among the observed tails.

You might get the right results as the amount of data tends to infinity, but in limited trials things don't look so rosy.
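The effect Will is describing can be sketched numerically. This is my illustration, not part of the original comment; the function name and the specific numbers are made up. With a symmetric misread rate E, the frequency you *observe* is pulled toward 50% relative to the true bias:

```python
# Sketch of the commenter's point: a symmetric detector error pulls the
# observed heads frequency toward 50%. Illustrative only.

def observed_heads_fraction(true_p, error_rate):
    """Expected fraction of *observed* heads, given the true heads
    probability and a symmetric misread rate."""
    # A reading is "heads" if the flip was heads and read correctly,
    # or tails and misread.
    return true_p * (1 - error_rate) + (1 - true_p) * error_rate

print(round(observed_heads_fraction(0.7, 0.0), 2))  # 0.7: perfect detector
print(round(observed_heads_fraction(0.7, 0.1), 2))  # 0.66: pulled toward 0.5
print(round(observed_heads_fraction(0.7, 0.5), 2))  # 0.5: detector is pure noise
```

This matches Will's intuition: any unknown error rate shrinks the apparent 70/30 split toward 50/50, by an amount you can't determine without knowing E.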

Comment author: Tim_Tyler 20 May 2008 07:50:37PM 0 points [-]

Hume's problem of induction is widely regarded as being insoluble. Hume thought it was insoluble - and most people still agree with him. If someone tells you they have solved this problem, they are probably selling something.

Comment author: Eliezer_Yudkowsky 20 May 2008 07:56:50PM 1 point [-]

Insoluble? Give me a break. Somehow I had no trouble predicting, successfully, that the Sun would rise today.

Perhaps you meant to say The Problem of Justifying Induction or The Problem Of Explaining Why Induction Works. There is no problem of induction.

Comment author: Nick_Tarleton 20 May 2008 08:02:44PM 0 points [-]

Don't be disingenuous.

Comment author: Caledonian2 20 May 2008 08:12:39PM 1 point [-]

I have to agree with Eliezer - at least partially - on this one.

Part of the solution to the "Problem of Induction" is to recognize that induction does not produce even the illusion of objective truth. It is justified, and needs to be justified, only as far as the individual perspective of the reasoner and his limited data. It cannot be taken beyond this, but it doesn't need to be.

The real key is to recognize that deduction is actually a subset of induction, and it no more offers absolute certainty or confidence than any other sort. If the concept of 'knowledge' is to have any utility at all, it cannot rely on those impossibilities - and so the problem ceases to be a problem because it is ubiquitous and inevitable.

I can be justified in my belief that the sun will rise tomorrow; I can be justified in my belief that two plus three equals five. These are not different in kind, only in degree.

Comment author: Tim_Tyler 20 May 2008 08:25:27PM 0 points [-]

Hume's problem of induction is ancient. It is a basic issue in the philosophy of science:

"The problem puts in doubt all empirical claims made in everyday life or through the scientific method."

http://en.wikipedia.org/wiki/Problem_of_induction

Successfully predicting multiple sunrises doesn't begin to address the problem.

Bayesian reasoning doesn't solve the problem. It simply assumes a solution to the problem, and proceeds from there. A perfectly rational agent who denies the validity of induction would be totally unimpressed by Bayesian arguments.

Comment author: Caledonian2 20 May 2008 08:31:08PM 1 point [-]

A totally rational agent who denied the validity of induction would be unable to think.

Hume's complaint is that there is uncertainty and doubt in all conclusions. That's a "problem" in precisely the same way that Gödel's incompleteness theorems are a "problem" for our attempts to make a consistent and complete model of mathematics.

Comment author: Tim_Tyler 20 May 2008 08:35:49PM 0 points [-]

Here's how Binmore puts it:

"Someone who acts as if Bayesianism were correct will be said to be a Bayesianite.

It is important to distinguish a Bayesian like myself—someone convinced by Savage’s arguments that Bayesian decision theory makes sense in small worlds—from a Bayesianite. In particular, a Bayesian need not join the more extreme Bayesianites in proceeding as though:

* All worlds are small.

* Rationality endows agents with prior probabilities.

* Rational learning consists simply in using Bayes' rule to convert a set of prior probabilities into posterior probabilities after registering some new data.

Bayesianites are often understandably reluctant to make an explicit commitment to these principles when they are stated so baldly, because it then becomes evident that they are implicitly claiming that David Hume was wrong to argue that the principle of scientific induction cannot be justified by rational argument."

- http://www.carloalberto.org/files/binmore.pdf

Re: "A totally rational agent who denied the validity of induction would be unable to think."

No, they would be a perfectly rational agent, quite capable of logical thought.

Comment author: Caledonian2 20 May 2008 10:26:09PM 0 points [-]

Since all reasoning is inductive, it would have a little consistency problem.

Comment author: poke 20 May 2008 11:00:03PM 0 points [-]

Hume defends two separate theses, inductive fallibilism and inductive skepticism, at different points in his work. Inductive fallibilism, that inductive arguments are inherently fallible, is widely accepted in philosophy. Inductive skepticism, that induction can never be justified, is not. Inductive probabilism, that induction gives us probabilities, is a position that accepts inductive fallibilism. David Stove's Scientific Irrationalism gives a good account of why inductive fallibilism succeeds where inductive skepticism fails. He also hammers on the important point that the problem of induction is a logical thesis and not a historical thesis; it's a problem of justifying induction and not a description of induction. Induction is still possible even if you can't justify it. The problem of induction is also only a problem if you accept Hume's premises (big-E Empiricism) and, obviously, the methodology of philosophy to begin with.

Comment author: Unknown 21 May 2008 03:13:17AM 0 points [-]

Inductive skepticism, as I understand it, is Hume's position that observing the sun rise today does not increase the probability that the sun will rise tomorrow.

This would be true only on the assumption that there is a 100% chance that whether or not the sun rises is completely random. If there is at least a one in a billion chance that the sun rises according to rule, then observing the sun rise once will increase the probability that it will rise next time.

How does a position merit the title "skeptical" when it maintains an infinite certainty of something completely contrary to experience, namely that everything is totally random?
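Unknown's "one in a billion" argument can be made concrete with a small Bayesian calculation. This is my sketch, not part of the thread: mix a "rule" hypothesis (the sun always rises) with a "random" hypothesis (each day is a fair coin flip), and give the rule hypothesis Unknown's one-in-a-billion prior. The function name and numbers are illustrative.

```python
# Two-hypothesis mixture: "rule" (sun always rises) vs "random"
# (each day an independent fair coin flip). Prior on the rule is
# Unknown's one-in-a-billion figure.

def p_rise_tomorrow(prior_rule, n_observed):
    """Posterior probability of a sunrise tomorrow, after observing
    n sunrises in a row."""
    like_rule = 1.0                  # the rule predicts every sunrise
    like_random = 0.5 ** n_observed  # random assigns (1/2)^n to the run
    post_rule = (prior_rule * like_rule) / (
        prior_rule * like_rule + (1 - prior_rule) * like_random)
    # Average tomorrow's prediction over the two hypotheses.
    return post_rule * 1.0 + (1 - post_rule) * 0.5

print(p_rise_tomorrow(1e-9, 0))    # ~0.5: prior alone barely favors a rise
print(p_rise_tomorrow(1e-9, 30))   # ~0.76: a month of sunrises already helps
print(p_rise_tomorrow(1e-9, 100))  # ~1.0: the rule hypothesis dominates
```

So long as the rule hypothesis has *any* nonzero prior, each observed sunrise raises the probability of the next one, which is exactly Unknown's point against the inductive skeptic.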

Comment author: Tim_Tyler 21 May 2008 07:58:18AM 0 points [-]

Re: "Since all reasoning is inductive, it would have a little consistency problem." No: see deductive reasoning.

Re: "Hume's complaint is that there is uncertainty and doubt in all conclusions" - Hume's problem of induction is only concerned with induction.

Re: acceptance of induction among philosophers: Hume's point was not that induction was common or mistaken - but that it is not rational; it lacks justification that would convince a sceptical rational agent.

Re: "How does a position merit the title 'skeptical' when it maintains an infinite certainty of something completely contrary to experience" - the idea that the past cannot predict the future is not contrary to experience. It conflicts with evolutionary biology - but good luck convincing an inductive sceptic that evolutionary biology is correct - the whole enterprise is founded on induction.

Comment author: Unknown 21 May 2008 08:07:28AM 0 points [-]

Tim: contrary to experience or not, how is it justified to be 100% certain that everything is random?

If you are not 100% certain of this, then induction can be justified, and without any circularity.

Comment author: Tim_Tyler 21 May 2008 08:51:11AM 0 points [-]

Re: "how is it justified to be 100% certain that everything is random". That is not what inductive sceptics think. They think that using induction to understand the world has no logical basis (and they are perfectly right about that). That does not mean that the world is without pattern or meaning - just that using induction in an attempt to extract the patterns is not justifiable behaviour. If you want to put induction in your toolbox, then fine, but you can't pretend that this behaviour has a coherent justification - because you have no counter-argument to a sceptic who says it should be left out.

Hume's problem of induction is basic philosophy of science material - and you ought to know about it if you are discussing this kind of material. Not familiar with the topic? Don't ask here - instead, hit the library, there is a lot of existing material on the subject.

Comment author: Unknown 21 May 2008 11:46:46AM -1 points [-]

Tim, I am very familiar with the topic.

So do you think that observing the sun rise today does not increase the probability that it will rise tomorrow?

(One warning in advance: unless you say that you are 100% certain that the sun rises by chance, your inductive skepticism can be logically demonstrated to be false.)

Comment author: timtyler 04 June 2011 09:04:56PM -3 points [-]

Er, I like induction - but my personal feelings on the matter were programmed into me by evolution, and make not the slightest difference to the problem of induction.

I am still a bit weirded-out at how people don't seem to be familiar with this issue. Isn't this rather basic material?

Comment author: Dihymo 01 June 2008 10:48:02PM -1 points [-]

A rising sun might increase the data pointing to the provability that it will rise tomorrow, but the probability remains the same.

The discovery that the Earth rotates, easily done by studying the stars from two different places, would dramatically send the probability to 100% because the provability went there first.

So if you want to know about the sun rise you'll have to study the stars first. At night. It's like trying to figure out why ice melts without having any source of heat.

Stop with the dialectics. Try three not two not one and not zero.

Comment author: afeller08 01 November 2012 01:59:31AM 0 points [-]

Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment. If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer? 7.5%. You cannot say, "I believe she doesn't have breast cancer, because the experiment isn't definite enough." You cannot say, "I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate." 7.5% is the rational estimate given this evidence, not 7.4% or 7.6%. The laws of probability are laws.

I try to do the math when you pose a problem. I'm pretty sure in this case the rational estimate is 7.4%. If 1000 women get tested, you expect 8 of those women to be true positives and 100 to be false positives. 8/108 is .074074... (ellipsis for repeating, I don't know how to do a superscripted bar in a comment here). I have no particular objections to rounding for ease of communication, and would ordinarily consider this sort of correction to be an unnecessary nitpick, but in this case, I'm objecting to the statement that 7.4% is not the correct rational estimate given the evidence, not the statement that 7.5% is. If you happen to read this comment, you might want to change that.

Comment author: lavalamp 01 November 2012 03:27:00AM *  0 points [-]

8/108 is not the correct calculation. You want 8/107. That's women with cancer and a positive test divided by all women with a positive test. Out of 1000 women, there are 99, not 100 false positives (10% of 990 women without cancer).

or: .01 * .8 / (.01 * .8 + .99 * .1) = 7.4766355%
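lavalamp's arithmetic can be checked directly. This is an illustrative sketch, not part of the thread; the function name is mine, but the inputs are the ones quoted in the post.

```python
# Direct check of the mammography numbers, as corrected by lavalamp:
# P(cancer | positive test) via Bayes' rule.

def p_cancer_given_positive(prevalence, sensitivity, false_pos_rate):
    """Bayes' rule: P(cancer | positive mammography)."""
    true_positives = prevalence * sensitivity            # 0.01 * 0.8
    false_positives = (1 - prevalence) * false_pos_rate  # 0.99 * 0.1
    return true_positives / (true_positives + false_positives)

p = p_cancer_given_positive(0.01, 0.8, 0.1)
print(round(p, 4))  # 0.0748, i.e. ~7.5%, not 7.4%
```

The denominator is 0.008 + 0.099 = 0.107 (all positive tests), not 0.108: afeller08's 8/108 double-counts by taking the 10% false-positive rate over all 1000 women rather than over the 990 without cancer.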