Today's post, Science Doesn't Trust Your Rationality, was originally published on 14 May 2008. A summary (taken from the LW wiki):

 

The reason Science doesn't always agree with the exact, Bayesian, rational answer is that Science doesn't trust you to be rational. It wants you to go out and gather overwhelming experimental evidence.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Dilemma: Science or Bayes?, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

24 comments
Aharon:

Science is built around the assumption that you're too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn't need a social process of science... right?

Seeing how often overconfidence bias is brought up as a problem, and how rationality camps and the like are set up to combat this bias among others, this assumption doesn't seem like a bad starting point.

Shmi:

In the beginning came the idea that we can't just toss out Aristotle's armchair reasoning and replace it with different armchair reasoning. We need to talk to Nature, and actually listen to what It says in reply. This, itself, was a stroke of genius.

If you do a probability-theoretic calculation correctly, you're going to get the rational answer.

How does one make sure that this "probability-theoretic calculation" is not a "different armchair reasoning"?

Science doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment. [...] Science is built around the assumption that you're too stupid and self-deceiving to just use Solomonoff induction.

This seems like a safe assumption. On the other hand, trusting in your powers of Solomonoff induction and Bayesianism doesn't seem like one: what if you suck at estimating priors and are too unimaginative to account for all the likely alternatives?
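To make that worry concrete, here is the odds form of Bayes' rule with hypothetical numbers (not from the original comment):

$$\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}$$

With a likelihood ratio of 100:1 in favour of H, prior odds of 1:10 give posterior odds of 10:1 (about 91%), while prior odds of 1:10,000 give posterior odds of 1:100 (about 1%). A badly estimated prior, or an alternative you never thought to include in $\neg H$, swings the conclusion completely.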

So, are you going to believe in faster-than-light quantum "collapse" fairies after all? Or do you think you're smarter than that?

Again a straw-collapse. No one believes in faster-than-light quantum "collapse", except for maybe some philosophers of physics.


A speed-of-light-or-slower collapse, applied to spatially separated measurements of entangled particles, seems even more ridiculous.

Shmi:

The collapse model says that after performing a local measurement, the wavefunction locally evolves from the eigenstate that has been measured, nothing else. For a local observer, spacelike separated events do not exist until they come into causal contact with it. That's the earliest time that can be called a measurement time.
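For reference, the textbook projection postulate in standard notation (not part of the original comment): a local measurement on subsystem A with outcome $i$ updates the joint state as

$$|\psi\rangle \;\mapsto\; \frac{(P_i \otimes I)\,|\psi\rangle}{\big\| (P_i \otimes I)\,|\psi\rangle \big\|}$$

where the projector $P_i$ acts only on A. Averaged over outcomes, the reduced state of the distant subsystem B is unchanged by this update, which is why no usable signal is transmitted faster than light.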

For a local observer, spacelike separated events do not exist until they come into causal contact with it.

That sounds like the mind projection fallacy. The fact that the observer does not know about the events doesn't mean they don't exist.

That's the earliest time that can be called a measurement time.

That would imply that whatever measurements we make locally, from the perspective of an observer who hasn't yet interacted with those measurements, our wave function hasn't collapsed yet, and we remain in superposition.

So, how does it make sense that the wave function has collapsed from our perspective?

Shmi:

That sounds like the mind projection fallacy. The fact that the observer does not know about the events doesn't mean they don't exist.

"Exist" should be a taboo word, until you can explain it in terms of other QM concepts.

For a thing to exist, it means that thing is part of the reality that embeds our minds and our experience, whether or not that thing has an effect on our minds and our experience. Of course when I say something exists, it is a prediction of my model of reality. And you might ask how I can defend my model in favor of an alternative that says different things about events with no effect on my experience, and my answer would be that I prefer models that use the same rules whether or not I am looking, in which my reducible mind is not treated as ontologically fundamental.

Shmi:

For a thing to exist, it means that thing is part of the reality that embeds our minds and our experience,

"Reality" is another taboo word. We have no direct QM experience.

I prefer models that use the same rules whether or not I am looking, in which my reducible mind is not treated as ontologically fundamental.

If there is a single lesson from QM, it is that looking (=measurement) affects what happens. This has nothing to do with minds.

... Solomonoff induction ...

Totally agreed. The thing is incomputable in general; how much more reason do you need not to trust yourself to do it correctly? Clearly you can't have a process that relies on computing incomputable things right. I'm becoming increasingly convinced, either via confirmation bias or via proper updates, that Eliezer skipped a hell of a lot of fundamentals.

Shmi:

For those interested in the current state of Born rule research, there is a review in the latest Foundations of Physics.

A note of warning: this journal is heavily skewed towards philosophy and sometimes publishes complete crankery. Even its new chief editor, Nobel prize winner Gerard 't Hooft, writes stuff like this on occasion.

asr:

I think Yudkowsky's analysis here isn't putting enough weight on the social aspects. "Science", as we know it, is a social process, in a way that Bayesian reasoning is not.

The point of science isn't to convince yourself -- it's to convince an audience of skeptical experts.

A large group of people with different backgrounds, experiences, etc., aren't going to agree on their priors. As a result, there won't be any one probability on a given idea. Different readers will have different background knowledge, and that can make a given hypothesis seem more or less believable.

(This isn't avoidable, even in principle. The Solomonoff prior of an idea is not uniquely defined, since encodings of ideas aren't unique. You and the reviewers are not necessarily wrong in putting different priors on an idea even if you are both using a Solomonoff prior. The problem wouldn't go away, even if you and the reviewers did have identical knowledge, which you don't.)
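For what it's worth, the non-uniqueness claim can be stated precisely in standard Kolmogorov-complexity notation (not part of the original comment): the Solomonoff prior $M_U(x) = \sum_{p\,:\,U(p)=x} 2^{-|p|}$ depends on the choice of universal machine $U$, and the invariance theorem only bounds the disagreement between two machines by a constant:

$$\big| \log_2 M_U(x) - \log_2 M_V(x) \big| \;\le\; c_{UV}$$

where $c_{UV}$ depends on $U$ and $V$ but not on $x$. For any particular hypothesis the two priors can still differ by a factor of up to $2^{c_{UV}}$, so two reasoners using different reference machines can legitimately disagree.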

Yudkowsky is right that this makes science much more cautious in updating than a pure Bayesian. But I think that's desirable in practice. There is a lot of value in having a scientific community all use the same theoretical language and the same set of canonical examples. It's expensive (in both human time and money) to retrain a lot of people. Societies cannot change their minds as quickly or easily as their members can, so it makes sense to move more slowly if the previous theory is still useful.

Another issue is that the process should be difficult to subvert maliciously (or non-maliciously, by rationalization of an erroneous belief). That results in a boatload of features that may be frustrating to those wanting to introduce unjustified, untestable propositions for fun and profit (or to justify erroneous beliefs).

asr:

Hrm. My impression is that science mostly isn't organized to catch malicious fraud. It's comparatively rare for outsiders to do a real audit of data or experimental method, particularly if the result isn't super exciting. In compensation, the penalties for being caught falsifying data are ferocious -- I believe it's treated as an absolute career-ending move.

I agree that the process is pretty good at squelching over-enthusiastic rationalization. That's an aspect I thought Yudkowsky captured quite well.

It is part of the difficulty of subverting it - it is difficult to arrange a scheme with positive expected utility for falsifying data. At the same time there are plenty of subtle falsifications, such as discarding negative results. And when it comes to rationality: if you have a hypothesis X that is supported by arguments A, B, C, D and is debunked by arguments E, F, G, H, you can count on rational, self-interested agents to put more effort into finding the first four than the last four, as the payoff for the former is bigger. (A real agent's reasoning costs utility, and it is expensive to find those arguments.)

Consider an issue like AI risk. If you can pick out the few reasons why AI would kill everyone, even very bad reasons that rely on some oracular stuff that is not implementable, you are set for life (and you don't even have to invent them; you can pick them out of fiction and simply collect and promote them together). If you can come up with a few equally good reasons why it wouldn't, that's a pure waste of your time as far as self-interest is concerned. Of course science does not trust you to put in equal effort when it is clearly irrational to put in equal effort, for anyone but the true angels (and even for the true angels it is rational to grab as much money as they can, as easily as they can - money which would otherwise be ill spent - and then donate it to charities, so for the purpose of fact-finding you can't trust even the selfless angels).

It is part of the difficulty of subverting it - it is difficult to arrange a scheme with positive expected utility for falsifying data.

Given that one gets fame for "spectacular" discoveries, not at all - especially in fields like biology, where there are frequently lots of confounding variables that you can use to provide cover.

That has always been the problem with experimental science: sometimes you can't really protect against falsification.

Actually, the thing is, given the list of biases, one shouldn't trust one's own rationality, let alone the rationality of other people. (If a rationalist trusts his own rationality while knowing of the biases... that's just a new kind of irrationalist.) Another issue is that the introduction of novel hypotheses with 'correct priors' lets you introduce a cherry-picked selection of hypotheses, leading to a new hypothesis being held with undue confidence that it wouldn't have had if all possible hypotheses were considered (i.e. if you want to introduce hypothesis A with undue confidence, you introduce hypotheses B, C, D, E, F... which would raise the probability of A, but not G, H, I, J... which would lower it). A fully rational, even slightly selfish agent would do such a thing. It is insufficient to converge when all hypotheses are considered; one has to provide the best approximation at any time. That pretty much makes most methods that sound great in abstract, unbounded theory entirely inapplicable.
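A minimal sketch of the selection effect described above, with hypothetical numbers (nothing here is from the original comment): leaving out the rival hypotheses that explain the evidence just as well inflates the posterior on A, because the normalisation only runs over the hypotheses you chose to consider.

```python
def posterior(priors, likelihoods, target):
    """Posterior of `target` when only the supplied hypotheses are considered;
    the normalising constant runs over that set alone."""
    evidence = sum(priors[h] * likelihoods[h] for h in priors)
    return priors[target] * likelihoods[target] / evidence

# Hypothetical priors and likelihoods of the observed evidence under each hypothesis.
priors = {"A": 0.2, "B": 0.2, "C": 0.2, "G": 0.2, "H": 0.2}
likelihoods = {"A": 0.9, "B": 0.1, "C": 0.1, "G": 0.9, "H": 0.9}

# Cherry-picked hypothesis space: A plus only the weak rivals B and C.
subset = {h: priors[h] for h in ("A", "B", "C")}
print(round(posterior(subset, likelihoods, "A"), 2))   # 0.82

# Full hypothesis space: G and H explain the evidence just as well as A.
print(round(posterior(priors, likelihoods, "A"), 2))   # 0.31
```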

Also, BTW, science does trust your rationality and does trust your ability to set up a probabilistic argument. But it only does so when it makes sense for you to trust your probabilistic argument - when you are actually doing bulletproof math with no gaps where errors creep in.

I disagree with his statements on the effects of state power. Regulation seems to work well enough over here; I don't know where he gets the unsourced assumption that it doesn't.

I disagree with the quoted part of the post. Science doesn't reject your Bayesian conclusion (provided it is rational); it's simply unsatisfied by the fact that it's a probabilistic conclusion. That is, probabilistic conclusions are never knowledge of truth. They are estimations of the likelihood of truth. Science will look at your Bayesian conclusion and say "99% confident? That's good! But let's gather more data and raise the bar to 99.9%!" Science is the constant pursuit of knowledge. It will never reach it, but it will demand we never stop trying to get closer.
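For scale (standard odds arithmetic, not part of the comment): 99% confidence corresponds to odds of 99:1 and 99.9% to 999:1, so raising the bar from 99% to 99.9% requires roughly another factor of ten in the likelihood ratio, i.e. about $\log_2 10 \approx 3.3$ bits of additional evidence.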

Beyond that, I think in a great many cases (not all) there are also some inherent problems in using explicit Bayesian (or other) reasoning for models of reality, because we simply have no idea what the space of hypotheses could be. As such, the best a Bayesian can ever do in this context is give an ordering of models (e.g., this model is better than that one), not definitive probabilities. This doesn't mean science rejects correct Bayesian reasoning for the reason previously stated, but it does mean that in many contexts you can't get definitive probabilistic conclusions with Bayesian reasoning in the first place.
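One way to make the "ordering, not definitive probabilities" point concrete, in standard notation (not from the original comment): the Bayes factor comparing two candidate models,

$$K \;=\; \frac{P(D \mid M_1)}{P(D \mid M_2)}$$

can be computed without ever enumerating the full hypothesis space, and it ranks $M_1$ against $M_2$. An absolute posterior $P(M_1 \mid D)$, by contrast, requires normalising over every hypothesis that could explain $D$, which is exactly the set we don't know how to write down.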

Libertarianism secretly relies on most individuals being prosocial enough to tip at a restaurant they won't ever visit again.

Libertarianism might rely on individuals generally being that prosocial, but that specific thing isn't necessary. Most jobs don't get tips. There's no reason waiters need them.

As I understand it, in the USA waiting staff get paid below minimum wage and are expected to live off tips.

If tipping stopped, waiting staff wages would increase and so would food prices (to pay for the wage increases).