Vladimir_Nesov comments on Making Beliefs Pay Rent (in Anticipated Experiences) - Less Wrong

110 Post author: Eliezer_Yudkowsky 28 July 2007 10:59PM


Comment author: Vladimir_Nesov 09 August 2010 08:25:58PM *  1 point [-]

When a belief (hypothesis) is about reality, it responds to new evidence, or arguments about previously known evidence. It's reasonable to expect that as a result, some beliefs will turn out incorrect, and some certainly correct. Either way it's not a problem: you do learn things about the world as a result, whatever the conclusion. You learn that there are no ghosts, but there are rainbows.

The problem is the beliefs that purport to speak about reality but really don't, and so deceive you. Not being connected to reality through anticipated experience, they draw your attention where it does no good, influence your decisions for no good reason, and protect themselves by ignoring any knowledge about the world you obtain.

It is a great heuristic to treat any beliefs that don't translate into anticipated experience with utmost suspicion, or even to run away from them in horror.

Comment author: Dpar 09 August 2010 08:47:13PM 0 points [-]

How would you learn that there are no ghosts? You form the belief "there are ghosts" which leads to the anticipated experience (by your definition of such) that "I will read about ghosts in a book", you go and read about ghosts in a book. Criteria met, belief validated. Same goes for UFOs, psychics, astrology etc. What value does the concept of anticipated experience have if it fails to filter out even the most common fallacious beliefs?

Comment author: Vladimir_Nesov 09 August 2010 09:06:32PM 2 points [-]

That there are books about ghosts is evidence for ghosts existing (but also for lots of other things). There are also arguments against this hypothesis, both a priori and observational. A good model/theory also explains why you'd read about ghosts even though there is no such thing.

Comment author: Dpar 09 August 2010 09:25:01PM 0 points [-]

You're not addressing my core point though. If the criterion of anticipated experience, as you define it, is as likely to be satisfied by fallacious beliefs as by valid ones, what purpose does it serve?

Comment author: Vladimir_Nesov 09 August 2010 09:28:00PM *  1 point [-]

I addressed that question in this comment; if something is unclear, ask away. The difference is between a belief that is incorrect, and a belief that is not even wrong.

Comment author: Dpar 09 August 2010 09:42:10PM 1 point [-]

Alright, I think I see what you're getting at, but I still can't help but think that your definition of sensory experience is too broad to be really useful. I mean, the only type of belief that it seems to filter out is absolute nonsense like "I have a third leg that I can never see or feel", did I get that about right?

Comment author: Vladimir_Nesov 09 August 2010 09:49:11PM *  1 point [-]

> I mean the only type of belief that it seems to filter out is absolute nonsense like "I have a third leg that I can never see or feel", did I get that about right?

Yes. It happens all the time. It's one way nonsense protects itself and persists for a long time in the minds of individual people and cultures.

(More generally, see anti-epistemology.)

Comment author: Dpar 09 August 2010 09:55:46PM 1 point [-]

So essentially what you and Eliezer are referring to as "anticipated experience" is just basic falsifiability then?

Comment author: Vladimir_Nesov 09 August 2010 10:03:10PM 4 points [-]

With a Bayesian twist: things don't actually get falsified, don't become wrong with absolute certainty; rather, observations adjust your level of belief.
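For concreteness, here's a minimal sketch of that twist (all the probabilities are invented for illustration, not taken from anywhere in the thread):

```python
# Sketch: Bayesian updating instead of binary falsification.
# All numbers are illustrative assumptions.

def update(prior, p_obs_given_h, p_obs_given_not_h):
    """Return posterior P(H | observation) via Bayes' theorem."""
    p_obs = prior * p_obs_given_h + (1 - prior) * p_obs_given_not_h
    return prior * p_obs_given_h / p_obs

# Hypothesis H: "there are ghosts".
# Observation: "I read about ghosts in a book".
# Books about ghosts are nearly as likely either way, so the update is tiny:
prior = 0.01
posterior = update(prior, p_obs_given_h=0.99, p_obs_given_not_h=0.95)
print(round(posterior, 4))  # -> 0.0104
```

Nothing gets "falsified"; the observation just nudges the level of belief, and an observation that is almost equally likely either way barely nudges it at all.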

Comment author: SilasBarta 09 August 2010 10:31:02PM 2 points [-]

Slightly OT, but this relates to something that really bugs me. People often bring up the importance of statistical analysis and the possibility of flukes/lab error, in order to prove that, "Popper was totally wrong, we get to completely ignore him and this out-dated, long-refuted notion of falsifiability."

But the way I see it, this doesn't refute Popper, or the notion of falsifiability: it just means we've generalized the notion to probabilistic cases, instead of just the binary categorization of "unfalsified" vs. "falsified". This seems like an extension of Popper/falsifiability rather than a refutation of it. Go fig.

Comment author: Dpar 09 August 2010 11:03:27PM *  4 points [-]

Ok, I understand what you mean now. Now that you've clarified what Eliezer meant by anticipated experience, my original objection no longer applies. Thank you for an interesting and thought-provoking discussion.

Comment author: jimrandomh 09 August 2010 10:09:28PM 1 point [-]

Falsifiability can be quantified, in bits. If the only test you have for whether something's true or not is something lame like whether it appears in stories or not, then you have a tiny amount of falsifiability. If there is a large supply of experiments you can do, each of which provides good evidence, then it has lots of falsifiability.

(This really deserves to be formalized, in terms of something along the lines of expected bits of net evidence, but I'm not sure how to do so, exactly. Expected bits of evidence does not work, because of scenarios where there is a small chance of lots of evidence being available, but a large chance of no evidence being available.)
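A toy calculation makes the problem concrete (the hypotheses and probabilities here are invented purely for illustration):

```python
import math

def expected_bits(outcomes_h1, outcomes_h2):
    """Expected |log2 likelihood ratio| between two hypotheses,
    averaging over outcomes drawn from a 50/50 mixture of the two."""
    total = 0.0
    for p1, p2 in zip(outcomes_h1, outcomes_h2):
        p_mix = 0.5 * (p1 + p2)
        total += p_mix * abs(math.log2(p1 / p2))
    return total

# A lame test: does the claim appear in stories? (outcomes: yes, no)
lame = expected_bits([0.99, 0.01], [0.95, 0.05])
# A good experiment: the hypotheses make sharply different predictions.
good = expected_bits([0.9, 0.1], [0.1, 0.9])
# A decisive but near-unobservable event: ~20 bits if it ever fires,
# but it almost never fires, so the *expected* bits stay tiny.
rare = expected_bits([0.001, 0.999], [1e-9, 1 - 1e-9])
print(lame, good, rare)
```

The rare-but-decisive case is exactly the scenario above: averaging washes out the small chance of overwhelming evidence, so plain expected bits under-rates it.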

Comment author: SilasBarta 09 August 2010 10:20:53PM *  3 points [-]

Just a note about terminology: "expected bits of evidence" also goes by the name of entropy, and is a good thing to maximize in designing an experiment. (My previous comment on the issue.)

And if I understand you correctly, you're saying that the problem with entropy as a measure of falsifiability, is that someone can come up with a crank theory that gives the same predictions in every single case, except one that is near impossible to observe, but which, if it happened, would completely vindicate them?

If so, the problem with such theories is that they have to provide a lot of bits to specify that improbable event, which would be penalized under the MML formalism because it lengthens the hypothesis significantly. That may be what you want to work into a measure of falsifiability.

But then, at that point, I'm not sure if you're measuring falsifiability per se, or just general "epistemic goodness". It's okay to have those characteristics you want as a separate desideratum from falsifiability.

Comment author: Dpar 09 August 2010 11:09:43PM 1 point [-]

Isn't it an essential criterion of falsifiability to be able to design an experiment that can DEFINITIVELY prove the theory false?

Comment author: RobinZ 10 August 2010 12:49:25AM *  4 points [-]

That is the criterion which the Bayesian idea of evidence lets you relax. Instead of saying that "you need to be able to define experiments where at least one result would be completely impossible by the theory", a Bayesian will tell you that "you need to be able to define experiments where the probability of one result under the theory is significantly different from the probability of another result".

Look at, say, the theory that a coin is weighted towards heads. If you want to be pedantic, no result can "definitely prove" that it is not (unusual events can happen), but an even split of heads and tails (or a weighting towards tails) is much more unusual given that theory than a weighting towards heads.
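As a quick sketch of that comparison (the 0.7 bias is an arbitrary number chosen for illustration):

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k heads in n flips of a coin with bias p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Theory: the coin is weighted towards heads (say, p = 0.7).
# Observation: an even 50/50 split in 100 flips.
p_even_given_biased = binom_pmf(50, 100, 0.7)
p_even_given_fair = binom_pmf(50, 100, 0.5)

# Neither probability is zero, so nothing is "definitively" falsified,
# but the likelihood ratio (in the thousands) strongly favours fairness:
print(p_even_given_fair / p_even_given_biased)
```

The pedantic objection dissolves: the weighted-coin theory isn't proven impossible, it's just crushed by the evidence.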

Edit PS: I am totally stealing the meme that "Bayes is a generalization of Popper" from SilasBarta.

Comment author: SilasBarta 10 August 2010 02:05:17PM *  1 point [-]

Steal the meme, and spread it as far and as wide as you possibly can! The sooner it beats out "Popper is so 70 years ago", the better. (Kind of ironic that Bayes long predated Popper, though the formalization of [what we now call] Bayesian inference did not.)

Example of my academically-respected arch-nemesis arguing the exact anti-falsificationist view I was criticizing.

Comment author: thomblake 10 August 2010 02:18:50PM 2 points [-]

> Edit PS: I am totally stealing the meme that "Bayes is a generalization of Popper" from SilasBarta.

I'm pretty sure that was handily discussed in An Intuitive Explanation of Bayes's Theorem and A Technical Explanation of Technical Explanation.

Comment author: JoshuaZ 10 August 2010 02:15:05PM 4 points [-]

As Robin's explained below, Bayesianism doesn't do that. You should also see the work of Lakatos and Quine, who discuss the idea that falsification is flawed because all claims have auxiliary hypotheses and one can't falsify any hypothesis in isolation, even if you are trying to construct a neo-Popperian framework.

Comment author: SilasBarta 10 August 2010 02:42:18PM 3 points [-]

Yes, but that still doesn't show falsificationism to be wrong, as opposed to "narrow" or "insufficiently generalized". Lakatos and Quine have also failed to show how it's a problem that you can't rigidly falsify a hypothesis in isolation: just as you can generalize Popper's binary "falsified vs. unfalsified" to probabilistic cases, you can construct a Bayes net that shows how your various beliefs (including the auxiliary hypotheses) imply particular observations.

The relative likelihoods they place on the observations allow you to know the relative amount by which those various beliefs are attenuated or amplified by any particular observation. This method gives you the functional equivalent of testing hypotheses in isolation, since some of them will be attenuated the most.
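A tiny worked example of that kind of joint update, with invented numbers (a two-node toy, not a full Bayes net library):

```python
from itertools import product

# Toy joint model (all numbers are illustrative assumptions):
# H = the theory is true; A = the auxiliary hypothesis holds
# (e.g. "the instrument works"); O = the predicted observation occurs.
p_h, p_a = 0.5, 0.9

def p_obs(h, a):
    """P(O | H, A): the prediction mostly follows from theory + instrument."""
    if h and a:
        return 0.95  # theory true, instrument works -> prediction expected
    if a:
        return 0.10  # instrument works, theory false -> prediction unlikely
    return 0.50      # broken instrument -> uninformative either way

# Condition on the prediction FAILING (O = False) by enumerating the joint:
post, z = {}, 0.0
for h, a in product([True, False], repeat=2):
    p = (p_h if h else 1 - p_h) * (p_a if a else 1 - p_a) * (1 - p_obs(h, a))
    post[(h, a)] = p
    z += p

p_h_post = sum(p for (h, a), p in post.items() if h) / z
p_a_post = sum(p for (h, a), p in post.items() if a) / z
print(p_h_post, p_a_post)  # H is attenuated sharply, A barely at all
```

The failed prediction attenuates the theory far more than the auxiliary hypothesis (roughly 0.5 → 0.10 versus 0.9 → 0.90), which is the "functional equivalent of testing hypotheses in isolation" described above.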

Comment author: satt 10 August 2010 09:38:07PM *  1 point [-]

If I remember rightly, that's where poor old Popper came unstuck: having thought of the falsifiability criterion, he couldn't work out how to rigorously make it flexible. And as no experiment's exactly 100% uppercase-D Definitive, that led to some philosophers piling on the idea of falsifiability, as JoshuaZ said.

But more recent work in philosophy of science suggests a more sophisticated way to talk about how falsifiability can work in the real world.

The key idea is "severe testing", where a "severe test" is a test likely to expose a specific error in a model, if such an error is present. Those models that pass more, and more severe, tests can be regarded as more useful than those that don't. This approach also disarms the "auxiliary hypotheses" objection JoshuaZ paraphrased; one can just submit those hypotheses to severe testing too. (I wouldn't be surprised to find out that's roughly equivalent to the Bayes net approach SilasBarta mentioned.)
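As a sketch of how a test's severity might be quantified (the 0.7 bias, the 60-heads threshold, and the coin setting are all invented for illustration):

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k heads in n flips of a coin with bias p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Toy test in the spirit of the severity criterion: declare a coin biased
# if at least 60 of 100 flips come up heads.  The test is severe against
# a 0.7-bias error if it would very probably fire when that error is
# actually present, while rarely firing on a fair coin:
severity = p_at_least(60, 100, 0.7)     # P(detect bias | coin biased 0.7)
false_alarm = p_at_least(60, 100, 0.5)  # P(fire | coin actually fair)
print(severity, false_alarm)
```

A model that passes such a test (fewer than 60 heads observed) has survived a probe likely to expose that specific error, which is what makes the pass informative.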