aspera comments on Welcome to Less Wrong! - Less Wrong

Post author: MBlume 16 April 2009 09:06AM




Comment author: aspera 08 October 2012 03:55:29AM 1 point

Thanks Tim.

In the post I'm referring to, EY evaluates a belief in the laws of kinematics based on predicting how long a bowling ball will take to hit the ground when tossed off a building, and then presumably testing it. In this case, our belief clearly "pays rent" in anticipated experience. But what if we know that we can't measure the fall time accurately? What if we can only measure it to within an uncertainty of 80% or so? Then our belief isn't strictly falsifiable, but we can gather some evidence for or against it. In that case, would we say it pays some rent?

My argument is that nearly every belief pays some rent, and no belief pays all the rent. Almost everything couples in some weak way to anticipated experience, and nothing couples perfectly.

Comment author: TimS 10 October 2012 05:47:23PM * 1 point

I think you are conflating the issue of falsifiability with the issue of instrument accuracy. Falsifiability is just one of several conditions for labeling a theory as scientific. Specifically, the requirement is that a theory must detail in advance what phenomena won't happen. The theory of gravity says that we won't see a ball "fall" up or spontaneously enter orbit. When more specific predictions are made, instrument errors (and other issues like air friction) become an issue, but that is not the core concern of falsifiability.

For example, Karl Popper was concerned about the mutability of Freudian psychoanalysis, which seemed capable of explaining both an occurrence and its negative without difficulty. By contrast, the theory of gravity standing alone admits that it cannot explain when an object falls to Earth at a rate different from 9.8 m/s^2. Science as a whole has explanations, but gravity doesn't.

Committing to falsifiability helps prevent failure modes like belief in belief.

Comment author: aspera 10 October 2012 06:38:00PM 0 points

There are a couple things I still don't understand about this.

Suppose I have a bent coin, and I believe that P(heads) = 0.6. Does that belief pay rent? Is it a "floating belief"? It is not, in principle, falsifiable. It's not a question of measurement accuracy in this case (unless you're a frequentist, I guess). But I can gather some evidence for or against it, so it's not uninformative either. It would be useful to have something between grounded and floating beliefs to describe it.
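One way to make this concrete: no finite run of flips refutes either hypothesis outright, but each run shifts the odds between them. A minimal sketch in Python, where the 57-heads-in-100-flips data and the equal prior odds are my own assumptions:

```python
from math import comb

# Two hypotheses about the coin: bent with P(heads) = 0.6, or fair.
# Neither is strictly falsifiable by finite data, but flips still
# gather evidence for one over the other via the likelihood ratio.

def likelihood(p_heads, heads, flips):
    """Probability of observing `heads` heads in `flips` flips given bias p_heads."""
    return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

# Hypothetical data: 57 heads in 100 flips.
heads, flips = 57, 100

# With equal prior odds, posterior odds = likelihood ratio (Bayes factor).
bayes_factor = likelihood(0.6, heads, flips) / likelihood(0.5, heads, flips)
print(f"Bayes factor (bent vs. fair): {bayes_factor:.2f}")
```

The factor comes out modestly above 1: the data favor the bent-coin hypothesis somewhat without "falsifying" either one, which is exactly the in-between status described above.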

Second, when LWers talk about beliefs, or "the map," are they referring to a model of what we expect to observe, or how things actually happen? This would dictate how we deal with measurement uncertainties. In the first case, they must be included in the map, trivially. In the second case, the map still has an uncertainty associated with it that results from back-propagation of measurement uncertainty in the updating process. But then it might make sense to talk only about grounded or floating beliefs, and to attribute the fuzzy stuff in between to our inability to observe without uncertainty.

Your distinction makes sense - I'm just not sure how to apply it.

Comment author: TimS 10 October 2012 07:25:19PM * 3 points

Strictly speaking, no proposition is proven false (i.e. probability zero). A proposition simply becomes much less likely than competing, inconsistent explanations. To speak that strictly, falsifiability requires the ability to say in advance what observations would be inconsistent (or less consistent) with the theory.

Your belief that the coin is bent does pay rent - you would be more surprised by 100 straight tails than if you thought the coin was fair. But after 100 straight tails, neither P=.6 nor P=.5 is particularly consistent with the observations.
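That surprise can be quantified. Assuming independent flips, the probability of 100 straight tails is astronomically small under both of the original hypotheses, while a coin heavily biased toward tails explains the data easily. A rough sketch (the third hypothesis is my own addition for contrast):

```python
from math import log10

# Log-probability (base 10) of 100 straight tails under each candidate bias.
hypotheses = {
    "bent, P(heads)=0.6": 0.4,             # P(tails) = 0.4
    "fair, P(heads)=0.5": 0.5,             # P(tails) = 0.5
    "bent toward tails, P(heads)=0.01": 0.99,
}

log_probs = {name: 100 * log10(p_tails) for name, p_tails in hypotheses.items()}
for name, lp in log_probs.items():
    print(f"P(100 tails | {name}) = 10^{lp:.1f}")
```

Both original hypotheses assign the data a probability below 10^-30, so the observation counts heavily against each of them relative to the tails-biased alternative.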

Map & Territory is a slightly different issue. Consider the toy example of the colored balls in the opaque bag. Map & Territory is a metaphor to remind you that your belief in the proportion of red and blue balls is distinct from the actual proportion. Changes in your beliefs cannot change the actual proportions.
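The opaque-bag metaphor can be made concrete with a toy simulation, where the true proportion, the number of draws, and the Beta(1, 1) prior are all hypothetical choices of mine: the believed proportion drifts toward the actual one as balls are drawn, while the actual proportion never moves.

```python
import random

random.seed(0)

# Territory: the actual proportion of red balls, unaffected by anyone's belief.
TRUE_RED = 0.3

# Map: a Beta(1, 1) prior over the proportion, updated by conjugate counting.
alpha, beta = 1, 1

for _ in range(200):
    drew_red = random.random() < TRUE_RED  # draw a ball, with replacement
    if drew_red:
        alpha += 1
    else:
        beta += 1

estimate = alpha / (alpha + beta)  # posterior mean
print(f"true proportion: {TRUE_RED}, believed proportion: {estimate:.2f}")
# Updating moved the map toward the territory; it never moved the territory.
```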

> Your distinction makes sense - I'm just not sure how to apply it.

When examining a belief, ask "What observations would make this belief less likely?" If your answer is "No such observations exist" then you should have grave concerns about the belief.

Note the distinction between:

  • Observations that would make the proposition less likely

  • Observations I expect

I don't expect to see a duck have sex with an otter and give birth to a platypus, but if I did, I'd start having serious reservations about the theory of evolution.

Comment author: BerryPick6 10 October 2012 09:31:49PM 0 points

I found this extremely helpful as well, thank you.

Comment author: aspera 10 October 2012 09:05:20PM 0 points

That's very helpful, thanks. I'm trying to shove everything I read here into my current understanding of probability and estimation. Maybe I should just read more first.

Comment author: beoShaffer 08 October 2012 04:24:31AM * 0 points

> But what if we know that we can't measure the fall time accurately? What if we can only measure it to within an uncertainty of 80% or so? Then our belief isn't strictly falsifiable, but we can gather some evidence for or against it. In that case, would we say it pays some rent?

Yes. As a more general clarification, making beliefs pay rent is supposed to highlight the same sorts of failure modes as falsifiability while allowing useful but technically unfalsifiable beliefs (e.g., your example, some classes of probabilistic theories).