TheAncientGeek comments on Self-Congratulatory Rationalism - Less Wrong

51 Post author: ChrisHallquist 01 March 2014 08:52AM


Comment author: Lumifer 23 April 2014 08:30:33PM 2 points [-]

What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it's perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability.

I think you are talking about what in local parlance is called a "weak prior" vs. a "strong prior". Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed by even fairly unremarkable evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.

In this terminology, your pre-roll estimate of the probability of double sixes is a weak prior -- the evidence of an actual roll will totally overwhelm it. But your estimate of the correctness of modern evolutionary theory is a strong prior -- it will take a great deal of convincing evidence to persuade you that the theory is not correct after all.

Of course, the posterior of a previous update becomes the prior of the next update.
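To make the weak/strong distinction concrete, here is a minimal sketch (my illustration, not from the thread) using conjugate Beta priors for a coin's heads-probability: a Beta(a, b) prior updated on h heads and t tails yields a Beta(a + h, b + t) posterior, so the prior's "strength" is just its pseudo-count a + b.

```python
def posterior_mean(a, b, heads, tails):
    """Mean of the Beta(a + heads, b + tails) posterior for a coin's
    heads-probability, given a Beta(a, b) prior and the observed flips."""
    return (a + heads) / (a + b + heads + tails)

evidence = (9, 1)  # 9 heads, 1 tail

# Weak prior: Beta(1, 1), i.e. uniform -- only 2 pseudo-observations.
weak = posterior_mean(1, 1, *evidence)

# Strong prior: Beta(500, 500) -- 1000 pseudo-observations centered at 0.5.
strong = posterior_mean(500, 500, *evidence)

# The same evidence moves the weak prior far more than the strong one.
print(round(weak, 3))    # ~0.833
print(round(strong, 3))  # ~0.504
```

Both posteriors would become the priors for the next batch of flips, matching the point above.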

Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.

And I don't see why this should be so.

Comment author: TheAncientGeek 23 April 2014 08:35:21PM *  -1 points [-]

Because you are not engaged in establishing facts about how smart someone is; you are instead trying to establish facts about what they mean by what they say.