Perplexed comments on Taking Ideas Seriously - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (257)
True. But you seem to be assuming that a "theory" has to be a universal law of nature. You are too attached to physics. In other sciences, you can have a theory which is quite explanatory but is not in any sense a "law"; rather, it describes a particular event. Examples:
Probabilities can be assigned to these theories.
And even for universal theories, you can talk about the relative odds of competing theories being correct - say, between a supersymmetric GUT based on E6 and one based on E8. (Notice, I said "talk about the odds", not "calculate them".) And you can definitely calculate how much one particular experimental result shifts those odds.
As you pointed out earlier, we have two ostensibly different ways of investigating the theory that the Chinese discovered America in 1421: the Popperian way, in which this theory and its alternatives are criticized, and the Bayesian way, in which those criticisms are broken down into atomic criticisms, with likelihood ratios attached and multiplied.
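The Bayesian procedure described above - breaking the case into atomic criticisms and multiplying their likelihood ratios - can be sketched in a few lines. All the numbers here are made up purely for illustration, and the multiplication assumes the pieces of evidence are conditionally independent given the hypothesis:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio P(E_i|H) / P(E_i|not H)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Hypothetical numbers for "China discovered America in 1421":
prior = 0.01 / 0.99          # prior odds (assumed prior probability of 1%)
lrs = [3.0, 0.5, 2.0]        # invented atomic likelihood ratios
odds = posterior_odds(prior, lrs)
prob = odds / (1 + odds)     # convert posterior odds back to a probability
print(prob)
```

The point is not the particular output but the structure: each atomic criticism contributes one factor, and the order of multiplication does not matter.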
I've seen plenty of rigorous Popperian discussions but few rigorous -- or even modestly rigorous -- Bayesian discussions, even on this website. One piece of evidence for the China-discovered-America theory is some business about old Chinese maps. How does a Bayesian go about estimating the likelihood ratio P(China discovered America | old maps) / P(China discovered America | no old maps)?
I think you want to ask about P(maps|discover) / P(no maps|discover). Unless both wikipedia and my intuition are wrong.
Does catching you in this error relieve me of the responsibility of answering the question? I hope so. Because I would want to instead argue using something like P(maps|discover) vs P(maps|not discover). That doesn't take you all the way to P(discover), but it does at least give you a way to assess the evidential weight of the map evidence.
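To make "assess the evidential weight of the map evidence" concrete, here is a sketch using invented numbers - both conditional probabilities below are assumptions, not estimates anyone has defended. A common way to express the weight is the Bayes factor P(maps | discover) / P(maps | not discover), or equivalently its logarithm in decibels:

```python
import math

# Assumed (purely illustrative) conditional probabilities:
p_maps_given_discover = 0.3    # maps fairly likely if the discovery happened
p_maps_given_not = 0.05        # maps much less likely otherwise

bayes_factor = p_maps_given_discover / p_maps_given_not
evidence_db = 10 * math.log10(bayes_factor)  # evidential weight in decibels
print(round(bayes_factor, 2), round(evidence_db, 2))
```

As the comment says, this gives the weight of the map evidence without by itself yielding P(discover); you would still need a prior to get posterior odds.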
Now P(Sewing-Machine is a phony) = ?
Here's another personal example of Bayesianism in action. Do you have a sense of how much you updated by? Is P(Richard Dawkins praises Steven Pinker | EP is bunk) / P(Richard Dawkins praises Steven Pinker | EP is not bunk) equal to .5? .999? Any idea?
P("Sewing Machine" is a nym) = 1.0
P(Sewing Machine has been disingenuous) = 0.5 and rising
P(Dawkins praises Pinker | EP is not bunk) is ill-defined because
P(EP is not bunk) = ~0
but I have updated P(Dawkins believes EP is not bunk) to at least 0.5
I don't know what "disingenuous" means.