Too many people attempt to use logic when they should be using probabilities - or rather, when they already are using probabilities but don't say so. Here are some of the major fallacies caused by conflating logic and probabilities this way:
- "It's not certain" does not mean "It's impossible" (and vice versa).
- "We don't know" absolutely does not imply "It's impossible".
- "There is evidence against it" doesn't mean much on its own.
- Being impossible *in a certain model* does not mean being impossible: it shifts the question to the probability of the model.
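The last point can be made concrete by marginalizing over models. A minimal sketch, with made-up numbers: suppose event X is impossible under model A but possible under model B, and we are not certain which model holds. Then X is not impossible overall, since P(X) = P(X|A)P(A) + P(X|B)P(B).

```python
# Sketch: "impossible in a certain model" vs. impossible outright.
# The likelihoods and model credences below are assumed toy values.
p_x_given = {"A": 0.0, "B": 0.2}   # P(X | model): impossible under A, possible under B
p_model   = {"A": 0.9, "B": 0.1}   # our credence in each model

# Law of total probability: marginalize X over the models.
p_x = sum(p_x_given[m] * p_model[m] for m in p_model)
print(p_x)  # 0.02 -- small, but not zero
```

Even with 90% credence in the model that forbids X, the overall probability of X is nonzero; the "impossibility" was conditional on the model all along.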
In Jaynes' Bayesian setting, a probability is a number you assign to a proposition. Models, as generally used, are not propositions.
I don't like that one. For any model, you can generally conceive of an infinite number of slightly tweaked, slightly better versions, so that for any particular model, P(this model is the appropriate one) = 0.
The probability that some "random sample" from some set of models will have improved performance?
What aggregated error function to quantify "better"? How was the domain of the model sampled for the error function?
I see an ocean of structural commitments being imposed on the problem - commitments about how you choose to think about it - just to define a "probability of a model".
And after all that, I still don't see a proposition that you're assigning a probability to; I see a model. I could just as well define the probability of my shoe. I could have all sorts of structural commitments about the meaning of "the probability of my shoe". But in the end, that doesn't make my shoe a proposition, nor does it make the "probability of a shoe" I've just defined the same category of thing as the probability of a proposition.
The Map is not the Territory. There is no "true" map. There is no "true" model. The relevant thing for a model is how well it gets you to where you want to go.
It's true that models are maps. It's also true, to recall a George Box quote, that "all models are wrong, but some are useful".
I agree that
...and that, to my mind, supports the notion of the "probability of a model", or, rather, the "probability of this particular model being sufficiently good to get you to where you want to go".
I think it's a fairly practical concept -- if I'm modeling something and I am fitting several models which give me vario...