Too many people attempt to use logic when they should be using probabilities - in fact, when they are already using probabilities but don't acknowledge it. Here are some of the major fallacies caused by confusing logic with probability in this way:
- "It's not certain" does not mean "It's impossible" (and vice versa).
- "We don't know" absolutely does not imply "It's impossible".
- "There is evidence against it" doesn't mean much on its own.
- Being impossible *in a certain model* does not mean being impossible: it just changes the issue to the probability of the model (see the sketch below).
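
To put a number on that last point, here is a minimal sketch (the probabilities below are made up purely for illustration) of how the law of total probability turns "impossible in the model" into an ordinary, nonzero probability once the model itself is uncertain.

```python
# Made-up numbers for illustration only.
p_model = 0.95              # P(M): how much we trust the model
p_x_given_model = 0.0       # P(X | M): within the model, X is impossible
p_x_given_not_model = 0.30  # P(X | not M): if the model is wrong, X is quite possible

# Law of total probability: P(X) = P(X|M)*P(M) + P(X|not M)*(1 - P(M))
p_x = p_x_given_model * p_model + p_x_given_not_model * (1 - p_model)
print(round(p_x, 3))  # 0.015 -- "impossible in the model" still leaves X at about 1.5%
```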
I think that's good too - Jaynes advocated including a "something else that I didn't think of" hypothesis in your hypothesis space, to avoid accepting a hypothesis too strongly when all you've done is eliminate the alternatives you happened to consider.
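
A minimal sketch of what that catch-all buys you. The hypotheses and priors are invented for illustration; the point is that conditioning on "the alternatives I listed are false" pushes the surviving hypothesis to certainty unless "something else" was in the space from the start.

```python
def condition_on_elimination(priors, ruled_out):
    # Set the probability of ruled-out hypotheses to zero and renormalize,
    # i.e. update on the evidence "these alternatives are false".
    kept = {h: p for h, p in priors.items() if h not in ruled_out}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

# Made-up priors, purely for illustration.
closed_list = {"A": 1/3, "B": 1/3, "C": 1/3}
print(condition_on_elimination(closed_list, {"B", "C"}))
# {'A': 1.0} -- eliminating the alternatives we listed makes A look certain.

open_list = {"A": 0.30, "B": 0.30, "C": 0.30, "something else I didn't think of": 0.10}
print(condition_on_elimination(open_list, {"B", "C"}))
# A ends up around 0.75 and the catch-all around 0.25 -- favoured, but far from certain.
```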
"Is accurate" isn't much of a proposition in itself, as it leaves out the level of accuracy.
Probability is a property of a proposition: propositions are true or false. Level of accuracy is a property of a model: models are more or less accurate.
Maybe "Is accurate enough that it doesn't change our answer by an unacceptable amount"? The level of accuracy we want depends on context.
How would you measure the accuracy of a model, other than by its probability of giving accurate answers? "Accurate" depends on what margin of error you accept, or you can define it with increasing penalties for increased divergence from reality.
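
Both options can be sketched directly: a margin-of-error definition counts how often the model lands within the accepted tolerance, while a penalty such as squared error keeps punishing larger misses. The data and the two toy models below are made up, chosen so the two measures disagree about which model is "more accurate".

```python
def within_margin_accuracy(predictions, observations, margin):
    # Fraction of predictions that land within the accepted margin of error.
    hits = sum(abs(p - o) <= margin for p, o in zip(predictions, observations))
    return hits / len(predictions)

def mean_squared_error(predictions, observations):
    # A penalty that keeps growing as predictions diverge further from reality.
    return sum((p - o) ** 2 for p, o in zip(predictions, observations)) / len(predictions)

# Toy data, invented for illustration.
observations = [1.0, 2.0, 3.0, 4.0]
model_a = [1.4, 2.4, 3.4, 4.4]   # consistently off by 0.4
model_b = [1.0, 2.0, 3.0, 7.0]   # exact three times, then one large miss

for name, preds in [("A", model_a), ("B", model_b)]:
    print(name,
          "within 0.3:", within_margin_accuracy(preds, observations, 0.3),
          "MSE:", round(mean_squared_error(preds, observations), 3))
```

Under the margin-of-error count, model B looks better; under the penalty, model A does - so calling one model "more accurate" already presupposes a choice of measure.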