Bayesian view of scientific virtues

Discuss the wikitag on this page. Here is the place to ask questions and propose changes.

I feel like this highlights a property I have come to suspect, and which I think may be under-emphasized: Bayesian reasoning does not tell you how probable your hypothesis is. It tells you how probable it is compared to the other options you've put on the table. Thinking of a new explanation can radically change things. ... It looks like https://arbital.com/p/227/ has good information relevant to this.

I wrote this out for myself in an attempt to fully grasp this; maybe someone else will find it useful:

You have two theories, A and B. A is more complex than B, but makes sharper/more precise predictions about its observables. That is, given a test whose result is either positive or negative (true or false), we require that P(+ | A) > P(+ | B).

Say that P(+ | A) : P(+ | B) = 10 : 1, a favorable likelihood ratio.

Then each positive test result gives 10 : 1 odds in favor of theory A over theory B. You might initially penalize A for its algorithmic complexity and assign it prior odds of 1 : 10^5; i.e. you think it is borderline absurd.

But if you get 5 consecutive positive tests, your posterior odds become 1 : 1, meaning your initial odds estimate was grossly wrong. In fact, after 5 more consecutive positive tests, it is theory B which should be considered absurd.

Of course, in real problems the favorable likelihood ratio could be as low as 1.1 : 1, and your prior odds may not be as extreme; maybe 1 : 100 against. Then you'd need about 50 updates before your posterior odds reach roughly 1 : 1, at which point you should seriously question the validity of your prior odds. After another 50 updates, you're essentially fully convinced that the contending new theory is much better than the original.
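To make the arithmetic above concrete, here is a minimal Python sketch; the function name and the specific numbers are just the ones from this comment, not anything from the guide.

```python
import math

def posterior_odds(prior_odds, likelihood_ratio, n_tests):
    """Odds for A over B after n_tests positive results,
    each result multiplying the odds by the likelihood ratio once."""
    return prior_odds * likelihood_ratio ** n_tests

# Case 1: likelihood ratio 10 : 1, prior odds 1 : 10^5 against A.
print(posterior_odds(1e-5, 10, 5))     # ~1.0  -> even odds after 5 positive tests
print(posterior_odds(1e-5, 10, 10))    # ~1e5  -> now theory B looks absurd

# Case 2: likelihood ratio 1.1 : 1, prior odds 1 : 100 against A.
print(math.log(100, 1.1))              # ~48.3 -> roughly 50 positive tests to reach even odds
print(posterior_odds(0.01, 1.1, 100))  # ~138  -> strongly favors A after ~100 tests
```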

Does "sure" mean 100% confidence? If so, is this a correct statement?

Or would it be more correct to say:

- we're extraordinarily confident that Newton's gravitation is close to correct,
- we're extraordinarily confident that Einstein's gravitation is even closer,
- we're mildly confident that we will find no closer theories, though one alternative to explaining dark matter would be modified gravitation, so we're considerably less confident than we would be if there were no known evidence suggesting inaccuracies in Einstein's gravitation, by a factor of P(Einstein | DarkMatter).

This UI could perhaps do with a flag meaning something like "this bit of writing is particularly meritorious, thought-inspiring, smile-creating: strive to retain in future edits if possible".

More explanation of how to calculate average velocity?
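For what it's worth, a minimal sketch of what I assume is meant, i.e. average velocity as total displacement divided by elapsed time (the names and numbers below are hypothetical, not from the guide):

```python
def average_velocity(displacement, elapsed_time):
    """Average velocity over an interval: displacement / elapsed time."""
    return displacement / elapsed_time

print(average_velocity(10.0, 4.0))  # 2.5 distance units per time unit
```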

The Grek/Thag and Galileo/Aristotle dialogues are both great, but I found it a bit jarring when the prose itself would shift between caveperson-speak and the style used in the rest of the essay.

Also, the short section on Experimentation is kind of anticlimactic as a conclusion to the guide.

"speed have been" -> "speed will have been" ?