BrianScurfield comments on Taking Ideas Seriously - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
And in the math papers example, how exactly are you going to do that? Presumably you are going to go through the papers and the criticisms in detail and evaluate the content. And when you do that you are going to think of reasons why one is right and the other wrong. And then probabilities become irrelevant. It's your understanding of the content that will enable you to choose.
Right - but you don't "choose" - you assign probabilities. Rejecting something completely would be bad - because of:
http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/
I don't think anyone is falling into this trap. It sounds like the Popperian version is replacing "true" and "false" by "tentatively true" and "tentatively false."
"Tentatively true" and "tentatively false" sound a lot like probabilities which are not expressed in a format which is compatible with Bayes rule.
It is hard to see how that adds anything - but rather easy to see how it subtracts the ability to quantitatively analyse problems.
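The point about formats compatible with Bayes rule can be sketched numerically. Below is a minimal illustration (my own, with hypothetical numbers, not from the thread) of why rejecting something completely is the trap the linked post describes: in odds form, Bayes rule multiplies prior odds by a likelihood ratio, and a prior of exactly 0 or 1 gives odds of 0 or infinity, which no finite evidence can move.

```python
# Bayes rule in odds form: posterior odds = prior odds * likelihood ratio.
# A prior of exactly 0 never moves, and a prior of exactly 1 is not even
# well-defined here (division by zero) -- "0 and 1 are not probabilities".

def bayes_update(prior, likelihood_ratio):
    """Update a probability on evidence with the given likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = prior / (1.0 - prior)  # blows up as prior -> 1
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

print(bayes_update(0.5, 4.0))  # evidence 4x likelier under the hypothesis -> 0.8
print(bayes_update(0.0, 4.0))  # a prior of exactly 0 stays 0 forever
```

A verbal "tentatively false" has no slot for the likelihood ratio, which is what this quantitative machinery buys you.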
That's what I said.
Edit: That refers to the first sentence only.
Theories are either true or false. The word "tentative" is there as an expression of fallibility. We cannot know if a theory is in fact true: it may contain problems that we do not yet know about. All knowledge is tentative. The word is not intended as a synonym for probability or to convey anything about probabilities.
Observers can put probabilities on the truth of theories. They can do it - and will do it - if you ask them to set odds and prepare to receive bets. Quantifying uncertainty allows it to be measured and processed.
It is true that knowledge is fallible - but some knowledge is more fallible than others - and if you can't measure degrees of uncertainty, you will never develop a quantitative treatment of the subject. Philosophers of science realised this long ago - and developed a useful framework for quantifying uncertainty.
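The connection between set odds and assigned probabilities mentioned above is a direct conversion. A quick sketch (the 3-to-1 figure is purely illustrative):

```python
# An observer who sets odds and takes bets on a theory has implicitly
# assigned it a probability: odds of k-to-1 against correspond to p = 1/(1+k).

def implied_probability(odds_against):
    """Probability implied by fractional odds of odds_against-to-1 against."""
    return 1.0 / (1.0 + odds_against)

print(implied_probability(3.0))  # 3-to-1 against -> probability 0.25
```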
Scurfield missed his chance here. He should have asked when it becomes the case that those bets must be paid off, and offered the services of a Popper adept to make that kind of decision. Of course, the Popperite doesn't rule that one theory is true, he rules that the other theory is refuted.
Short time limits don't mean that agents can't meaningfully assign probabilities to the truth of scientific theories - they just slightly decrease the chances of the theories being proven wrong within the time limit.
What is a time limit? Do actual bets on this sort of thing in Britain stipulate a time limit? As a Yank, I have no idea how betting 'markets' like this actually work.
Prediction markets/betting markets like Intrade or Betfair pretty universally set time limits on their bets. (Browse through Intrade sometime.) This does sometimes require changing the bet/prediction though - from 'the Higgs boson will be found' to 'the Higgs boson will be found by 2020'. Not that this is a bad thing, mind you.
Do you have an answer to that point-that-should-have-been?
Not really. To the extent that we limit attention to theories of the form "at all times, at all places, and for all x: F(x)",
we Bayesians can never "cash in" on a bet that the theory is true - at least not using empirical evidence. All we can do is to continue trying to falsify the theory by experiments at more times, at more places, and for more values of x. As Popper prescribes. Our probabilities that the theory is true grow higher and higher, but they grow more and more slowly, and they can never reach unity.
However, both Bayesians and Popper fans can become pretty certain that such a theory is false - even without checking everywhere, everywhen, and forall x. Popper does not have a monopoly on refutations. Or conjectures either, for that matter.