Hey, so, I figure this might be a good place to post a slightly on-topic question. I'm currently reading "Scientific Reasoning: The Bayesian Approach" by Howson and Urbach. It seemed like a good place to start learning Bayesian reasoning, although I don't know where the "normal" place to start would be. I'm working through the proofs by hand, making sure I understand each conclusion before moving to the next.

My question is "where do I go next?" What's a good book to follow up with?

Also, after reading this and "0 and 1 Are Not Probabilities," I ran into exactly the cognitive dissonance that Eliezer alluded to: the reason this would upset probability theorists is that "we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1." The material teaches that "P(t) = 1" and "P(a) >= 0" are fundamental axioms of the probability calculus, and these are then used in all succeeding derivations. After re-reading, I came to understand Eliezer's practical disagreement with the theoretical method.
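To make the quoted step concrete, here is a minimal sketch (the joint distribution and variable names are made up for illustration) of marginalizing over a joint probability by adding up the pieces, where the total coming out to 1 is exactly what the P(t) = 1 axiom guarantees:

```python
# Hypothetical joint distribution P(weather, umbrella).
joint = {
    ("rain", "umbrella"): 0.3,
    ("rain", "no_umbrella"): 0.1,
    ("dry", "umbrella"): 0.1,
    ("dry", "no_umbrella"): 0.5,
}

# Marginalize out the second variable: P(weather) = sum over umbrella states.
marginal = {}
for (weather, _), p in joint.items():
    marginal[weather] = marginal.get(weather, 0.0) + p

# Summing *all* the pieces recovers P(t) = 1 -- the axiom the quote leans on.
total = sum(joint.values())
print(marginal)
print(round(total, 10))  # 1.0
```

If the axiom were weakened so that probabilities never reach 1, this bookkeeping step would need rederivation, which is the disruption the quote describes.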

So another question is: has anyone gone through the exercise of re-deriving the probability calculus, perhaps using "0 < P(a) < 1" or something similar, instead of the two previous rules?
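One partial answer suggested in "0 and 1 Are Not Probabilities" itself is not to rederive the calculus but to work in a transformed space such as log-odds, where the open interval (0, 1) maps onto the whole real line and certainty sits at infinity. A minimal sketch (the function name is mine, not from the essay):

```python
import math

def log_odds(p):
    """Map a probability in the open interval (0, 1) to log-odds.

    As p -> 0 the result tends to -infinity; as p -> 1 it tends to
    +infinity, so finite evidence can never reach certainty.
    """
    if not 0.0 < p < 1.0:
        raise ValueError("0 and 1 are excluded in this representation")
    return math.log(p / (1.0 - p))

print(log_odds(0.5))   # 0.0 (even odds)
print(log_odds(0.75))  # log(3), roughly 1.0986
```

This doesn't change the underlying axioms, but it makes Eliezer's practical point visible: in this coordinate system, 0 and 1 simply aren't representable values.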
