FrameBenignly comments on Open thread, Dec. 21 - Dec. 27, 2015 - Less Wrong Discussion
Last week there was a gathering of physicists in Oxford to discuss string theory and the philosophy of science.
From the article:
That the Bayesian view is news to so many physicists is itself news to me, and it's very unsettling news. You could say that modern theoretical physics has failed to stay in touch with other areas of science, but you could also argue that the rationalist community has failed to properly reach out and communicate with scientists.
I tried to get a discussion going on this exact subject in my post this week, but there seemed to be little interest. A major weakness of the standard Bayesian inference method is that it assumes a problem has only two possible solutions. Many problems have more than two possible solutions; often the number of possible solutions is unknown, and in some cases the correct solution hasn't been thought of yet. In such instances, confirmation through inductive inference may not be the best way of looking at the problem.
Where did you get this from? Maintaining beliefs over an entire space of possible solutions is a strength of the Bayesian approach. Please don't talk about Bayesian inference after reading a single thing about updating beliefs on whether a coin is fair or not. That's just a simple tutorial example.
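The point about maintaining beliefs over an entire space of hypotheses can be made concrete. Here is a minimal sketch (not from the thread; the function and variable names `bayes_update`, `priors`, and `likelihoods` are illustrative) of Bayes' rule applied to an arbitrary number of mutually exclusive hypotheses:

```python
def bayes_update(priors, likelihoods):
    """Return normalized posteriors P(H_i|E) given priors P(H_i)
    and likelihoods P(E|H_i), for any number of hypotheses."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Five hypotheses with a uniform prior; the evidence favors the first.
priors = [0.2] * 5
likelihoods = [0.5, 0.2, 0.1, 0.1, 0.1]
posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # first hypothesis rises to 0.5; the others fall
```

Nothing here is limited to two hypotheses; the coin-flip example from tutorials is just the N = 2 special case.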
If I have 3 options, A, B, and C, and I'm 40% certain the best option is A, 30% certain the best option is B, and 30% certain the best option is C, would it be correct to say that I've confirmed option A, rather than that my best evidence suggests A? This can be partly corrected for within the standard Bayesian confirmation model, but the problem grows as the number of possibilities increases, to the point where you can't get a good read on your own certainty, or where the number of possibilities is unknown.
I don't understand your question. Is this about maintaining beliefs over hypotheses or decision-making?
I'm arguing that Bayesian confirmation theory as a philosophy was originally conceived as a model with only two possibilities (A and ~A), and that this model was then extrapolated to problems with more than two possibilities. If it had originally been conceived with more than two possibilities, it wouldn't have made sense to use the word "confirmation". So explanations of Bayesian confirmation theory often entail considering theories or decisions in isolation rather than as part of a group.
So if there are 20 possible explanations for a problem, and there is no strong evidence favoring any one of them, then I will have 5% certainty in the average explanation. Unless I am extremely well calibrated, I can't confirm any of them, and if I consider each explanation in isolation from the others, then all of them appear wrong.
It doesn't matter whether we're talking about hypotheses or decision-making.
I'm not sure whether this is true, but it's irrelevant. Bayesian confirmation theory works just fine with any number of hypotheses.
If by "confirm" you mean "assign high probability to, without further evidence", yes. That seems to me to be exactly what you'd want. What is the problem you see here?
You sound confused. The "confirmation" stems from
(source)
So what if p(H) = 1, p(H|A) = 0.4, p(H|B) = 0.3, and p(H|C) = 0.3? The evidence would suggest all are wrong. But I have also determined that A, B, and C are the only possible explanations for H. Clearly there is something wrong with my measurement, but I have no method of correcting for this problem.
H stands for hypothesis. You have three: HA, HB, and HC. Let's say your prior is that they are equally probable, so the unconditional P(HA) = P(HB) = P(HC) = 0.33
Let's also say you saw some evidence E and your posteriors are P(HA|E) = 0.4, P(HB|E) = 0.3, P(HC|E) = 0.3. This means that evidence E confirms HA because P(HA|E) > P(HA). This does not mean that you are required to believe that HA is true or bet your life's savings on it.
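As a quick illustration of the confirmation criterion described above (a hedged sketch; the variable names are illustrative, and the numbers are the ones from the comment): evidence E confirms a hypothesis exactly when its posterior exceeds its prior.

```python
# Uniform prior over three hypotheses, then posteriors after evidence E.
prior = {"HA": 1/3, "HB": 1/3, "HC": 1/3}
posterior = {"HA": 0.4, "HB": 0.3, "HC": 0.3}

# E "confirms" H exactly when P(H|E) > P(H).
confirmed = [h for h in prior if posterior[h] > prior[h]]
print(confirmed)  # ['HA']: 0.4 > 1/3, while 0.3 < 1/3 for HB and HC
```

Note that confirmation in this sense says nothing about whether P(HA|E) is large, only that E raised it.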
That's a really good explanation of part of the problem I was getting at. But calculating that 0.33 prior requires considering the three hypotheses as a group, rather than in isolation from one another.
If you start with inconsistent assumptions, you get inconsistent conclusions. If you believe P(H)=1, P(A&B&C)=1, and P(H|A) etc. are all <1, then you have already made a mistake. Why are you blaming this on Bayesian confirmation theory?
Wait, how would you get P(H) = 1?
Fine. p(H) = 0.5, p(H|A) = 0.2, p(H|B) = 0.15, p(H|C) = 0.15. It's not really relevant to the problem.
This is not true at all.
A large chunk of academics would say that it is. For example, from the paper I was referencing in my post:
That doesn't at all say Bayesian reasoning assumes only two possibilities. It says Bayesian reasoning assumes you know what all the possibilities are.
True, but how often do you see an explanation of Bayesian reasoning in philosophy that uses more than two possibilities?
This is a weird sentence to me. I learned about Bayesian inference through Jaynes' book and surely it doesn't portray that inference as having only two possible solutions.
The other book I know about, Sivia's, doesn't do this either.
You're referring to how it is described in statistics textbooks. I'm talking about confirmation theory as a philosophy.