No. https://www.lesswrong.com/posts/bAQDzke3TKfQh6mvZ/halpern-s-paper-a-refutation-of-cox-s-theorem
The general "method of rationality" does not require any specific theorem to be true. Rationality will work so long as the universe has causality. All rationality is saying that, given actions an agent can take have some causal effect on the outcome the universe will take, the agent can estimate the optimal outcome for the agent's goals. And the agent should do that by definition as this is what "winning" is.
We have demonstrated many such agents today, from simple control systems to cutting-edge deep-learning game players. And we as humans should aspire to act as rationally as we can.
This is where non-mainstream actions come into play: for example, cryonics is rational, taking risky drugs that may slow aging is rational, and so on, because the case for them is strong enough that any rational estimate of the outcomes of your actions says you should be doing these things. Another bit of non-mainstream thought is that we don't have to be certain of an outcome to pursue it. For example, if cryonics has a 1% chance of working, mainstream thought says to treat the 99% case of it failing as THE expected outcome, declare that it "doesn't work", and not do it. But a 1% chance of not being dead is worth the expense for most people.
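To spell out the arithmetic behind that last point: the rational move is to decide on expected value, not on whichever single outcome is most probable. A back-of-the-envelope sketch, where the cost and payoff figures are hypothetical placeholders rather than real estimates:

```python
# Back-of-the-envelope expected-value check for a low-probability,
# high-payoff intervention. All figures are hypothetical placeholders.

p_success = 0.01                 # assumed 1% chance it works
value_if_success = 10_000_000    # placeholder value placed on the payoff
cost = 50_000                    # placeholder up-front cost

expected_value = p_success * value_if_success - cost
print(expected_value)  # 0.01 * 10_000_000 - 50_000 = 50_000 > 0, so worth doing
```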
No theorems are required, only that the laws of physics allow us to rationally compute what to do. [Note that religious beliefs state the opposite of this. For example, if an invisible being were pulling the strings of reality, then merely "thinking" in a way that being doesn't like might cause it to give you bad outcomes. Mainstream religions contain various "hostile to rationality" memes: some state that you should stop thinking, others that you should "take it on faith" that everything your local church leader says is factual, and so on.]
The general “method of rationality” does not require any specific theorem to be true. Rationality will work so long as the universe has causality. All rationality says is that, given that an agent’s actions have some causal effect on the outcome the universe produces, the agent can estimate which actions best serve its goals. And the agent should act on that estimate by definition, since this is what “winning” is.
And the agent can learn to do that better. In a universe where intuition and practical experience beat explicit reasoning, there is no point in teaching explicit reasoning.
No, Less Wrong is probably not dead without Cox's theorem, for several reasons.
It might turn out that the way Cox's theorem is wrong is that the requirements it imposes for a minimally-reasonable belief system need strengthening, but in ways that we would regard as reasonable. In that case there would still be a theorem along the lines of "any reasonable way of structuring your beliefs is equivalent to probability theory with Bayesian updates".
Or it might turn out that there are non-probabilistic belief structures that are good, but that they can be approximated arbitrarily closely with probabilistic ones. In that case, again, the LW approach would be fine.
Or it might turn out probabilistic belief structures are best so long as the actual world isn't too crazy. (Maybe there are possible worlds where some malign entity is manipulating the evidence you get to see for particular goals, and in some such worlds probabilistic belief structures are bad somehow.) In that case, we might know that either the LW approach is fine or the world is weird in a way we don't have any good way of dealing with.
Alternatively, it might happen that Cox's theorem is wronger than that; that there are human-compatible belief structures that are, in plausible actual worlds, genuinely substantially different from probabilities-and-Bayesian-updates. Would LW be dead then? Not necessarily.
It might turn out that all we have is an existence theorem and we have no idea what those other belief structures might be. Until such time as we figure them out, probability-and-Bayes would still be the best we know how to do. (In this case I would expect at least some LessWrongers to be working excitedly on trying to figure out what other belief structures might work well.)
It might turn out that for some reason the non-probabilistic belief structures aren't interesting to us. (E.g., maybe there are exceptions that in some sense amount to giving up and saying "I dunno" to everything.) In that case, again, we might need to adjust our ideas a bit but I would expect most of them to survive.
Suppose none of those things is the case: Cox's theorem is badly, badly wrong; there are other quite different ways in which beliefs can be organized and updated, that are feasible for humans to practice and don't look at all like probabilities+Bayes, and that so far as we can see work just as well or better. That would be super-exciting news. It might require a lot of revision of ideas that have been taken for granted here. I would expect LessWrongers to be working excitedly on figuring out what things need how much revision (or discarding completely).

The final result might be that LessWrong is dead, at least in the sense that the ways of thinking that have been common here all turn out to be very badly suboptimal and the right thing is to all convert to Mormonism or something. But I think a much more likely outcome in this scenario is that we find an actually-correct analogue of Cox's theorem, which tells us different things about what sorts of thinking might be reasonable, and it still involves (for instance) quantifying our degrees of belief somehow, and updating them in the light of new evidence, and applying logical reasoning, and being aware of our own fallibility. We might need to change a lot of things, but it seems pretty likely to me that the community would survive and still be recognizably Less Wrong.
Let me put it all less precisely but more pithily: Imagine some fundamental upheaval in our understanding of mathematics and/or physics. ZF set theory is inconsistent! The ultimate structure of the physical world is quite unlike the GR-and-QM muddle we're currently working with! This would be exciting but it wouldn't make bridges fall down or computers stop computing, and people interested in applying mathematics to reality would go on doing so in something like the same ways as at present. Errors in Cox's theorem are definitely no more radical than that.
Or succinctly: to be the "least wrong" you need to be using the best available, measured assessment of projected outcomes. All available tools are approximations anyway, and the best tools right now are 'black box' deep learning methods, for which we do not know exactly how they arrive at their answers.
This isn't a religion; it is simply what a brain, or any other known form of intelligence, artificial or natural, does.
I would expect LessWrongers to be working excitedly on figuring out what things need how much revision (or discarding completely)
I'd expect them to shoot the messenger!
Yes, too-strong conventions against nastiness are bad. It doesn't look to me as if we have those here, any more than it looks to me as if there's much of a shooting-the-messenger culture.
I've been asking you for examples to support your claims. I'll give a few to support mine. I'm not (at least, not deliberately) cherry-picking; I'm trying to think of cases where something has come along that someone could with a straight face argue is something like a refutation of something important to LW:
The larger point here is that the link between "Eliezer Yudkowsky called Richard Loosemore an idiot" and "People on Less Wrong should be expected to shoot the messenger if someone turns up saying that something many of them believe is false" is incredibly tenuous.
I mean, to make that an actual argument you'd need something like the following steps.
I've been pointing out that the step from the first of those to the second is one that requires some justification, but the same is true of all the others.
So, anyway: you're talking as if you'd said "EY's comment was an ad hominem attack" and I'd said "No it wasn't", but actually neither of those is right. You just quoted EY's comment and implied that it justified your opinion about the LW population generally; and what I said about it wasn't that it wasn't ad hominem. It was a perso...
Most of what's in CFAR's handbook doesn't depend on Cox's theorem. Very little that happened on LessWrong in the last few years is affected in any way. Most of what we talk about isn't derived bottom-up from probability theory. Even for parts like credence calibration that very much are derived from it, Cox's theorem being valid or not has little effect on the value of a practice like Tetlock-style forecasting.
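For example, the value of calibration and Tetlock-style forecasting can be assessed entirely empirically, without leaning on any foundational theorem: you just score stated credences against what actually happened. A minimal sketch with made-up forecasts and outcomes, using the Brier score as one common scoring rule:

```python
# Score a forecaster's stated credences against observed outcomes with the
# Brier score (mean squared error; lower is better). Data are made up.

forecasts = [0.9, 0.7, 0.2, 0.6]   # stated probabilities that each event occurs
outcomes  = [1,   1,   0,   0]     # what actually happened (1 = occurred)

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(round(brier, 3))  # 0.125
```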
I thought johnswentworth's comment on one of your earlier posts, along with an ocean of evidence from experience, was adequate to make me feel that our current basic conception of probability is totally fine and not worth my time to keep thinking about.
FWIW, Van Horn says:
"There has been much unnecessary controversy over Cox’s Theorem due to differing implicit assumptions as to the nature of its plausibility function. Halpern [11, 12] claims to demonstrate a counterexample to Cox’s Theorem by examining a finite problem domain, but his argument presumes that there is a different plausibility function for every problem domain."
Cox’s theorem seems to be pretty important to you guys, but it’s looking kind of weak right now given Halpern’s counterexample, so I was wondering: what implications does Cox’s theorem not being true have for LessWrong? There seem to be very few discussions on LessWrong about alternative formulations for fixing probability theory as extended logic in light of Halpern’s paper. I find this quite surprising given how much you all talk about Jaynes-Cox probability theory. I asked a question about it myself, but to no avail: https://www.lesswrong.com/posts/x7NyhgenYe4zAQ4Kc/has-van-horn-fixed-cox-s-theorem
Thanks!