No, Less Wrong is probably not dead without Cox's theorem, for several reasons.
It might turn out that Cox's theorem is wrong only in that the requirements it imposes on a minimally reasonable belief system need strengthening, but in ways that we would still regard as reasonable. In that case there would still be a theorem along the lines of "any reasonable way of structuring your beliefs is equivalent to probability theory with Bayesian updates".
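(For concreteness, the structure such a theorem is usually taken to single out is, roughly, degrees of belief that obey the ordinary rules of probability and get revised by Bayes' rule as evidence comes in. This is just a sketch of the usual gloss, not the precise statement of Cox's theorem:)

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}$$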
Or it might turn out that there are non-probabilistic belief structures that are good, but that they can be approximated arbitrarily closely with probabilistic ones. In that case, again, the LW approach would be fine.
Or it might turn out that probabilistic belief structures are best so long as the actual world isn't too crazy. (Maybe there are possible worlds where some malign entity is manipulating the evidence you get to see for particular goals, and in some such worlds probabilistic belief structures are bad somehow.) In that case, we might know that either the LW approach is fine or the world is weird in a way we don't have any good way of dealing with.
Alternatively, it might happen that Cox's theorem is wronger than that; that there are human-compatible belief structures that are, in plausible actual worlds, genuinely substantially different from probabilities-and-Bayesian-updates. Would LW be dead then? Not necessarily.
It might turn out that all we have is an existence theorem and we have no idea what those other belief structures might be. Until such time as we figure them out, probability-and-Bayes would still be the best we know how to do. (In this case I would expect at least some LessWrongers to be working excitedly on trying to figure out what other belief structures might work well.)
It might turn out that for some reason the non-probabilistic belief structures aren't interesting to us. (E.g., maybe the exceptions in some sense amount to giving up and saying "I dunno" to everything.) In that case, again, we might need to adjust our ideas a bit but I would expect most of them to survive.
Suppose none of those things is the case: Cox's theorem is badly, badly wrong; there are other, quite different ways in which beliefs can be organized and updated that are feasible for humans to practice, don't look at all like probabilities+Bayes, and so far as we can see work just as well or better.

That would be super-exciting news. It might require a lot of revision of ideas that have been taken for granted here. I would expect LessWrongers to be working excitedly on figuring out what things need how much revision (or discarding completely). The final result might be that LessWrong is dead, at least in the sense that the ways of thinking that have been common here all turn out to be very badly suboptimal and the right thing is for everyone to convert to Mormonism or something. But I think a much more likely outcome in this scenario is that we find an actually-correct analogue of Cox's theorem, which tells us different things about what sorts of thinking might be reasonable, and it still involves (for instance) quantifying our degrees of belief somehow, updating them in the light of new evidence, applying logical reasoning, and being aware of our own fallibility. We might need to change a lot of things, but it seems pretty likely to me that the community would survive and still be recognizably Less Wrong.
Let me put it all less precisely but more pithily: Imagine some fundamental upheaval in our understanding of mathematics and/or physics. ZF set theory is inconsistent! The ultimate structure of the physical world is quite unlike the GR-and-QM muddle we're currently working with! This would be exciting but it wouldn't make bridges fall down or computers stop computing, and people interested in applying mathematics to reality would go on doing so in something like the same ways as at present. Errors in Cox's theorem are definitely no more radical than that.
I entirely agree that it's possible that someone might come along with something that is in fact a refutation of the idea that a reasonable set of requirements for rational thinking implies doing something close to probability-plus-Bayesian-updating, but that some people who are attached to that idea don't see it as a refutation.
I'm not sure whether you think that I'm denying that (and that I'm arguing that if someone comes along with something that is in fact a refutation, everyone on LW will necessarily recognize it as such), or whether you think it's an issue that hasn't occurred to me; neither is the case. But my guess -- which is only a guess, and I'm not sure what concrete evidence one could possibly have for it -- is that in most such scenarios at least some LWers would be (1) interested and (2) not dismissive.
I guess we could get some evidence by looking at how similar things have been treated here. The difficulty is that so far as I can tell there hasn't been anything that quite matches. So e.g. there's this business about Halpern's counterexample to Cox; this seems to me like it's a technical issue, to be addressed by tweaking the details of the hypotheses, and the counterexample is rather far removed from the realities we care about. The reaction here has been much more "meh" than "kill the heretic", so far as I can tell. There's the fact that some bits of the heuristics-and-biases stuff that e.g. the Sequences talk a lot about now seem doubtful because it turns out that psychology is hard and lots of studies are wrong (or, in some cases, outright fraudulent); but I don't think much of importance hangs on exactly what cognitive biases humans have, and in any case this is a thing that some LW types have written about, in what doesn't look to me at all like a shoot-the-messenger sort of way.
Maybe you have a few concrete examples of messenger-shooting that are better explained as hostile reaction to evidence of being wrong than as hostile reaction to actual attack? (The qualification is because if you come here and say "hahaha, you're all morons; here's my refutation of one of your core ideas" then, indeed, you will likely get a hostile response, but that's not messenger-shooting as I would understand it.)
I heartily agree that having epistemic double standards is very bad. I have the impression that your comment is intended as an accusation of epistemic double standards, but I don't know whom you're accusing of exactly what epistemic double standards. Care to be more specific?