No, Less Wrong is probably not dead without Cox's theorem, for several reasons.
It might turn out that the way Cox's theorem is wrong is that the requirements it imposes on a minimally reasonable belief system need strengthening, but in ways that we would still regard as reasonable. In that case there would still be a theorem along the lines of "any reasonable way of structuring your beliefs is equivalent to probability theory with Bayesian updates".
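(For concreteness: the "probability theory with Bayesian updates" that Cox-style theorems single out amounts, roughly, to requiring that degrees of belief behave like probabilities and get revised by conditionalization, i.e. by the standard Bayes' rule

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

where $H$ is a hypothesis and $E$ is newly observed evidence. Nothing here is specific to the hypothetical strengthened theorem; it's just the structure any such theorem would be vindicating.)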
Or it might turn out that there are non-probabilistic belief structures that are good, but that they can be approximated arbitrarily closely with probabilistic ones. In that case, again, the LW approach would be fine.
Or it might turn out that probabilistic belief structures are best so long as the actual world isn't too crazy. (Maybe there are possible worlds where some malign entity is manipulating the evidence you get to see in pursuit of particular goals, and in some such worlds probabilistic belief structures are bad somehow.) In that case, we might know that either the LW approach is fine or the world is weird in a way we don't have any good way of dealing with.
Alternatively, it might happen that Cox's theorem is wronger than that; that there are human-compatible belief structures that are, in plausible actual worlds, genuinely substantially different from probabilities-and-Bayesian-updates. Would LW be dead then? Not necessarily.
It might turn out that all we have is an existence theorem and we have no idea what those other belief structures might be. Until such time as we figure them out, probability-and-Bayes would still be the best we know how to do. (In this case I would expect at least some LessWrongers to be working excitedly on trying to figure out what other belief structures might work well.)
It might turn out that for some reason the non-probabilistic belief structures aren't interesting to us. (E.g., maybe the only exceptions are ones that in some sense amount to giving up and saying "I dunno" to everything.) In that case, again, we might need to adjust our ideas a bit, but I would expect most of them to survive.
Suppose none of those things is the case: Cox's theorem is badly, badly wrong; there are other quite different ways in which beliefs can be organized and updated, which are feasible for humans to practice, don't look at all like probabilities+Bayes, and so far as we can see work just as well or better. That would be super-exciting news. It might require a lot of revision of ideas that have been taken for granted here, and I would expect LessWrongers to be working excitedly on figuring out which things need how much revision (or discarding completely).
The final result might be that LessWrong is dead, at least in the sense that the ways of thinking that have been common here all turn out to be very badly suboptimal and the right thing is for everyone to convert to Mormonism or something. But I think a much more likely outcome in this scenario is that we find an actually-correct analogue of Cox's theorem, which tells us different things about what sorts of thinking might be reasonable, and it still involves (for instance) quantifying our degrees of belief somehow, updating them in the light of new evidence, applying logical reasoning, and being aware of our own fallibility. We might need to change a lot of things, but it seems pretty likely to me that the community would survive and still be recognizably Less Wrong.
Let me put it all less precisely but more pithily: imagine some fundamental upheaval in our understanding of mathematics and/or physics. ZF set theory is inconsistent! The ultimate structure of the physical world is quite unlike the GR-and-QM muddle we're currently working with! This would be exciting, but it wouldn't make bridges fall down or computers stop computing, and people interested in applying mathematics to reality would go on doing so in much the same ways as at present. An error in Cox's theorem would be no more radical than that.
I'm not sure which of two arguments private_messaging is making, but I think both are wrong.
Argument 1. "Yudkowsky et al think many-worlds interpretations are simpler than collapse interpretations, but actually collapse interpretations are simpler because unlike many-worlds interpretations they don't have the extra cost of identifying which branch you're on."
I think this one is wrong because that cost is present with collapse interpretations too; if you're trying to explain your observations via a model of MWI, your explanation needs to account for what branch you're in, and if you're trying to explain them via a model of a "collapse" interpretation of QM, it instead needs to account for the random choices of measurement results. The information you need to account for is exactly the same in the two cases.
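To spell that out in description-length terms (my gloss, not anything private_messaging actually wrote): if your observations are a sequence of measurement outcomes $o_1, \dots, o_n$, then an MWI-based explanation has to specify which branch you're on, which by the Born rule costs about $-\log_2 P(o_1, \dots, o_n)$ bits, while a collapse-based explanation has to specify the outcomes of the random collapses, which is the same data at the same cost:

$$\underbrace{-\log_2 P(\text{your branch})}_{\text{MWI bookkeeping}} \;=\; \underbrace{-\log_2 P(o_1, \dots, o_n)}_{\text{collapse bookkeeping}}$$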
So maybe instead the argument is more like this:
Argument 2. "Yudkowsky et al think many-worlds interpretations are simpler than collapse interpretations, because they are 'charging' collapse interpretations for the cost of identifying random measurement results. But that's wrong because the same costs are present in MW interpretations."
I think this one is wrong because that isn't why Yudkowsky et al think MW interpretations are simpler. They think MW interpretations are simpler because a "collapse" interpretation needs to do the same computation as an MW interpretation and also actually make things collapse. I am not 100% sure that this is actually right: it could conceivably turn out that, as far as explaining human observations of quantum phenomena goes, you really do need some notion more or less equivalent to "Everett branch", and you need to keep track of branches in your explanation, and the extra bookkeeping with an MW model of the underlying physics is just as bad as the extra model-code with a collapse model of the underlying physics. But if it's wrong, I don't think it's wrong for private_messaging's reasons.
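Stated loosely in minimum-description-length terms (again my formulation, not necessarily Yudkowsky's): if $K(\cdot)$ is the length of the shortest program implementing a model, the claim being made is

$$K(\text{MW model}) \approx K(\text{wavefunction dynamics}) \;<\; K(\text{wavefunction dynamics}) + K(\text{collapse mechanism}) \approx K(\text{collapse model}),$$

with the branch/outcome bookkeeping cost from above identical on both sides and therefore cancelling out of the comparison. The caveat in the previous paragraph is precisely the worry that the bookkeeping might not in fact cancel so cleanly.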
But, still, private_messaging's argument is an interesting one, and it's terrible to call him a troll for making it.
... Except that no one did call him a troll for making that argument.
What actually happened when he made that argument was that various people politely disagreed and offered counterarguments. The "consistent trolling" remark was made somewhere else entirely, in a quite different context: private_messaging had been found to have something like five sockpuppets on LW and was using them to post comments agreeing with one another. The user who made the remark (that was wedrifid, by the way, not Yudkowsky; I'm not sure why you call them "the usual shooter") was saying, roughly: having sockpuppets as such isn't so bad, and private_messaging was the user's second account and not really a problem (it was also super-trollish, but that's a separate issue), but the subsequent sockpuppets were created just to abuse the system, and that's not acceptable.
Well, OK. But, still, wedrifid called private_messaging a troll. Was that unreasonable? Note that even a troll can say correct and/or interesting things sometimes; trolling is precisely a matter of how you say them. So here are a few comments from private_messaging; judge for yourself whether there's anything trollish about them.
Here, here, and here.
I dunno, seems a bit trollish to me. Again, not because it's necessarily wrong but because it's needlessly confrontational; private_messaging was rather fond of saying "X is wrong and stupid and you people are idiots for thinking it" when "X is wrong" would have sufficed.