No, Less Wrong is probably not dead without Cox's theorem, for several reasons.
It might turn out that the way Cox's theorem is wrong is that the requirements it imposes for a minimally-reasonable belief system need strengthening, but in ways that we would regard as reasonable. In that case there would still be a theorem along the lines of "any reasonable way of structuring your beliefs is equivalent to probability theory with Bayesian updates".
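(For concreteness, and roughly speaking, the "Bayesian updates" in question are nothing exotic: they are just the standard rule for revising a degree of belief in a hypothesis H when you observe evidence E, which, under Cox-style consistency assumptions, any system of plausibilities can be rescaled to obey:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}$$

So a "strengthened" Cox's theorem of the kind I'm imagining would still end up sanctioning this rule, just from a slightly different set of requirements.)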
Or it might turn out that there are non-probabilistic belief structures that are good, but that they can be approximated arbitrarily closely with probabilistic ones. In that case, again, the LW approach would be fine.
Or it might turn out that probabilistic belief structures are best so long as the actual world isn't too crazy. (Maybe there are possible worlds where some malign entity is manipulating the evidence you get to see for particular goals, and in some such worlds probabilistic belief structures are bad somehow.) In that case, we might know that either the LW approach is fine or the world is weird in a way we don't have any good way of dealing with.
Alternatively, it might happen that Cox's theorem is wronger than that; that there are human-compatible belief structures that are, in plausible actual worlds, genuinely substantially different from probabilities-and-Bayesian-updates. Would LW be dead then? Not necessarily.
It might turn out that all we have is an existence theorem and we have no idea what those other belief structures might be. Until such time as we figure them out, probability-and-Bayes would still be the best we know how to do. (In this case I would expect at least some LessWrongers to be working excitedly on trying to figure out what other belief structures might work well.)
It might turn out that for some reason the non-probabilistic belief structures aren't interesting to us. (E.g., maybe the exceptions all amount, in some sense, to giving up and saying "I dunno" to everything.) In that case, again, we might need to adjust our ideas a bit but I would expect most of them to survive.
Suppose none of those things is the case: Cox's theorem is badly, badly wrong; there are other quite different ways in which beliefs can be organized and updated, that are feasible for humans to practice and don't look at all like probabilities+Bayes, and that so far as we can see work just as well or better. That would be super-exciting news. It might require a lot of revision of ideas that have been taken for granted here. I would expect LessWrongers to be working excitedly on figuring out what things need how much revision (or discarding completely). The final result might be that LessWrong is dead, at least in the sense that the ways of thinking that have been common here all turn out to be very badly suboptimal and the right thing is to all convert to Mormonism or something. But I think a much more likely outcome in this scenario is that we find an actually-correct analogue of Cox's theorem, which tells us different things about what sorts of thinking might be reasonable, and it still involves (for instance) quantifying our degrees of belief somehow, and updating them in the light of new evidence, and applying logical reasoning, and being aware of our own fallibility. We might need to change a lot of things, but it seems pretty likely to me that the community would survive and still be recognizably Less Wrong.
Let me put it all less precisely but more pithily: Imagine some fundamental upheaval in our understanding of mathematics and/or physics. ZF set theory is inconsistent! The ultimate structure of the physical world is quite unlike the GR-and-QM muddle we're currently working with! This would be exciting but it wouldn't make bridges fall down or computers stop computing, and people interested in applying mathematics to reality would go on doing so in something like the same ways as at present. An error in Cox's theorem would be no more radical than that.
Unfortunately some messengers are idiots (we have already established that most likely either Yudkowsky or Loosemore is an idiot, in this particular scenario). Saying that someone is an idiot isn't shooting the messenger in any culpable sense if in fact they are an idiot, nor if the person making the accusation has reasonable grounds for thinking they are.
So I guess maybe we actually have to look at the substance of Loosemore's argument with Yudkowsky. So far as I can make out, it goes like this: an AI smart enough to be dangerous would also be smart enough to notice when its actual behaviour conflicts with the high-level description of its goal (something like "doing things that suit its human creators"), and would correct course accordingly; an AI that failed to notice or fix such a conflict wouldn't be intelligent enough to pose a serious threat in the first place; therefore the standard doomsday scenarios are incoherent.
The usual response to this by LW-ish people is along the lines of "you're assuming that a hypothetical AI, on finding an inconsistency between its actual values and the high-level description of 'doing things that suit its human creators', would realise that its actual values are crazy and adjust them to match that high-level description better; but that is no more inevitable than that humans, on finding inconsistencies between our actual values and the high-level description of 'doing things that lead us to have more surviving descendants', would abandon our actual values in order to better serve the values of Evolution". To me this seems sufficient to establish that Loosemore has not shown that a hypothetical AI couldn't behave in clearly-intelligent ways that mostly work towards a given broad goal, but in some cases diverge greatly from it.
There's clearly more to be said here, but this comment is already rather long, so I'll skip straight to my conclusion: maybe there's some version of Loosemore's argument that's salvageable as an argument against Yudkowsky-type positions in general, but it's not clear to me that there is, and while I personally wouldn't have been nearly as rude as Yudkowsky was I think it's very much not clear that Yudkowsky was wrong. (With, again, the understanding that "idiot" here doesn't mean e.g. "person scoring very badly in IQ tests" but something like "person who obstinately fails to grasp a fundamental point of the topic under discussion".)
I don't think it's indefensible to say that Yudkowsky was shooting the messenger in this case. But, please note, your original comment was not about what Yudkowsky would do; it was about what the LW community in general would do. What did the LW community in general think about Yudkowsky's response to Loosemore? They downvoted it to hell, and several of them continued to discuss things with Loosemore.
One rather prominent LWer (Kaj Sotala, who I think is an admin or a moderator or something of the kind here) wrote a lengthy post in which he opined that Loosemore (in the same paper that was being discussed when Yudkowsky called Loosemore an idiot) had an important point. (I think, though, that he would agree with me that Loosemore has not demonstrated that Yudkowsky-type nightmare scenarios are anything like impossible, contra Loosemore's claim in that paper that "this entire class of doomsday scenarios is found to be logically incoherent at such a fundamental level that they can be dismissed", which I think is the key question here. Sotala does agree with Loosemore that some concrete doomsday scenarios are very implausible.) He made a linkpost for that here on LW. How did the community respond? Well, that post is at +23, and there are a bunch of comments discussing it in what seem to me like constructive terms.
So, I reiterate: it seems to me that you're making a large and unjustified leap from "Yudkowsky called Loosemore an idiot" to "LW should be expected to shoot the messenger". Y and L had a history of repeatedly-unproductive interactions; L's paper pretty much called Y an idiot anyway (by implication, not as frankly as Y called L an idiot); there's a pretty decent case to be made that L was an idiot in the relevant sense; other LWers did not shoot the messenger even when EY did, and when his objections were brought up again a few years later there was no acrimony.
[EDITED to add:] And of course this is only one case; even if Loosemore were a 100% typical example of someone making an objection to EY's arguments, and even if we were interested only in EY's behaviour and not anyone else's, the inference from "EY was obnoxious to RL" to "EY generally shoots the messenger" is still pretty shaky.