It’s usually much easier to bullshit value claims than epistemic claims.
Sure, if we compare the set of all value claims with the set of all epistemic claims. However, the controversial epistemic claims aren't typical; they're selected both for being difficult to verify and for having obvious value implications. Consider the following "factual" claims that are hacking people's brains these days:
It's not clear to me that "putting truth first" is a reliable enough defense for ordinary people in the face of that.
Nah, the weird idea is AI x-risk, something that almost nobody outside of LW-sphere takes seriously, even if some labs pay lip service to it.
I'm surprised that you're surprised. To me you've always been a go-to example of someone exceptionally good at both original seeing and taking weird ideas seriously, which isn't a well-trodden intersection.
We need an epistemic-clarity-win that’s stable at the level of a few dozen world/company leaders.
If you disagree with the premise of “we’re pretty likely to die unless the political situation changes A Lot”, well, it makes sense if you’re worried about the downside risks of the sort of thing I’m advocating for here. We might be political enemies some of the time, sorry about that.
These propositions seem to be in tension. I think that we're unlikely to die, but I agree that without an "epistemic-clarity-win" your side won't get its desired policies implemented. Of course, the beauty of asymmetric weapons is that if I'm right and you're wrong, epistemic clarity would reveal that and force you to change your approach. So we don't appear to be political enemies in ways that matter.
general public making bad arguments
My point is that "experts disagree with each other, therefore we're justified in not taking it seriously" is a good argument, and it's the one most people actually rely on. If they instead offer bad object-level arguments, then sure, dismissing those is fine and proper.
Yoshua Bengio or Geoffrey Hinton do take AI doom seriously, and I agree that their attitude is reasonable (though for different reasons than you would say)
I agree that their attitude is reasonable, conditional on superintelligence being achievable in the foreseeable future. I personally think this is unlikely, but I'm far from certain.
And I think AI is exactly such a case, where conditional on AI doom being wrong, it will be for reasons that the general public mostly won’t know/care to say, and will still give bad arguments against AI doom.
Most people are clueless about AI doom, but they have always been clueless about approximately everything throughout history, and get by through having alternative epistemic strategies of delegating sense-making and decision-making to supposed experts.
Supposed experts clearly don't take AI doom seriously, considering that many of them are doing their best to race as fast as possible; therefore, people don't take it seriously either, an attitude that seems entirely reasonable to me.
Also, you haven’t linked to your comment properly; when I follow the link, it goes to the post rather than your comment.
Thank you, fixed.
My core claim here is that most people, most of the time, are going to be terrible critics of your extreme idea. They will say confused, false, or morally awful things to you, no matter what idea you have.
I think that most unpopular extreme ideas have good simple counterarguments. E.g. for Marxism, it's that whenever people attempt it, it leads to famines and various extravagant atrocities. Of course, "real Marxism hasn't been tried" is the go-to counter-counterargument, but even if you are a true believer, it should give you pause that it has been very difficult to implement in practice, and it's reasonable for people to be critical by default because of those repeated horrible failures.
I addressed the clear AI implication elsewhere.
the only divided country left after Germany
China/Taiwan seem to be (slightly) more so these days, after Kim explicitly repudiated the idea of reunification.
publishing the evidence is prosocial, because it helps people make higher-quality decisions regarding friendship and trade opportunities with Mallory
And by the same token, subsequent punishment would be prosocial too. Why, then, would Alice want to disclaim it? Because, of course, in reality the facts of the matter about whether somebody deserves punishment are rarely unambiguous, so it makes sense for people to hedge. But that's basically wanting to have your cake and eat it too.
The honorable thing for Alice to do would be to weigh the reliability of the evidence she possesses and disclose it only if she thinks it's sufficient to justify the likely punishment that would follow. No amount of nuance in wording and tone can replace this essential consideration.
Even though their supposed oppressor classes are unlikely to look like white males, that doesn't guarantee the absence of platonic toxic whiteness & masculinity.
Indeed.
As it happens, I had a bit of a back-and-forth with the author in the comments.