One might also do, say, a thought experiment with alien civilisations untouched by whites’ hands and unaware of the oppression system.
Even though their supposed oppressor classes are unlikely to look like white males, that doesn't guarantee the absence of platonic toxic whiteness & masculinity.
What #1, #2, and #4 have in common is that they are hard to check experimentally unless you are immersed in the area, and that publishing results which threaten to invalidate the dominant narrative can be difficult.
Indeed.
@Noosphere89’s discussion of ways for people to turn ideologically crazy
As it happens, I had a bit of a back-and-forth with the author in the comments.
It’s usually much easier to bullshit value claims than epistemic claims.
Sure, if we compare the sets of all value claims with all epistemic claims. However, the controversial epistemic claims aren't typical, they're selected for both being difficult to verify and having obvious value implications. Consider the following "factual" claims that are hacking people's brains these days:
It's not clear to me that "putting truth first" is a reliable enough defense for ordinary people in the face of that.
Nah, the weird idea is AI x-risk, something that almost nobody outside of LW-sphere takes seriously, even if some labs pay lip service to it.
I'm surprised that you're surprised. To me you've always been a go-to example of someone exceptionally good at both original seeing and taking weird ideas seriously, which isn't a well-trodden intersection.
We need an epistemic-clarity-win that’s stable at the level of a few dozen world/company leaders.
If you disagree with the premise of “we’re pretty likely to die unless the political situation changes A Lot”, well, it makes sense if you’re worried about the downside risks of the sort of thing I’m advocating for here. We might be political enemies some of the time, sorry about that.
These propositions seem in tension. I think that we're unlikely to die, but agree with you that without an "epistemic-clarity-win" your side won't get its desired policies implemented. Of course, the beauty of asymmetric weapons is that if I'm right and you're wrong, epistemic clarity would reveal that and force you to change your approaches. So, we don't appear to be political enemies, in ways that matter.
general public making bad arguments
My point is that "experts disagree with each other, therefore we're justified in not taking it seriously" is a good argument, and this is what people mainly believe. If they instead offer bad object-level arguments, then sure, dismissing those is fine and proper.
Yoshua Bengio or Geoffrey Hinton do take AI doom seriously, and I agree that their attitude is reasonable (though for different reasons than you would say)
I agree that their attitude is reasonable, conditional on superintelligence being achievable in the foreseeable future. I personally think this is unlikely, but I'm far from certain.
And I think AI is exactly such a case, where conditional on AI doom being wrong, it will be for reasons that the general public mostly won’t know/care to say, and will still give bad arguments against AI doom.
Most people are clueless about AI doom, but they have always been clueless about approximately everything throughout history, and get by through having alternative epistemic strategies of delegating sense-making and decision-making to supposed experts.
Supposed experts clearly don't take AI doom seriously, considering that many of them are doing their best to race as fast as possible, so people don't take it seriously either, an attitude that seems entirely reasonable to me.
Also, you haven’t linked to your comment properly, when I notice the link it goes to the post rather than your comments.
Thank you, fixed.
My core claim here is that most people, most of the time, are going to be terrible critics of your extreme idea. They will say confused, false, or morally awful things to you, no matter what idea you have.
I think that most unpopular extreme ideas have good simple counterarguments. E.g. for Marxism it's that whenever people attempt it, it leads to famines and various extravagant atrocities. Of course, "real Marxism hasn't been tried" is the go-to counter-counterargument, but even if you are a true believer, it should give you pause that it has been very difficult to implement in practice, and it's reasonable for people to be critical by default because of those repeated horrible failures.
I addressed the obvious AI implication elsewhere.
the only divided country left after Germany
China/Taiwan seem to be (slightly) more so these days, after Kim explicitly repudiated the idea of reunification.
It's worth mentioning speedrunning here. When players decide to optimize some aspects of gameplay (e.g. getting to the victory screen as fast as possible), this leads to weird interactions with the apparent ontology of the game.
From one point of view, it doesn't matter what the developers intended (and we can't be completely certain of their intent anyway, cf. the "death of the author"), so any legitimate inputs (those you can make while actually playing, so console commands are excluded) are treated as fair play, up to arbitrary code execution (ACE) - essentially exploiting bugs to reprogram the game on the fly and trigger desired events. This often requires high skill to execute competently, offering opportunities for dedicated competition. While such "gameplay" usually produces a confusing on-screen mess for the uninitiated, many consider "glitched" speedruns legitimate, and hundreds of thousands of people regularly watch them during Games Done Quick charity marathons on Twitch, marveling at what hides "behind the curtain" of beloved games.
However, another approach to speedrunning is to exclude some types of especially game-breaking bugs, in order to approximate the intended playing experience for the competition. Both kinds are popular, as are discussions about which is more legitimate - another way that gaming makes people engage in amateur philosophy, usually without realizing it, producing much confused nonsense in the process. Kind of like actual philosophy, except more amusing and less obscurantist.