When you sit down to do the exercise and realize legitimate arguments (not merely ad hoc arguments) against your own views, you're overcoming your confirmation bias (default) on that issue for the first time.
That's not obvious to me. I'd expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding to like a particular politician or organization in the first place, and would probably, having made that decision, try to remain aware that opposing evidence exists.
Nonsensical ad hoc arguments are more useful than no argument whatsoever; the former at least have the quality of provoking thought.
I'm less optimistic. While nonsensical ad hoc arguments do provoke thoughts, those thoughts are sometimes things like, "Jesus, am I doomed to hear that shitty pseudo-argument every time I talk to people about this?" or "I already pre-empted that dud counterargument and they ignored me and went ahead and used it anyway!" or "Huh?!", rather than "Oh, this other person seems to have misunderstanding [X]; I'd better say [Y] to try disabusing them of it".
The only way otherwise rational people come to disagree is through the differing priors of their respective data sets; it's not that the wrong one among them is thinking up nonsense and being negatively affected by it.
Unfortunately a lot of arguments don't seem to be between "otherwise rational people", in the sense you give here.
All I see being discussed and disagreed over are domain-specific trivial arguments.
But I've seen (and occasionally participated in) arguments here about macroeconomics, feminism, HIV & AIDS, DDT, peak oil, the riskiness of the 80,000 Hours strategy of getting rich to donate to charity, how to assess the importance of technologies, global warming, how much lead exposure harms children's development, astronomical waste, the global demographic transition, and more. While these are domain-specific issues, I wouldn't call them trivial. And I've seen broader, nontrivial arguments about developing epistemic rationality, whether at the personal or social level. (What's the right tradeoff between epistemic & instrumental rationality? When should one trust science? How does the social structure of science affect the reliability of the body of knowledge we call science? How does one decide on priors? What are good 5-second skills that help reinforce good rationalist habits? Where do the insights & intuitions of experts come from? How feasible is rationality training for people of ordinary IQ?)
This same concept applies to the entirety of LessWrongers; nobody is really changing their deep beliefs after "seeing the light." They're seeing superior logic and tactics and adding those onto their model. The model still remains the same, for the most part.
That's too vague for me to have a strong opinion about. (Presumably you don't literally mean "nobody", and I don't know precisely which beliefs you're referring to with "deep beliefs".) But there are possible counterexamples. People have decided to dedicate years of their lives (and/or thousands of dollars) to attacking the problem of FAI because of their interactions with LW. I dimly recall seeing a lurker post here saying they cured their delusional mental illness by internalizing rationality lessons from the Sequences.
The things that are actually shown to matter are taboo to even bring up because that might cause people to "realize" (confirmation bias) that they're dealing with people they consider to be idiots.
That's a bit of an unfair & presumptuous way to put it. For one thing, it's not as if LW only realized human brains run on cognitive biases once it started having flamewars on taboo topics. The ubiquity of cognitive bias is the central dogma of LW if anything is; we already knew that the people we were dealing with were "idiots" in this respect. For another thing, there's a more parsimonious explanation for why some topics are taboo here: they lead to disproportionately unpleasant & unproductive arguments.
Everything this community has done up to now is a good warm-up, but now I'd like to start seeing some actual improvement where it counts.
Finally I can agree with you on something! Yes, me too, and we're by no means the only ones. (I recognize I'm part of the problem here, being basically a rationalist kibitzer. I would be glad to be more rational, but I'm too lazy to put in the actual effort to become more rational. LW is mostly an entertainment device for me, albeit one that occasionally stretches my brain a little, like a book of crosswords.)
We're rationalists. We ought to be able to discuss any subject in a reasonable manner.
Ideally, yes. Unfortunately, in reality, we're still human, with the same bias-inducing hot buttons as everyone else. I think it's legitimate to accommodate that by recognizing some topics reliably make people blow up, and cultivating LW-specific norms to avoid those topics (or at least damp the powder to minimize the risk of explosion). (I'd be worried if I thought LWers wanted to "restrain the world", as you grandiosely put it, by extending that norm to everywhere beyond this community. But I don't.)
I predicted your reaction of considering the coherency of the collective as overblown. [...] I don't predict you're terribly bothered by a significant degree of accuracy to the prediction; rather, I predict that, to you, it will seem only obvious that I should have been able to predict that. This will all seem fairly elementary to you.
Yeh, pretty much!
What I'm unsure about is the degree to which you are aware that you stand out from the rest of these folks. You're exhibiting a deeper level of understanding of the usefulness of epistemic humility in bothering to speak to me and read my comments in the way that you are.
This is flattering and I'd like to believe it, but I suspect I'm just demonstrating my usual degree of getting the last word-ism, crossed with Someone Is Wrong On The Internet Syndrome. (Although this is far from the worst flare-up of those that I've had. Since then I've tried not to go on & on so much, but whether I've succeeded is, hrrrm, debatable.)
I've more to say, but it won't make sense to say it without receiving feedback about the more exact mechanics of your stage of grasping my concept. I predict you won't notice anything out of the ordinary about the thoughts you'll have thought in reading/responding to/pondering this.
Right again. I still don't have any idea what your concept/hypothesis is (although I expect it'll be an anticlimax after all this build-up), but maybe what I've written here gives you some idea of how to pitch it.
[Comment length limitation continuance...]
Although I expect it'll be an anticlimax after all this build-up.
It will, despite my fantasies, be anticlimactic, as you predict. While I predicted this already, I didn't predict that you would consciously and vocally predict it yourself. My model updates thus: though I was not consciously aware that stating my predictions could serve as an invitation for you to state your own, I am now aware that such a result is possible. What scenarios the practice is useful in, why it works, ...
Making fun of things is actually really easy if you try even a little bit. Nearly anything can be made fun of, and in practice nearly anything is made fun of. This is concerning for several reasons.
First, if you are trying to do something, whether or not people are making fun of it is not necessarily a good signal of whether it's actually good. A lot of good things get made fun of, and a lot of bad things get made fun of.[1] Optimally, only bad things would get made fun of, making it easy to tell good from bad - but this doesn't appear to be the case.
Second, if you want to make something sound bad, it's really easy. If you don't believe this, just take a politician or organization that you like and search for some criticism of it. It should generally be trivial to find people who are making fun of it for reasons that would sound compelling to a casual observer - even if those reasons aren't actually good. But a casual observer doesn't know that, and thus can easily be fooled.[2]
Further, the fact that it's easy to make fun of things means a clever person can find themselves unnecessarily contemptuous of anything and everything. This sort of premature cynicism is a failure mode I've noticed in many otherwise very intelligent people. Finding faults with things is pretty trivial, but you can quickly slide from "it's easy to find faults with everything" to "everything is bad." That's an undesirable mode of thinking - even if true, it's not particularly helpful.
[1] Whether or not something gets made fun of by the right people is a better indicator. That said, if you know who the right people are you usually have access to much more reliable methods.
[2] If you're still not convinced, take a politician or organization that you do like and really truly try to write an argument against that politician or organization. Note that this might actually change your opinion, so be warned.