I'm unlucky enough to know a few postmodernists, and what I find most striking about them is that they try very hard to stay out of conflict with each other.
That makes sense, because when they do argue, they lack a clear method for assessing who (if anybody) is in the right, so the arguments are unproductive, frustrating, and can get quite nasty.
So I don't think we're too similar to them. That said, the obvious way to check our sanity would be to have outsiders look at us. In order to do that, we'd probably have to convince outsiders to give a fuck about us.
Scott Alexander recently posted a link to this article, which I found very interesting. After reading it, the difference between postmodernism and LW rationality seems very large. It doesn't directly address your point, but it's worth a look.
Separately, I think you are exaggerating the tendencies LW shares with postmodernism. While LessWrongers love going meta (and seem to love it even more in person than on the site), what you actually see in discussions here and on rationality blogs are requests to move in either the meta or the object-level direction, as the interlocutor requires. CFAR specifically has lessons on moving toward the object level. Nor is the jargon of postmodernism really comparable to that of LessWrong. Postmodernism is often intentionally obscure, and sometimes redefines words to have very surprising meanings (see the article linked above), while on LessWrong people seem to go to some pains to coin new language only when old language is insufficient, and explicitly consider what appropriate names would be (the major exception to this is perhaps language coined during the time of the Sequences that is still widely used). LW doesn't have a ...
LessWrong people seem to go to some pains to coin new language only when old language is insufficient
Those pains don't always stretch to learning philosophy, which EY hasn't done and advises against, with the result that LW jargon does in fact often reinvent philosophical jargon.
That's a great loss, because the original terms are nowhere to be seen; so if someone wants to read, say, non-amateur writing on an idea and its history, they're out of luck.
What other approaches can we take to check (and defend) our collective sanity?
Do rationalists win when confounding factors of intelligence, conscientiousness, and anything else we can find are corrected for?
Do they make more money? Have greater life satisfaction? Fewer avoidable tragedies? Reliably bootstrap themselves out of mental and physical problems?
I'm not sure what the answer is.
This reminds me of this SMBC. There are fields (modern physics comes to mind too) whose work no one outside them can understand anymore, yet which appear to have remained sane. So there are more safeguards against the postmodernists' failure mode than this one. In fact, I think there is a lot more wrong with postmodernism than that its practitioners don't have to justify themselves to outsiders. Math and physics have mechanisms determining which ideas within them get accepted, and those mechanisms imbue them with their sanity. In math, there are proofs. In physics, there are ex...
The LW/MIRI/CFAR memeplex shares some important features with postmodernism, namely the strong tendency to go meta, a large amount of jargon that is often impenetrable to outsiders and the lack of an immediate need to justify itself to them.
Mathematics also has all of these. So I don't think this is a good argument that LW/MIRI/CFAR is doing something wrong.
...Based on the paragraphs quoted above, having to use our ideas to produce something that outsiders would value, or at least explain them in ways that intelligent outsiders can understand well enough
Here is something that could be useful to have, but would require a lot of work and talent. As a side effect, it would solve the problem mentioned in the article:
Rewrite parts of the Sequences for a wider audience.
For example, the Bayesian math. Rewrite the explanation so that it is easy to read for a high-school student, without any LW lingo. A lot of pictures. Sample problems. Then discuss the more complex topics, such as how you can never get 0 or 1 as a result of Bayesian updating, conservation of expected evidence, etc. Then distribute the book as p...
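Both of those claims are easy to demonstrate concretely, which is the kind of thing such a book could do. Here is a minimal sketch (my own illustration, not taken from any existing LW material; the function name is made up) showing that repeated Bayesian updating from a non-extreme prior never actually reaches 0 or 1, and that the expected posterior equals the prior (conservation of expected evidence):

```python
from fractions import Fraction  # exact arithmetic, so the assertions are not about float rounding

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Start from a non-extreme prior and repeatedly observe strong evidence.
p = Fraction(1, 2)
for _ in range(20):
    p = bayes_update(p, Fraction(99, 100), Fraction(1, 100))

# However much evidence accumulates, the posterior only approaches 1.
assert 0 < p < 1

# Conservation of expected evidence: averaging the posterior over the
# possible observations (E observed vs. not observed) recovers the prior.
prior = Fraction(3, 10)
like_h, like_not_h = Fraction(8, 10), Fraction(2, 10)
p_e = prior * like_h + (1 - prior) * like_not_h
post_if_e = bayes_update(prior, like_h, like_not_h)
post_if_not_e = bayes_update(prior, 1 - like_h, 1 - like_not_h)
assert p_e * post_if_e + (1 - p_e) * post_if_not_e == prior
```

Using `Fraction` keeps every quantity an exact rational, so the second assertion holds exactly rather than up to floating-point error; that equality is the algebraic content of conservation of expected evidence.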
Insularity is always dangerous, and too much internal jargon can scare off outsiders. However, postmodernists are quite unlike the LW-community. For instance, postmodernists tend to be anti-scientific and deliberately obscurantist, as Alan Sokal showed by publishing a fake article in a postmodernist journal. Hence I don't think the analogy works very well.
As far as I can tell, we have not fallen into this trap, but since people tend to fail to notice when their in-group has gone crazy
Given the number of contrarians on LW who open discussions on whether or not LW is a cult, I don't really think we have a problem with a lack of self-criticism.
...Based on the paragraphs quoted above, having to use our ideas to produce something that outsiders would value, or at least explain them in ways that intelligent outsiders can understand well enough to criticize would create this sort of pressure. Has anyone here tr
CFAR seems to be trying to use (some of) our common beliefs to produce something useful to outsiders. And they get good ratings from workshop attendees.
or at least explain them in ways that intelligent outsiders can understand well enough to criticize
Based on feedback, I think I achieved that through my "Smarter than Us" booklet or through the AI risk executive summary: http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/
Insularity in this case is simply a matter of long inferential distances. It seems like senseless noise from the outside because that's what compressed information looks like to anyone who doesn't have a decoder.
Every group that specializes in something falls into that, and it's healthy that it does. But we should want a PR office only if we want to sell our worldview to others, not to check our own sanity.
My understanding is that postmodernists face career incentives to keep the bullshit flowing. (To change my mind on this, find me an online community of enthusiastic amateur postmodernists who aren't trying to make it in academia or anything.)
LW is the opposite of postmodernism. Plato's condemnation of sophists ("the art of contradiction making, descended from an insincere kind of conceited mimicry, of the semblance-making breed, derived from image making, distinguished as portion, not divine but human, of production, that presents, a shadow play of words") applies perfectly to postmodernists, who are just the umpteenth incarnation of the sophist virus.
The following two paragraphs got me thinking some rather uncomfortable thoughts about our community's insularity:
- Chip Morningstar, "How to Deconstruct Almost Anything: My Postmodern Adventure"
The LW/MIRI/CFAR memeplex shares some important features with postmodernism, namely the strong tendency to go meta, a large amount of jargon that is often impenetrable to outsiders and the lack of an immediate need to justify itself to them. This combination takes away the selective pressure that stops most groups from going totally crazy. As far as I can tell, we have not fallen into this trap, but since people tend to fail to notice when their in-group has gone crazy, this is at best weak evidence that we haven't; furthermore, even assuming that we are in fact perfectly sane now, it will still take effort to maintain that state.
Based on the paragraphs quoted above, having to use our ideas to produce something that outsiders would value, or at least explain them in ways that intelligent outsiders can understand well enough to criticize would create this sort of pressure. Has anyone here tried to do either of these to a significant degree? If so, how, and how successfully?
What other approaches can we take to check (and defend) our collective sanity?