> …or at least to explain them in ways that intelligent outsiders can understand well enough to criticize…
Based on feedback, I think I achieved that through my "Smarter Than Us" booklet and through the AI risk executive summary: http://lesswrong.com/lw/k37/ai_risk_new_executive_summary/
What's the outsider feedback on those been like?
The following two paragraphs got me thinking some rather uncomfortable thoughts about our community's insularity:
- Chip Morningstar, "How to Deconstruct Almost Anything: My Postmodern Adventure"
The LW/MIRI/CFAR memeplex shares some important features with postmodernism: a strong tendency to go meta, a large body of jargon that is often impenetrable to outsiders, and no immediate need to justify itself to them. This combination removes the selective pressure that stops most groups from going totally crazy. As far as I can tell, we have not fallen into this trap, but since people tend not to notice when their own in-group has gone crazy, that is at best weak evidence that we haven't; furthermore, even if we are in fact perfectly sane now, it will still take effort to maintain that state.
Based on the paragraphs quoted above, having to use our ideas to produce something that outsiders would value, or at least to explain them in ways that intelligent outsiders can understand well enough to criticize, would create this sort of pressure. Has anyone here tried to do either of these to a significant degree? If so, how, and how successfully?
What other approaches can we take to check (and defend) our collective sanity?