The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.
Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle. This sounds like hubris, but it is at this point at least partially a matter of track record.[1]
To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.[2]
One feature that is pretty helpful here is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another. By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; and point out apparent errors and then have that pointing-out be actually taken into account or else replied-to.
One feature that really helps things be "a conversation" in this way is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read. Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.
We have lately ceased to have a "single conversation" in this way. Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such. There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence. Without such a locus, it is hard for conversation to build in the correct way. (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)
SSC linked to this LW post (here: http://slatestarcodex.com/2016/12/06/links-1216-site-makes-right/). I suspect it might be of some use to you if I explain my reasons why I'm interested in reading and commenting on SSC but not very much on LW.
First of all, the blog interface is confusing, more so than regular blogs or sub-reddits or blog-link-aggregators.
Also, to use LW terminology, I have a pretty negative prior on LW. (Others might say that LW does not have a very good brand.) I'm still not convinced that AI risk is very important (nor that decision theory is going to be useful when it comes to mitigating AI risk; I work in ML). The Sequences and the list of top posts on LW are mostly about AI risk, which to me seems quite tangential to the attempt at a modern rekindling of the Western tradition of rational thought (which I do consider a worthy goal). It feels like (mind you, this is my initial impression) this particular rationalist community tries to sell me the idea that there's this very important thing called AI risk, that it's very important I learn about it and then donate to MIRI (or whatever it's called today), and that I can learn rationality in workshops, too! It resembles just a bit too much (and not a small bit) either a) certain religions whose members stop me on the street or ring my doorbell and insist that it's the most important thing in the world that I listen to them and read their leaflet, or b) the whole big wahoonie that is the self-help industry. On both counts, my instincts tell me: stay clear of it.
And yes, most of the important things to have a discussion about involve, or at least touch, politics.
Finally, I disliked HPMOR, both as fiction and as a presentation of certain arguments. I was disappointed when I found out HPMOR and LW were related.
On the other hand, I still welcome the occasional interesting content that happens to be posted on LW and makes ripples in the wider internet (and who knows, maybe I'll comment now that I've bothered to make an account). But I ask you to reconsider whether LW is actually the healthiest part of the rationalist community, or whether the more general cause of "advancement of more rational discourse in public life" would be better served by something else (for example, a number of semi-related communities, such as blogs, forums, and meat-space communities in academia). Not all rationalism needs to be LW-style rationalism.
edit: explained arguments more
Thanks for sharing! I appreciate the feedback, but because it's important to distinguish between "the problem is that you are X" and "the problem is that you look like you are X," I think it's worth hashing out whether some of your points are true.
Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding...