
Comment author: Vaniver 07 December 2016 11:35:04PM 0 points

Thanks for sharing! I appreciate the feedback, but because it's important to distinguish between "the problem is that you are X" and "the problem is that you look like you are X," I think it's worth hashing out whether some of these points are true.

The sequences and list of top posts on LW are mostly about AI risk

Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding. (His views have since changed, in a document I think is worth reading in full.)

And the Sequences themselves are rarely, if ever, directly about AI risk; they're more often about the precursors to the AI risk arguments. If someone thinks that intelligence and morality are intrinsically linked, then instead of telling them "no, they're different," it's easier to talk in detail about what intelligence is and what morality is, and then they say "oh yeah, those are different." And if you're just curious about intelligence and morality, you still end up with a crisper model than you started with!

which to me seems quite tangential to the attempt at modern rekindling of the Western tradition of rational thought

I think one of the reasons I consider the Sequences so successful as a work of philosophy is that they keep coming back to the question "do I understand this piece of mental machinery well enough to program it?", which is a live question mostly because one cares about AI. (Otherwise, one might pick other standards for whether or not a debate is settled, or for how to judge various approaches to ideas.)

But I ask you to reconsider whether LW is actually the healthiest part of the rationalist community, or whether the more general cause of "advancement of more rational discourse in public life" would be better served by something else (for example, a number of semi-related communities such as blogs, forums, and meatspace communities in academia). Not all rationalism needs to be LW-style rationalism.

I think everyone is agreed about the last bit; woe betide the movement that refuses to have friends and allies, insisting on only adherents.

For the first half, I think considering this involves becoming more precise about 'healthiest'. On the one hand, LW's reputation has a lot of black spots, and those basically can't be washed off; on the other hand, it doesn't seem like reputation strength is the most important thing to optimize for. That is, having a place where people are expected to have a certain level of intellectual maturity that grows over time (as the number of things that have been discovered and brought into the LW consensus grows) seems like the sort of thing that is very difficult to do with a number of semi-related communities.

Comment author: nimim-k-m 09 December 2016 07:39:57AM 1 point

Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding.

I grant that I was talking from memory; the previous time I read the LW material was years ago. The MIRI and CFAR logos up there did not help.

Comment author: Viliam 29 November 2016 10:59:49AM *  3 points

people who want to 'spread rationalism' and grow the movement go one way and the people who want to maintain a sense of community and maintain purity go another. I've seen the same dynamic at work in the Libertarian party and in Christian churches. I think we have to accept both sides have good points.

I believe the proper solution is like a eukaryotic cell -- with an outer circle and inner circle(s). In Christianity, the outer circle is to be formally a Christian and to visit a church on (some) Sundays. The inner circles are various monastic orders, becoming a priest, or that kind of thing. Now you can provide both options for people who want different things. If you just want the warm fuzzy feelings of belonging to a community, here you go. If you want some hardcore stuff, okay, come here.

These two layers need to cooperate: the outer circle must respect the inner circle, but the inner circle must provide some services for the outer circle. -- In the case of LW, such services would mostly be writing articles or making videos.

The outer circle must be vague enough that anyone can join, but the inner circles must be protected from invasion by charlatans; they must cooperate with each other so that they are able to formally declare someone "not one of us" if a charlatan tries to take over the system or simply benefit from claiming to be a part of it. In other words, the inner circles need some system to formally recognize who belongs to an inner circle of the system and who does not.

Looking at the rationalist community today, "MIRI representatives" and "CFAR representatives" seem like inner circles, and there are also a few obvious celebrities such as Yvain of SSC. But if the community is going to grow, these people are going to need some common flag to make them different from anyone else who decides to make "rationality" their applause light and gather followers.

Comment author: nimim-k-m 09 December 2016 07:33:30AM 0 points

But if the community is going to grow, these people are going to need some common flag to make them different from anyone else who decides to make "rationality" their applause light and gather followers.

What, you are not allowed to call yourself a rationalist if you are not affiliated with MIRI, even if you subscribe to the branches of Western philosophy descended from Descartes, Kant, and the Vienna Circle...?

Comment author: nimim-k-m 07 December 2016 06:57:34PM *  2 points

SSC linked to this LW post (here: http://slatestarcodex.com/2016/12/06/links-1216-site-makes-right/ ). I suspect it might be of some use to you if I explain my reasons for being interested in reading and commenting on SSC but not very much on LW.

First of all, the blog interface is confusing, more so than regular blogs, subreddits, or blog-link aggregators.

Also, to use LW terminology, I have a pretty negative prior on LW. (Others might say that LW does not have a very good brand.) I'm still not convinced that AI risk is very important (nor that decision theory is going to be useful when it comes to mitigating AI risk; I work in ML). The sequences and list of top posts on LW are mostly about AI risk, which to me seems quite tangential to the attempt at modern rekindling of the Western tradition of rational thought (which I do consider a worthy goal). It feels like (mind you, this is my initial impression) this particular rationalist community tries to sell me the idea that there's this very important thing about AI risk, that it's very important I learn about it, and that I should then donate to MIRI (or whatever it's called today). Also, you can learn rationality in workshops, too! It resembles just a bit too much (and not by a small bit) either a) certain religions that have people stopping me on the street or ringing my doorbell, insisting that it's the most important thing in the world that I listen to them and read their leaflet, or b) the whole big wahoonie that is the self-help industry. On both counts, my instincts tell me: stay clear of it.

And yes, most of the important things to have a discussion about involve, or at least touch on, politics.

Finally, I disliked HPMOR, both as fiction and as a presentation of certain arguments. I was disappointed when I found out HPMOR and LW were related.

On the other hand, I still welcome the occasional interesting content that happens to be posted on LW and makes ripples in the wider internet (and who knows, maybe I'll comment now that I've bothered to make an account). But I ask you to reconsider whether LW is actually the healthiest part of the rationalist community, or whether the more general cause of "advancement of more rational discourse in public life" would be better served by something else (for example, a number of semi-related communities such as blogs, forums, and meatspace communities in academia). Not all rationalism needs to be LW-style rationalism.

edit: explained arguments more