All of nimim-k-m's Comments + Replies

Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding.

I grant that I was speaking from memory; the last time I read the LW material was years ago. The MIRI and CFAR logos up there did not help.

But if the community is going to grow, these people are going to need some common flag to distinguish them from anyone else who decides to make "rationality" their applause light and gather followers.

What, you are not allowed to call yourself a rationalist if you are not affiliated with MIRI, even if you subscribe to branches of Western philosophy descended from Descartes, Kant, and the Vienna Circle...?

0Evan_Gaensbauer
Viliam is right that unless we have a name for the cluster in thingspace that is the rationalist community, it's difficult to talk about. While I can understand why one might be alarmed, I think MIRI/CFAR representatives mostly want everyone to be able to identify them in a clearly delineated way, so that they and only they can claim to speak on behalf of those organizations on matters such as AI safety, existential risk reduction, or their stance on what to make of various parts of the rationality community now that they're trying to re-engage it. I think everyone can agree that it won't make anyone better off to confuse people, whether they identify with the LW/rationality community or sit outside it, about what MIRI/CFAR actually believe regarding their missions and goals. This is probably more important to MIRI's/CFAR's relationship with EA and academia than to people merely involved with LW/rationalists, since the perceived positions of these organizations could affect how much funding they receive, and their crucial relationships with other organizations working on the same important problems.
0Lumifer
The rationality police will come and use the rationality spray on you, leaving you writhing on the floor crying "Oh, my eyes! It burns, IT BURNS!"
5Viliam
I think there should exist a name for the cluster in thingspace that is currently known here as "the rationalist community". That is my concern. What specifically it will be called is less important; we just have to coordinate on using the new name. Generic "subscribing to branches of Western philosophy descended from Descartes, Kant, and the Vienna Circle" is not exactly the same thing.

SSC linked to this LW post (here http://slatestarcodex.com/2016/12/06/links-1216-site-makes-right/ ). I suspect it might be of some use to you if I explain why I'm interested in reading and commenting on SSC but not so much on LW.

First of all, the blog interface is confusing, more so than regular blogs or sub-reddits or blog-link-aggregators.

Also, to use LW terminology, I have a pretty negative prior on LW. (Others might say that LW does not have a very good brand.) I'm still not convinced that AI risk is very important (nor that decision theory is g...

0Vaniver
Thanks for sharing! I appreciate the feedback, but because it's important to distinguish between "the problem is that you are X" and "the problem is that you look like you are X," I think it's worth hashing out whether some points are true.

Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn't worth funding. (His views have since changed, in a document I think is worth reading in full.) And the Sequences themselves are rarely if ever directly about AI risk; they're more often about the precursors to the AI risk arguments. If someone thinks that intelligence and morality are intrinsically linked, instead of telling them "no, they're different" it's easier to talk about what intelligence is in detail and what morality is in detail, and then they say "oh yeah, those are different." And if you're just curious about intelligence and morality, you still end up with a crisper model than you started with! I think one of the reasons I consider the Sequences so successful as a work of philosophy is that it keeps coming back to the question of "do I understand this piece of mental machinery well enough to program it?", which is a live question mostly because one cares about AI. (Otherwise, one might pick other standards for whether or not a debate is settled, or for how to judge various approaches to ideas.)

I think everyone is agreed about the last bit; woe betide the movement that refuses to have friends and allies, insisting on only adherents. For the first half, I think considering this involves becoming more precise about 'healthiest'. On the one hand, LW's reputation has a lot of black spots, and those basically can't be washed off; but on the other hand, it doesn't seem like reputation strength is the most important thing to optimize for. That is, having a place where people are expected to