shev comments on On the importance of Less Wrong, or another single conversational locus - Less Wrong

Post author: AnnaSalamon · 27 November 2016 05:13PM · 84 points


Comment author: shev · 27 November 2016 08:14:31PM · 7 points

Here's an opinion on this that I haven't seen voiced yet:

I have trouble being excited about the 'rationalist community' because it turns out to actually be the "AI doomsday cult", and it never seems to get very far away from that.

As a person who thinks we have far bigger fish to fry than impending existential AI risk - like problems with how irrational most people everywhere (including us) are, or how divorced rationality is from our political discussions / collective decision-making process, or how climate change or war might destroy our relatively peaceful global state before AI even exists - I find that I have little desire to try to contribute here. Being a member of this community seems to require buying into the AI-thing, and I don't, so I don't feel like a member.

(I'm not saying that AI stuff shouldn't be discussed. I'd like it to dominate the discussion a lot less.)

I think this community would have an easier time keeping members, not alienating potential members, and getting more useful discussion done, if the discussions centered more on rationality and effectiveness in general, rather than on the esteemed founder's pet obsession.

Comment author: Vaniver · 27 November 2016 09:03:23PM · 13 points

Being a member of this community seems to require buying into the AI-thing, and I don't, so I don't feel like a member.

I don't think it's true that you need to buy into the AI-thing to be a member of the community, and so I think the fact that it seems that way is a problem.

But I think you do need to be able to buy into the non-weirdness of caring about the AI-thing, and we may need to be somewhat explicit about the difference between those two things.

[This isn't specific to AI; I think this holds for lots of positions. Cryonics is probably an easy one to point at that disproportionately many LWers endorse but is seen as deeply weird by society at large.]