
Becoming a Better Community

Sable 06 June 2017 07:11AM

So I've been following Project Hufflepuff, the rationalist community's effort to become not better rationalists per se, but a better community. I recently read the summary of the Project Hufflepuff Unconference, and I had a thought.

The Problem

LessWrong And Guardedness

I can only speak to my own experiences in joining the community, but I have always felt that the rationalist community holds its members to a very high standard. This isn't a bad thing, but it creates, at least in me, a sense of guardedness. I don't want to be the rationalist who sounds stupid, or the one who contributes the least to the conversation.

Every post I've made here on LessWrong (not that there have been many) has been reviewed and edited with the same kind of diligence that I normally reserve for graded essays or business documentation. Other online communities I'm a part of (and meatspace communities) require far less diligence from me as a contributor. (Note: this isn't a value judgment, just a description of my experience.)

However, my best experiences in communities and friendships have generally occurred in very unguarded atmospheres. Not that my friends and I aren't smart, but most of the fun I've had with them happens when we're playing card, board, or video games, or just hanging out and talking. Going out to eat, playing ping-pong, and talking about bad TV shows have led to some of the strongest relationships in my life.

So Where Is The Fun?

So, where is this in the rationalist community? It is very possible that the fun is there and I'm simply missing it: I haven't been to any meetups, I don't live in the Bay Area, and I don't know any rationalists in meatspace. But aside from the occasional meetup, I don't see any evidence of it.

I tried to do some research on how friendships and communities are formed, and there seemed to be little consensus in the field. A New York Times article on making friends as an adult mentions three factors:

As external conditions change, it becomes tougher to meet the three conditions that sociologists since the 1950s have considered crucial to making close friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other, said Rebecca G. Adams, a professor of sociology and gerontology at the University of North Carolina at Greensboro. This is why so many people meet their lifelong friends in college, she added.

I was unable to find this in an actual paper, but a brief perusal of the Stanford Encyclopedia of Philosophy's page on friendship at least shows that people who think about the topic agree that friendship requires some kind of intimacy. And while there are certainly rationalists who are friends, for me, becoming a rationalist and joining the community has not yet materialized into any specific friendships. While that is on my shoulders, I believe it highlights a distinction I want to make.

If what we have in common, as rationalists, is a shared way of thinking and a shared set of goals (e.g. save the world, raise the rationality waterline, etc.), then the relationship I share with the community strikes me more as an alliance than a friendship.

Allies share goals, and may use similar methods to achieve them, but they are not friends. I wouldn't tell an ally about an embarrassing dream I had, or get drunk with them and make fun of bad movies.

I don't mean to get hung up on definitions; the words themselves aren't important. But from what I have seen, the community, especially outside the Bay Area, lacks the unguarded intimacy I see in my close friendships, which I think is a key component of community-building. I'd be willing to bet that even at meetups, many (>20%) rationalists feel the weight of the community's high standards, and are thus more guarded than they are in relationships with fewer expectations.

What I'm trying to get at is that I haven't experienced an unguarded interaction with a rationalist, online or in meatspace. I always want to be at the top of my game: always trying to reason better, and to remember all the things I've learned about biases and probability theory. And I suspect that low-standards, unguarded interactions have something to do with growing friendships and communities.

So, for an East-coaster with a computer:

Where is the fun?  Where are the rationalist video game tournaments?  Robot fights?  Words with Friends who are rationalists?

Where is the chilling and watching all the Lord of the Rings movies together?  The absurd Dungeons and Dragons campaigns, because everyone is a plotter and there are too many plots?

A Few Suggested Solutions

Everyone in the rationalist community wants to help. We want to save the world, and that's great. But... not everything has to be about saving the world. If the goal of an activity is community- or friendship-building, why can't it be otherwise pointless? Why can't it be silly and inane and utterly irrational?

So, in the interests of Project Hufflepuff, I spent some time thinking about ways to improve/change the situation.

The Hero/Sidekick/Dragon Project

There was a series of posts in 2015 about different people wanting to take different roles in projects, be it the hero, the sidekick, the dragon, etc. An effort was made to match people up, but as far as I can tell it petered out; I haven't seen anything come of it since then (I would be happy to be wrong about this). I'll link the posts here; the first, in particular, is excellent: the issue in general, an attempt at matchmaking, and a discussion of matchmaking methods.

I might suggest an open thread that functions as a classified ad, e.g. "Help Wanted, must be able to XYZ," or "Sidekick In Need of Hero, must live in X area."

I'd also like to mention that the project in question shouldn't have to be about friendly AI or effective altruism; I think that developing an effective partnership is valuable by itself.

Online Gaming

Is there a reason that members of the community can't game together online? This post on Overwatch provides at least a small amount of evidence that the community has enough interested members to form teams, and team-building seems to be one of the goals.

Fun Projects 

I can think of plenty of challenging team projects I'd love to do that have almost nothing to do with world-saving at any scale: making a robot, coding a game, writing a book or play. Does this happen in the community? If not, I think it might help. Again, the goal would be to create an unguarded atmosphere that fosters friendships and team-building.

Rationalist Buddy System

I'd like to distinguish this from the Hero/Sidekick idea above. I know that I could use a rationalist buddy to pair up with. Many motivational and anti-akrasia techniques require social commitment, and Beeminder can only go so far. Having a person to talk things through with, to experiment with anti-akrasia techniques alongside, or just to inspire and be inspired by would be insanely helpful for me, and I suspect for many of us. I'm vaguely reminded of the 12-step programs' sponsors, if only in the way they support people going through the program.

I'm not sure how to execute this, but I think it has the potential to be useful enough to be worth trying.

Rationalist Big/Little Program

One of the things I got out of the Project Hufflepuff Unconference Notes was that making newcomers feel welcome is an issue. One idea to address this was a "welcoming committee":

Welcoming Committee (Mandy Souza, Tessa Alexanian)

Oftentimes at events you'll see people who are new, or who don't seem comfortable getting involved with the conversation. Many successful communities do a good job of explicitly welcoming those people. Some people at the unconference decided to put together a formal group for making sure this happens more. 

I would like to suggest some version of the Big/Little program.  For those who don't know, the idea is that established members of the community volunteer to be "Bigs," and when a newcomer appears (a "Little") they are matched with a Big.  The Big then takes on the role of a guide, providing the Little an easier introduction to the community.  This idea has been used in many different environments, and has helped me personally in the past.

Perhaps people willing to be Bigs could sign up on some sort of permanent thread, and lurkers and first-time posters could be encouraged to PM them?

In Conclusion

It seems to me as though the high standards of the rationalist community promote a guarded atmosphere, which hampers the development of close friendships and of the community itself. I've outlined a few ways that may help create places within the community where standards can be lowered and guards relaxed, without (hopefully) compromising its high standards elsewhere.

I realize that most of this post is based upon my personal observations and experiences, which are anecdotal evidence and thus Not To Be Trusted. I am prepared to be wrong, and would welcome correction.

Let me know what you think.

Comment author: Vaniver 02 December 2016 07:51:50PM 4 points

As others pointed out, there's a Slack channel administered by Elo, a LessWrong IRC, and an SSC IRC. (I'm sometimes present in the first, but not the other two; I don't know how active they are now.)

"As an addendum, and as a way of helping newer members, maybe we could have some kind of Big/Little program? Nothing fancy, just a list of people who have volunteered to be 'Bigs,' who are willing to jump in and discuss things with newer members."

Is the idea here pairing (Alice volunteers as a Big and is matched up with Bob, they swap emails / Hangouts / etc. and have one-on-one conversations about rationality / things that Bob doesn't understand yet) or in-need matching (Alice is the Big on duty at 7pm Eastern time, and Bob shows up in the chat channel to ask questions that Alice answers), or something else?

This also made me think of the possibility of something like "Dear Prudence"; maybe emails about some question that are then responded to in depth, or maybe chat discussions that get recorded and then shared, or so on.

(Somewhat tangential, but there are other things you can overlay on top of online communities to mimic some features of normal geographic communities; these seem to make them more human-friendly, but require lots of engagement on the part of individuals that may or may not be forthcoming.)

Comment author: Sable 03 December 2016 04:34:44AM 1 point

Thanks for the info - I'll check out some of the chat channels. I had no idea they existed.

As for the idea, I hadn't thought it through quite that far, but I was picturing something along the lines of your second suggestion. Any publicized and easily accessible way of asking questions that doesn't force newer members to post their own topics would be helpful.

I remember back when I was just starting out on LessWrong, being terrified to ask really stupid questions, especially when everyone else here was talking about graduate-level computer science and medicine. Having someone to ask privately would've sped things up considerably.

Comment author: Sable 01 December 2016 11:33:49PM 3 points

This is more of a practical suggestion than a theoretical one, but what if we had an instant message feature? Some kind of chat box like Google Hangouts, where we could talk to people more immediately, rather than through comment and reply.

As an addendum, and as a way of helping newer members, maybe we could have some kind of Big/Little program? Nothing fancy, just a list of people who have volunteered to be 'Bigs,' who are willing to jump in and discuss things with newer members.

A 'Little' could ask their 'Big' questions as they make their way through the literature, and both Bigs and Littles would gain a chance to practice rationality skills pertaining to discussion (controlling one's emotions, being willing to change one's mind, etc.) in real time. I think this would help reinforce these habits.

The LessWrong study hall on Complice is nice, but it's a place to get work done, not to chat or debate or teach.

Comment author: Sable 28 November 2016 02:32:32AM 2 points

Additionally, humor - especially self-effacing humor - allows one to critique ideas or people held in high esteem without being offensive or inciting anger. It's hard to be mad when you're laughing.

Thought: Humor lowers one's natural barriers to accepting new ideas.

In the context of ideas as memes that undergo Darwinian processes of mutation and natural selection, perhaps humor can be thought of as an immunodeficiency virus? A way to lower an idea's natural defenses against competing ideas, which is why we see Christians willing to listen to atheist comedians, and vice versa. Humor lowers Christianity's natural defenses against atheism (group consolidation, faith, etc.) and allows new ideas to attack the weakened "body."

Comment author: James_Miller 22 November 2016 04:42:07AM 2 points

Isn't this insanely dangerous? Couldn't bacteria immune to viruses out-compete all other bacteria and destroy most of Earth's biosphere?

Comment author: Sable 26 November 2016 03:36:01AM 1 point

Insanely dangerous, yes, but then again so is all potentially world-changing technology (think AI and nanobots).

In other words I agree with you, but I think that the response to "new technology with potentially horrific consequences or otherwise high risk/reward ratio" should be, "estimate level of caution necessary to reduce risk to manageable levels, double the level of caution, and proceed very, very slowly."

Because it seems to me, bad at biology as I am, that the ability to synthesize arbitrary proteins, which this technology enables (or is at least a stepping stone to), could be incredibly powerful and life-saving.

Comment author: ThisSpaceAvailable 22 November 2016 06:19:39AM 1 point

"Instead of generalizing situation-specific behavior to personality (i.e. "Oh, he's not trying to make me feel stupid, that's just how he talks"), people assume that personality-specific behavior is situational (i.e. "he's talking like that just to confuse me")."

Those aren't really mutually exclusive. "Talking like that just to confuse his listeners is just how he talks." It could be an attribution not of any specific malice, but of generalized snootiness.

Comment author: Sable 22 November 2016 06:51:44AM 0 points

True.

Comment author: Sable 22 November 2016 12:09:42AM 3 points

My understanding of #3 is that it comes from a place of insecurity. Someone secure in their own intelligence, or at least in their own self-worth, will either ignore the unknown word/phrase/idea, ask about it, or look it up.

So from the inside, #3 feels something like: "Look, I know you're smart, but you don't have to rub it in, okay? I mean, just 'cause I don't know what 'selective pressures in tribal mechanics' are doesn't make me stupid."

My guess is that it feels as though the other person is using a higher-level vocabulary on purpose, rather than incidentally; kind of like the opposite of the fundamental attribution error. Instead of generalizing situation-specific behavior to personality (i.e. "Oh, he's not trying to make me feel stupid, that's just how he talks"), people assume that personality-specific behavior is situational (i.e. "he's talking like that just to confuse me").

Also, I think a lot of the reaction you're going to get out of someone when using a word or idea they don't know depends on your nonverbal signals. Are you saying it as though you assume they already know it? I've had professors who talked about really complex subjects I didn't fully understand as though they were obvious, and that tended to make me feel dumb. I doubt they were doing it on purpose - to them it was obvious - but by paying a little more attention to the inferential distance between us, they could have moderated their tone and body language to convey something a little less disdainful, even if the disdain itself was accidental.

Lastly, when it comes to communication I tend to favor the direct approach. If at any point I think the other person doesn't understand what I'm saying, I try to back up and explain it better. Sometimes I just flat-out ask if they understood, and if not, try to explain it, all while emphasizing that it isn't a word/phrase/idea that I (or anyone) would expect them to know.

True or not, the above strategy has been effective for me in reducing confrontation when the scenario you're describing happens.

Comment author: Carinthium 15 November 2016 10:59:34PM 0 points

Question. I admit I have a low EQ here, but I'm not sure if 4) is sarcasm or not. It would certainly make a lot of sense if "I've been glad to see in this thread that we LW's do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller" were sarcasm.

I would have said we had information on 2), but I've made so many wrong predictions about Donald Trump privately that I think my private opinion has lost all credibility there. 1) makes sense.

I can see why you might be afraid of war breaking out with Russia, but why do you consider Islam a major threat? Maybe you don't and I'm misinterpreting you, but given how little damage terrorist attacks actually do, isn't Islam a regional problem that the West tends to overreact to?

Comment author: Sable 16 November 2016 02:54:14AM 0 points

I was trying to be sincere with 4), although I admit that without tone of voice and body language, that's hard to communicate sometimes. And even if LW hasn't done as good a job as we could have with this topic, from what I've seen we've done far better than just about anyone not in the rationalist community at trying to remain rational.

Glad you agree with 1); when I first heard that argument (I didn't come up with it), I had a massive moment of "that seems really obvious, now that someone said it."

With regards to 2), you're right that we do have information on Trump; I spoke without precision. What I mean is this: beliefs are informed by evidence, and we have little evidence, given the nature of the American election, of what a candidate will behave like when they aren't campaigning. I believe there's a history of presidents-elect moderating their stances once they take office, although I have no direct evidence to support myself there.

When it comes to Islam, I should begin by saying that I'm sure the vast majority of Muslims simply want to live a decent life, just like the rest of us. However, theirs is the only religion active today that currently endorses holy war.

Then observe that MAD only applies to people unwilling to sacrifice their children for their cause, and further observe that Islam, as an idea, a meme, a religion, has successfully been able to make people do exactly that.

An American wouldn't launch a nuke if it would kill their children, and a Russian wouldn't either. But a jihadist? From what I understand (which is admittedly not much on this topic), a jihadist just might. At least, a jihadist has a much higher probability than a nationalist of choosing nuclear war.

I agree that the West overreacts in terms of Terrorism, in the sense that any given person is more likely to die in a car accident than be killed by a terrorist, but I was referring to existential threats, a common topic on LW and one that Yudkowsky himself seems concerned with regarding this election. Car crashes don't threaten the existence of humanity; nuclear war does.

And because I can't see how either candidate would affect the likelihood of unfriendly AI, a meteor, a plague, or any of the other existential risks, nuclear war becomes the deciding vote in the "who's less likely to get us all killed" competition.

Admittedly, the risk of catastrophic climate change might be higher under Trump, but I've no evidence for that save the very standard left vs. right paradigm, which doesn't seem to apply all that well to Trump anyway.

Thank you for your response.

Comment author: Sable 14 November 2016 11:03:15PM 4 points

1)

Unless I am much mistaken, the reason that no one has used nuclear weapons since 1945 is Mutually Assured Destruction, the idea that there can be no victor in a nuclear war. MAD holds so long as the people in control of nuclear weapons have something to lose if everything gets destroyed, and Trump has grandchildren.

Grandchildren who would burn in nuclear fire if he ever started a nuclear war.

So I am in no way sympathetic to any argument that he's stupid enough to start one. He has far too much to lose.

2)

I believe that the sets of skills necessary to be a good president, and to be elected president, are two entirely separate things. They may be correlated, but I doubt they're correlated that highly; a popularity contest selects for popularity, after all.

So far, we have information on Trump's skill set as a businessman: immoral and unethical perhaps, but ultimately very successful.

And we have information on Trump's skill set as a Presidential Candidate: bombastic, brash, witty, politically incorrect and able to motivate large numbers of people to vote for him.

We have no information on what Trump will be like as President; that's the gamble. We can guess, but trends don't always continue, and I suspect, based on more recent data, that Trump has an inkling that now is not the time to do anything drastic.

3)

Aside from the usual LW topics concerning existential risk (e.g. AI, climate change, etc.), my biggest concern is Islam. Mutually Assured Destruction only works when those with the nuclear weapons have something to lose, and if someone with such weapons genuinely believes that they and their family will go to heaven for using them, then MAD no longer applies.

From what meager evidence I can gather, I believe that Trump lowers the chance of such a war breaking out compared to Clinton. We've had a chance to see what Clinton's foreign policy looks like, and so far as I can tell, it isn't lowering the risk of nuclear war. It's heightening it.

Assuming other existential risks would be equal under either administration (which is a very questionable assumption, granted, and I would be happy to discuss it), that makes Trump look at the very least no worse than Clinton when it comes to existential risk.

I'd also like to note that I've been told plenty of people thought that Ronald Reagan would start a nuclear war with Russia, and he did nothing of the sort. Granted, I wasn't around then, so it's second-hand information, but there you go.

4)

I don't know about the rest of you, but I am sick of having to expend copious amounts of mental energy trying to remain as rational as I can throughout this election cycle. I've been glad to see in this thread that we LW's do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller.

If you disagree with anything I have to say, please respond - if my thinking is wrong, I want your help to make it better, to make it closer to correct.

Comment author: JohnReese 07 November 2016 02:40:32AM 8 points

Greetings**,

As someone who was once described as a self-control fetishist by a somewhat hedonistic friend of mine, I can report from experience on personal strategies. As someone whose doctoral work involved attempting to build a connectionist model of self-control, I would probably be inclined to highlight a couple of things from the literature. Let me try both.

1. The psychology literature on self-control/willpower would suggest that regardless of whether the "limited resource" model of Baumeister and colleagues holds up in the long run, there are some things one could do to strengthen and replenish "willpower". I have not examined this work in relation to the current replication controversy within the behavioural sciences, but I have encountered it in a few different contexts and attempted to theorise about it, so would like to include it here. http://psycnet.apa.org/journals/psp/96/4/770/

The basic idea appears to be that affirming core values or principles, with the self as referent, would "boost" self-control. Of course, this is supposed to counteract depletion within a certain window, but not when the "self-control" system is pushed to fatigue. Another interesting context where I have noticed it pop up is in military psychology and manuals for mindset training, where soldiers are given "affirmations" which typically include the military branch's code, a set of declarative, affirmative statements about membership and the values associated with it, etc., and this is prescribed as a means of combating fatigue in situations where focus and cognitive control are required (I need to re-read the source, but if interested, check out work by Loren Christensen, Michael Asken et al.). My old modelling work (still unpublished... working on it) would have stuff to say, and I would be happy to talk about it if that is ok and anyone is interested.

Now, from personal experience...I went through several years of extreme adventures in self-control... and self-denial. As a long-term meditator, some of it was part of the training. One could perform a little test. Perhaps try to eat a single crisp and put the bag back in the container. The body would naturally not like this as crisps tend to be tasty, and one would want more. Observing the wanting can help contain it. Similarly, observing the depletion of will can help in the sense that one can disengage from the task at hand and allow it to re-calibrate to functional levels. Otherwise, if ongoing control cannot be abandoned for any stretch of time, performing centering exercises taught to meditators, LEOs etc can help.

Exercise 1 - close your eyes, breathe deeply, and as you relax, try to detect and follow 4-5 different sounds in your environment. Do this for a couple of minutes.

Exercise 2 - close your eyes, and detect different sensations you can feel... like your fingers on the keyboard, air circulation, how warm or cold the air in the room is, and keep at it for a couple of minutes.

While not aimed at willpower as such, this should facilitate a relaxed alertness that would benefit the ongoing task.

Of course, these may not work for you... I'd be interested in finding out how it pans out if anyone wants to give it a shot. If they are already known, apologies for the redundant comment.

I also find that engaging in consistent practice of some sort, like, say, a few proper repetitions of a Taijiquan form per day, is (anecdotally) correlated with having a higher degree of volitional control over decisions and willpower for cognitively challenging tasks. The practice does not have to be religious or involve chants or suchlike... I suspect it has more to do with relaxed alertness and positioning oneself at the edge of a "flow" attractor basin.

**I come in peace. New member. I do not know if the protocol is to publish a post introducing oneself. If such is the case, please let me know and I will do so. It is great to read the posts and discussions on LW and I am hoping to write some soon. Live Long and Prosper!

Comment author: Sable 07 November 2016 10:25:39PM 0 points

Welcome to LessWrong, and thanks for the advice. I'll take a look at what you suggested.
