Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Vaniver 02 December 2016 07:51:50PM 4 points [-]

Like others pointed out, there's a Slack channel administered by Elo, a lesswrong IRC, and a SSC IRC. (I'm sometimes present in the first, but not the other two; I don't know how active they are now.)

"As an addendum, and as a way of helping newer members, maybe we could have some kind of Big/Little program? Nothing fancy, just a list of people who have volunteered to be 'Bigs,' who are willing to jump in and discuss things with newer members."

Is the idea here pairing (Alice volunteers as a Big and is matched up with Bob, they swap emails / Hangouts / etc. and have one-on-one conversations about rationality / things that Bob doesn't understand yet) or in-need matching (Alice is the Big on duty at 7pm Eastern time, and Bob shows up in the chat channel to ask questions that Alice answers), or something else?

This also made me think of the possibility of something like "Dear Prudence"; maybe emails about some question that are then responded to in depth, or maybe chat discussions that get recorded and then shared, or so on.

(Somewhat tangential, but there are other things you can overlay on top of online communities in order to mimic some features of normal geographic communities, which seem like they make them more human-friendly but require lots of engagement on the part of individuals that may or may not be forthcoming.)

Comment author: Sable 03 December 2016 04:34:44AM 1 point [-]

Thanks for the info - I'll check out some of the chat channels. I had no idea they existed.

As for the idea, I hadn't thought it through quite that far, but I was picturing something along the lines of your second suggestion. Any publicized and easily accessible way of asking questions that doesn't force newer members to post their own topics would be helpful.

I remember when I was just starting out on LessWrong, being terrified to ask really stupid questions, especially when everyone else here was talking about graduate-level computer science and medicine. Having someone to ask privately would've sped things up considerably.

Comment author: Sable 01 December 2016 11:33:49PM 3 points [-]

This is more of a practical suggestion than a theoretical one, but what if we had an instant message feature? Some kind of chat box, like Google Hangouts, where we could talk to people more immediately rather than through comment and reply.

As an addendum, and as a way of helping newer members, maybe we could have some kind of Big/Little program? Nothing fancy, just a list of people who have volunteered to be 'Bigs,' who are willing to jump in and discuss things with newer members.

A 'Little' could ask their 'Big' questions as they make their way through the literature, and both Bigs and Littles would gain a chance to practice rationality skills pertaining to discussion (controlling one's emotions, being willing to change one's mind, etc.) in real time. I think this would help reinforce these habits.

The LessWrong study hall on Complice is nice, but it's a place to get work done, not to chat or debate or teach.

Comment author: Sable 28 November 2016 02:32:32AM 2 points [-]

Additionally, humor - especially self-effacing humor - allows one to critique ideas or people held in high esteem without being offensive or inciting anger. It's hard to be mad when you're laughing.

Thought: Humor lowers one's natural barriers to accepting new ideas.

In the context of ideas as memes that undergo Darwinian processes of mutation and natural selection, perhaps humor can be thought of as an immunodeficiency virus: a way to lower an idea's natural defenses against competing ideas. That would explain why we see Christians willing to listen to atheist comics, and vice versa. Humor lowers Christianity's natural defenses against atheism (group consolidation, faith, etc.) and allows new ideas to attack the weakened "body."

Comment author: James_Miller 22 November 2016 04:42:07AM 2 points [-]

Isn't this insanely dangerous? Couldn't bacteria immune to viruses out-compete all other bacteria and destroy most of Earth's biosphere?

Comment author: Sable 26 November 2016 03:36:01AM 1 point [-]

Insanely dangerous, yes, but then again so is all potentially world-changing technology (think AI and nanobots).

In other words, I agree with you, but I think the response to "new technology with potentially horrific consequences or an otherwise high risk/reward ratio" should be: estimate the level of caution necessary to reduce risk to manageable levels, double that level of caution, and proceed very, very slowly.

Because it seems to me, bad at biology as I am, that the ability to synthesize arbitrary proteins - which this technology enables, or is at least a stepping stone toward - could be incredibly powerful and life-saving.

Comment author: ThisSpaceAvailable 22 November 2016 06:19:39AM 1 point [-]

"Instead of generalizing situation-specific behavior to personality (i.e. "Oh, he's not trying to make me feel stupid, that's just how he talks"), people assume that personality-specific behavior is situational (i.e. "he's talking like that just to confuse me")."

Those aren't really mutually exclusive. "Talking like that just to confuse his listeners is just how he talks". It could be an attribution not of any specific malice, but generalized snootiness.

Comment author: Sable 22 November 2016 06:51:44AM 0 points [-]


Comment author: Sable 22 November 2016 12:09:42AM *  3 points [-]

My understanding of #3 is that it comes from a place of insecurity. Someone secure in their own intelligence, or at least in their own self-worth, will either ignore the unknown word/phrase/idea, ask about it, or look it up.

So from the inside, #3 feels something like: "Look, I know you're smart, but you don't have to rub it in, okay? I mean, just 'cause I don't know what 'selective pressures in tribal mechanics' are doesn't make me stupid."

My guess is that it feels as though the other person is using a higher-level vocabulary on purpose, rather than incidentally - kind of like the opposite of the fundamental attribution error. Instead of generalizing situation-specific behavior to personality (i.e. "Oh, he's not trying to make me feel stupid, that's just how he talks"), people assume that personality-specific behavior is situational (i.e. "he's talking like that just to confuse me").

Also, I think a lot of the reaction you're going to get out of someone when using a word or idea they don't know is going to depend upon your nonverbal signals. Are you saying it like you assume that they know it? I've had professors who talked about really complex subjects I didn't fully understand as though they were obvious, and that tended to make me feel dumb. I doubt they were doing it on purpose - to them it was obvious - but by paying a little more attention to the inferential distance between us, they could have moderated their tone and body language to convey something a little less disdainful, even if the disdain itself was accidental.

Lastly, when it comes to communication I tend to favor the direct approach. If at any point I think the other person doesn't understand what I'm saying, I try to back up and explain it better. Sometimes I just flat-out ask if they understood, and if not, try to explain it, all while emphasizing that it isn't a word/phrase/idea that I (or anyone) would expect them to know.

True or not, the above strategy has been effective for me in reducing confrontation when the scenario you're describing happens.

Comment author: Carinthium 15 November 2016 10:59:34PM 0 points [-]

Question: I admit I have a low EQ here, but I'm not sure whether 4) is sarcasm or not. It would certainly make a lot of sense if "I've been glad to see in this thread that we LW's do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller." were sarcasm.

I would have said we had information on 2), but I've made so many wrong predictions about Donald Trump privately that I think my private opinion has lost all credibility there. 1) makes sense.

I can see why you might be afraid of war breaking out with Russia, but why do you consider Islam a major threat? Maybe you don't and I'm misinterpreting you, but given how little damage terrorist attacks actually do, isn't Islam a regional problem to which the West tends to overreact?

Comment author: Sable 16 November 2016 02:54:14AM 0 points [-]

I was trying to be sincere with 4), although I admit that without tone of voice and body language, that's hard to communicate sometimes. And even if LW hasn't done as good a job as we could have with this topic, from what I've seen we've done far better than just about anyone not in the rationalist community at trying to remain rational.

Glad you agree with 1); when I first heard that argument (I didn't come up with it), I had a massive moment of "that seems really obvious, now that someone said it."

With regards to 2), you're right that we do have information on Trump; I spoke without precision. What I mean is this: beliefs are informed by evidence, and we have little evidence, given the nature of the American election, of what a candidate will behave like when they aren't campaigning. I believe there's a history of presidents-elect moderating their stances once they take office, although I have no direct evidence to support myself there.

When it comes to Islam, I should begin by saying that I'm sure the vast majority of Muslims simply want to live a decent life, just like the rest of us. However, theirs is the only religion active today that endorses holy war.

Then observe that MAD only applies to people unwilling to sacrifice their children for their cause, and further observe that Islam, as an idea, a meme, a religion, has successfully been able to make people do exactly that.

An American wouldn't launch a nuke if it would kill their children, and a Russian wouldn't either. But a jihadist? From what I understand (which is admittedly not much on this topic), a jihadist just might. At least, a jihadist is far more likely than a nationalist to choose nuclear war.

I agree that the West overreacts to terrorism, in the sense that any given person is more likely to die in a car accident than to be killed by a terrorist, but I was referring to existential threats - a common topic on LW, and one that Yudkowsky himself seems concerned about regarding this election. Car crashes don't threaten the existence of humanity; nuclear war does.

And because I can't see how either candidate would affect the likelihood of unfriendly AI, a meteor, a plague, or any of the other existential risks, nuclear war becomes the deciding vote in the "who's less likely to get us all killed" competition.

Admittedly, the risk of catastrophic climate change might be higher under Trump, but I've no evidence for that save the very standard left vs. right paradigm, which doesn't seem to apply all that well to Trump anyway.

Thank you for your response.

Comment author: Sable 14 November 2016 11:03:15PM 4 points [-]


Unless I am much mistaken, the reason that no one has yet used Nuclear Weapons is Mutually Assured Destruction, the idea that there can be no victor in a nuclear war. MAD holds so long as the people in control of nuclear weapons have something to lose if everything gets destroyed, and Trump has grandchildren.

Grandchildren who would burn in nuclear fire if he ever started a nuclear war.

So I am in no way sympathetic to any argument that he's stupid enough to start one. He has far too much to lose.


I believe that the sets of skills necessary to be a good president, and to be elected president, are two entirely separate things. They may be correlated, but I doubt they're correlated that highly; a popularity contest selects for popularity, after all.

So far, we have information on Trump's skill set as a businessman: immoral and unethical perhaps, but ultimately very successful.

And we have information on Trump's skill set as a Presidential Candidate: bombastic, brash, witty, politically incorrect and able to motivate large numbers of people to vote for him.

We have no information on what Trump will be like as President; that's the gamble. We can guess, but trends don't always continue, and I suspect, based on more recent data, that Trump has an inkling that now is not the time to do anything drastic.


Aside from the usual LW topics concerning existential risk (AI, climate change, etc.), my biggest concern is Islam. Mutually Assured Destruction only works when those with the nuclear weapons have something to lose, and if someone with such weapons genuinely believes that they and their family will go to heaven for using them, then MAD no longer applies.

From what meager evidence I can gather, I believe that Trump lowers the chance of such a war breaking out compared to Clinton. We've had a chance to see what Clinton's foreign policy looks like, and so far as I can tell, it isn't lowering the risk of nuclear war. It's heightening it.

Assuming other existential risks would be equal under either administration (which is a very questionable assumption, granted, and I would be happy to discuss it), that makes Trump look at the very least no worse than Clinton when it comes to existential risk.

I'd also like to note that I've been told plenty of people thought Ronald Reagan would start a nuclear war with Russia, and he did nothing of the sort. Granted, I wasn't around then, so it's secondhand information, but there you go.


I don't know about the rest of you, but I am sick of having to expend copious amounts of mental energy trying to remain as rational as I can throughout this election cycle. I've been glad to see in this thread that we LW's do, in fact, put our money where our mouths are when it comes to trying to navigate, circumvent, or otherwise evade the Mindkiller.

If you disagree with anything I have to say, please respond - if my thinking is wrong, I want your help to make it better, to make it closer to correct.

Comment author: JohnReese 07 November 2016 02:40:32AM *  8 points [-]

Greetings**,

As someone who was once described as a self-control fetishist by a somewhat hedonistic friend of mine, I can report from experience on personal strategies. As someone whose doctoral work involved attempting to build a connectionist model of self-control, I would probably be inclined to highlight a couple of things from the literature. Let me try both.

1. The psychology literature on self-control/willpower would suggest that regardless of whether the "limited resource" model of Baumeister and colleagues holds up in the long run, there are some things one could do to strengthen and replenish "willpower". I have not examined this work in relation to the current replication controversy within the behavioural sciences, but I have encountered it in a few different contexts and attempted to theorise about it, so would like to include it here. http://psycnet.apa.org/journals/psp/96/4/770/

The basic idea appears to be that affirming core values or principles, with the self as referent, would "boost" self-control. Of course, this is supposed to counteract depletion within a certain window, but not when the "self-control" system is pushed to fatigue.

Another interesting context where I have noticed it pop up is military psychology and manuals for mindset training, where soldiers are given "affirmations" - typically including the military branch's code, a set of declarative, affirmative statements about membership and the values associated with it, etc. - and this is prescribed as a means of combating fatigue in situations where focus and cognitive control are required (I need to re-read the source, but if interested, check out work by Loren Christensen, Michael Asken et al.). My old modelling work (still unpublished... working on it) would have stuff to say, and I would be happy to talk about it if that is ok and anyone is interested.

Now, from personal experience...I went through several years of extreme adventures in self-control... and self-denial. As a long-term meditator, some of it was part of the training. One could perform a little test. Perhaps try to eat a single crisp and put the bag back in the container. The body would naturally not like this as crisps tend to be tasty, and one would want more. Observing the wanting can help contain it. Similarly, observing the depletion of will can help in the sense that one can disengage from the task at hand and allow it to re-calibrate to functional levels. Otherwise, if ongoing control cannot be abandoned for any stretch of time, performing centering exercises taught to meditators, LEOs etc can help.

Exercise 1 - close your eyes, breathe deeply, and as you relax, try to detect and follow 4-5 different sounds in your environment. Do this for a couple of minutes.

Exercise 2 - close your eyes, and detect different sensations you can feel - like your fingers on the keyboard, air circulation, how warm or cold the air in the room is - and keep at it for a couple of minutes.

While not aimed at willpower as such, this should facilitate a relaxed alertness that would benefit the ongoing task.

Of course, these may not work for you... I'd be interested in finding out how it pans out if anyone wants to give them a shot. If they are already known, apologies for the redundant comment.

I also find that engaging in consistent practice of some sort - say, a few proper repetitions of a Taijiquan form per day - is (anecdotally) correlated with having a higher degree of volitional control over decisions and willpower for cognitively challenging tasks. The practice does not have to be religious or involve chants or suchlike... I suspect it has more to do with relaxed alertness and positioning oneself at the edge of a "flow" attractor basin.

**I come in peace. New member. I do not know if the protocol is to publish a post introducing oneself. If such is the case, please let me know and I will do so. It is great to read the posts and discussions on LW and I am hoping to write some soon. Live Long and Prosper!

Comment author: Sable 07 November 2016 10:25:39PM 0 points [-]

Welcome to LessWrong, and thanks for the advice. I'll take a look at what you suggested.

Comment author: gwern 06 November 2016 03:34:36PM *  6 points [-]
Comment author: Sable 07 November 2016 01:03:54AM 0 points [-]

Thanks, I'll take a look.
