I may be late to the game here, but I found this chapter much less effective than the previous four, and I updated hard from "This book might resonate outside the LW community" towards "This will definitely not resonate outside the LW community." Maybe the community is the target, full stop, but that seems unnecessarily modest. The thing that bothered me most was that the conversations are full of moments that feel like Eliezer unnecessarily personalizing, which reads as bragging, e.g.:

"When I try to explain a phenomenon, I’m also implicitly relying on my ability to use a technique like “don’t even start to rationalize,” which is a skill that I started practicing at age 15 and that took me a decade to hone to a reliable and productive form."

"But from my perspective, there’s no choice."

"From the perspective of somebody like me,"

I'm having a hard time describing exactly why I found these so off-putting, but I think it has to do with the ways LW gets described as a cult. The more I think about it, the more I think that this is a problem with the framing of conversations in the first place: it's hard to avoid looking like a git when you pick three examples of being smarter than other people, even with the caveat up front that you know you're being unfair about it.

This chapter also felt much more densely packed with Rationalist vernacular than the earlier ones, e.g. "epistemic harm", "execute the '[...]' technique", "obvious failure mode", "doing the hard cognitive labor", and to a lesser extent, Silicon Valley vernacular (most of part iii). Sometimes you have to introduce new terms, but each one burns inferential distance points, and sometimes even weirdness points.

Flipping this around: this seems like yet another data point in favor of investing at least moderately in signalling. Heuristically, people won't distinguish your lack of caring about signalling from your lack of ability to signal.

Sure. The biggest one is that when someone has poor social skills, we treat that as a thing to tolerate rather than as a thing to fix. E.g. someone shows up to a meetup and doesn't really get how conversation flow works: when it's time to talk and when it's time to listen, how to tell the difference between someone being interested in what ze has to say and someone just being polite. We're welcoming, at least outwardly, and encourage that person to keep showing up, so ze does. The people who are disinclined to be ranted at and who have the social skills to avoid the person learn to do so, but we don't seem to make any effort to help the person become less annoying. So ze continues to inflict zirself on newcomers who haven't learned better, and they walk away with the impression that that's what our community is.

Which is sad, because we spend plenty of time encouraging self-improvement in thinking skills. If we siphoned some effort from "notice you're confused" to "notice your audience", we could encourage self-improvement in social skills as well. But since we don't treat social skills as fixable, they don't get fixed.

2a here seems like a major issue to me. I've had an essay brewing for a couple of months about how the range of behaviors we tolerate affects who is willing to join the community. It's much easier to see the people who join than the people who are pushed away.

I argue that the way we are currently inclusive goes beyond being a safe space for weirdness and extends into being anti-normal, in a way that frightens off anyone who already has strong mainstream social skills, and that we can and should encourage social skill development while remaining a safe space.

If there's interest, I'll finish writing the longer-form argument.

This crystallization really resonated with me. I've recently noticed a divide in social norms, where some people seem to perceive requests for more information as hostile (an attack on their status) rather than as a sign of interest. "I do not understand your world view, tell me more" can translate as "I like you and am interested in understanding you better", or as "you are obviously wrong, please show me some weakness so that I can show how much smarter I am." Relatedly, consider:

A: I'm working on X.

B: I've heard Y about X, what do you think?

Is B mentioning Y a sign of belonging to A's in[terest]-group, and a bid for closeness? Or is B bidding for status, trying to show how much better informed B is?

Obviously I've removed all the interesting subtlety from my examples here, and it's easy to imagine a conversation in which the hypothetical questions have obvious answers. It's also possible for B to be unambiguous in one direction or the other; this is a useful social skill in itself. My point is that there's also overlap: cases where B intends to bid for closeness but is interpreted as bidding for status. And that's a function of A's assumptions, not just about B but about how interactions in general are supposed to be structured.

"Comment epistemic status" would work.

I think I can make this! Any tips for identifying the group?

Data point: I would love to come to something like this, but I'm out of town.

Stop reading this.

Did you stop? Probably not. So I don't think the difficulty is avoiding compliance with commands in general; rather, it's switching between the mental modes of "complying" and "not complying" under time pressure.

I'm also going, and would also like to meet other LW-ers. Let's wander towards Grendel's Den around 6.

If a couple of people reply to this, I'll come up with more explicit logistics, but I can't plan at 1am.
