All of spiralingintocontrol's Comments + Replies

Yes, I've had many of our newcomers tell me that they showed up because of that feature (not because they went looking for in-person meetups). Discoverability matters.

Raemon 110

Wanted to share our thoughts on this so far. (We're currently pretty uncertain about how to weigh a large number of concerns together, but we agree that meetups should be discoverable and are a really important part of the LessWrong community, and I'm interested in thoughts on how to go about this.)

Throughout most of the web, you're barraged with distractions. You can't read a blogpost without seeing links to related content, notifications, etc, hyperlinks that force at least a mild decision of whether to keep reading the current thing or s...

Here's the main consideration from my point of view as an organizer:

Lesswrong.com is still the #1 source of newcomers to the San Francisco meetup.

We haven't had much chance to try out the new meetup functionality on lesserwrong.com yet, but we really need it to work. This is our community's primary source of new blood, and it is super important for that to be all set before the migration happens.

9Vaniver
I agree that meetup functionality is one of the core features of the site; it just so happened that it was the last core feature that we built, and so hasn't had as much time to be polished as the others. (For some context on our decision-making process, we thought that the site shouldn't be brought out of beta until we had all of the core features that current LW has, but also that we should bring it out of beta as soon as possible, and continue making improvements at the lesswrong.com domain.)

Agree. Something the old LessWrong did that the LW2 community page doesn't currently do was display upcoming meetups on the side of every page. In my experience as an organizer, sometimes people would stumble across the site for the first time, immediately see there was a meetup in my city, and show up. From a design perspective I know there's no way Oliver will go for having meetups display on the side of every page, but maybe we can do something to make nearby meetups comparably visible, because that does seem really important.

If people are leaving as we speak, then scaling it to the size it already is may indeed require change.

1Chris_Leong
Do you think that people are leaving at more than a reasonable rate of natural attrition? If so, why?
habryka 160

Yeah, it's both important to me that the people I see doing the most valuable work on rationality and existential risk feel comfortable posting to the platform, and that we can continue replacing the people we will inevitably lose because of natural turnover with people of equal or better quality.

This has definitely not been the case over the previous 3 years of LessWrong, and so to fix that we will require some changes. My diagnosis of why that happened is partially that the nature of how people use the internet changed (with the onset of social net...

A sense that other people are paying direct attention to you, noticing important and real aspects of you, and not rejecting those aspects.

This is rare in my experience because people mostly don't actually pay attention to each other; they just notice some vague surface-level details that are easy to remember, and not much else.

8Kaj_Sotala
I endorse this summary. The experience of being seen often translates into a feeling of safety for me, something like "people saw me for what I am and accepted me; that implies that, at least around some people, I can relax my guards and not worry so much about giving a good impression, because these people are fine with me already".

Have you looked at possible empirical bases of "raw happiness" such as Kahneman's Day Reconstruction Method?

(see also: Happiness is Not a Coherent Concept)

1Jan_Kulveit
Ad Kahneman: Yes. This is related, but my impression is the nonlinearity is somewhat more general - in DRM you are still asking people for a rating on a 1-6 affective scale (the nonlinearity would appear between the "raw affect" and the rating on the scale), and doing aggregates. Happiness is Not a Coherent Concept: thanks. It seems to me to be somewhat orthogonal - the article argues happiness breaks into several different variables, which are correlated but not identical. OK, then you can choose one of them. Or you can try to understand the variables better, and possibly construct something which encompasses all of them.
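(A minimal numeric sketch of the nonlinearity point - my illustration, not Jan's; the concave mapping and all numbers are made-up assumptions. If reported ratings are a nonlinear function of raw affect, averaging the ratings can rank two days differently than averaging raw affect would.)

```python
# Illustration (hypothetical numbers): averaging 1-6 ratings assumes the rating
# scale is linear in "raw affect". If the mapping is concave, intense highs get
# compressed, and the day with more total raw affect can score a LOWER average.
import numpy as np

def report(raw):
    """Hypothetical concave mapping from raw affect (0..100) to a 1-6 rating."""
    return 1 + 5 * np.sqrt(raw / 100)

day_a = np.array([10.0, 10.0, 90.0])  # mostly flat, one great episode
day_b = np.array([30.0, 30.0, 30.0])  # uniformly mediocre

print(day_a.mean(), day_b.mean())                  # raw affect: A > B (36.7 vs 30.0)
print(report(day_a).mean(), report(day_b).mean())  # reported:   A < B (~3.63 vs ~3.74)
```

The aggregate over reported ratings reverses the ranking that the raw-affect aggregate would give, which is the sense in which the nonlinearity sits between "raw affect" and the scale.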

I see. In that case

This can only be seen as a failure of rationality.

seems very non-obvious to me. Though of course the decline of LW1 was very bad for people not near any in-person community or involved with any of the LW diaspora online, I am not sure that it had a bad effect on the community as a whole.

But then, I'm not sure how to define "bad effect on the community as a whole," either, short of the entire thing dissolving.

3Chris_Leong
"Was very bad for people not near any in-person community or involved with any of the LW diaspora online" - I feel that there were sufficient people with an interest in improving LW that it was a failure that we didn't find a way to achieve this/that it took so long. I don't dispute that there are many people who would have had sufficient in person community that trying to fix LW might not have been relevant to their goals.

We saw that the community was in a steep decline until recently, despite the fact that many rationalists wanted the community to thrive.

Why do you believe this? Or by "community" do you mean "the LessWrong website"?

From my vantage point, it looks like the overall, in-person+online community has been growing slowly ever since I joined it ~6 years ago.

4Chris_Leong
I meant the community of people on LW was in decline, I wasn't talking about the broader rationalist community. I suppose one advantage of the decline of LW is that it led to further community formation elsewhere. But I can imagine alternative timelines in which LW never recovered. We got lucky.

Yeah, ok, "I don't have time for that" is definitely a valid response to this.

So, not having access to the original post you linked to on Facebook, here is how I would summarize the thing you're saying here. Please correct me if I'm wrong.

"Social reality" is a reference to the idea that everything that humans say to each other is mediated by political or social concerns, such that truth is being constantly warped as it passes through people's brains, without the people involved even being aware of it. The term "social reality" specifically refers to the "alternate reality" that is created
... (read more)

As a piece of general feedback: I find your writing useful but hard to understand, even though I have a general sense of how your brain works and what kinds of things you usually say. I think if I didn't have those, your writing would be pretty much impossible to understand (edit: for counterfactual me, specifically). It's very dense with jargon that means a lot to you but other people don't have context on.

My guess is you could fix this by doing something that feels like "dumbing it down almost to the point of uselessness," so tha...

4Ziz
Thanks. I have considered and tried to implement strategies like that, and I think it's better for me to do what I'm doing because: I do model-building primarily for my purposes of using models to decide things in real life. This kind of content, with a bunch of dependencies because it was made based on pulling from my entire worldview, is the content which already exists. The subset which I can link something to explain the dependencies is what I can write in the course of my life which is mostly not about writing, without spending time generating sort-of-related surface-level content. A much larger chunk of the work for doing the writing I'm doing is "free" in that I'm already doing it. And in practice that makes it something that I actually have time to do sometimes. Also, I think you'd be surprised. I've had at least one person who didn't know me personally get it, I think.

It's called "Bold Orion." (I found it in your Giant Epic Rationalist Solstice Filk spreadsheet.)

I think Bold Orion was supposed to be the "winter-themed" song for the evening. But it's subtle and doesn't explicitly use the word "winter." edit: no wait, "Old Man Winter" is in the lyrics once. But just once.

Nitpick: it might give people a slightly better sense of what it's like, but mostly it's a meta-discussion that's gone way off the rails and has little to do with what actually happened on Saturday; it's more about What Is Or Should Be The Ultimate Idea Of Solstice, Really.

4Raemon
This actually suits me fine - I'm thinking mostly of people who've never been to Solstice and aren't sure why they might want to. Talking about Platonic Ideal Solstice gives them a sense of what we're striving for, and the surrounding discussion gives a sense of where things are currently at.

Thanks for sharing. As is often the case, I find myself agreeing with you on most concrete points but unhappy with the overly negative tone you're taking. I hope that none of the core organizers are reading this now, because if I were them I'd want to take some more time to decompress before diving into criticism this harsh.

So, on to specific points:

I agree that this year was pretty scattershot, and didn't feel like the arc pulled together well. Have you talked to next year's organizer about helping out with creative direction? Running ...

3PDV
I would like to help organize/creatively direct, and put my name in for this year. Interpersonal reasons mean that helping with next year is probably not an option for me. Perhaps for 2019. It's also about time that the Bay Solstice experience some mitosis, since we've outgrown every reasonably-priced space; last year and this I've considered what I'd want in running a separate fairly large Solstice, but thing #1 is the Bayesian Choir performing, and I expect that would be a sticking point. For speeches, I agree that getting very high quality needs original speeches. But I also thought that the speeches shared between 2016 and 2017 were less practiced and less heartfelt, which seems like a different problem. I agree that there is no obvious intervention to deal with applause. I find it very frustrating since it is obvious to me that it's out of step with the arc of the night, and I don't know how to convey that feeling to everyone else or why they don't have it.

There are some guidelines on what sort of content belongs on the frontpage.

I think, based on these guidelines, that the issue with this particular post would be "crowdedness" - people here have discussed this topic a lot already.

1RST
Thanks for the information.

Aw man, and here I assumed it was because you were avoiding giving LW the addictive quality of a normal social network.

3Raemon
Heh, I do also think that's important. (There's some very fine line of "exactly the right amount of addictiveness", as well as just making it easy for people to be informed about the things they want to be informed about. My current take is that the ideal Notifications tab should say something but not do the Bright Red Thing that most other sites do)

Because it's spoilers? ... and not in rot13.

2gjm
That's certainly why I downvoted it. The OP invited readers to make a guess, and Eneasz's comment made it much harder to make a guess before seeing the answer.
2Raemon
Ah, if it was intentional I can just retract my upvote.

I assume you have read Myth of the Framework. Doesn't Popper himself emphasize that it's not necessary to share an epistemological framework with someone, nor explicitly verbalize exactly how it works (since doing that is difficult-to-impossible), to make intellectual progress?

-2Elliot_Temple
Verbalizing your entire framework/worldview is too hard, but CR manages to verbalize quite a lot of epistemology. Does LW have verbalized epistemology to rival CR, which is verbalized in a reasonably equivalent kinda way to e.g. Popper's books? I thought the claim was that it does. If you don't have an explicit epistemology, may I recommend one to you? It's way, way better than nothing! If you stick with unverbalized epistemology, it really lets in bias, common sense, intuition, cultural tradition, etc, and makes it hard to make improvements or have discussions.

It's kind of a convergent thing to do, among people of a certain level of success and awareness of how the world works.

Relatedly, my parents helped found an elementary school that my oldest sibling was in the founding class of. That school is now about as expensive per-year as out-of-state tuition at a nice university. (Still a nice school, though.) It has about 200 students at any given time.

My takeaway from this is that scaling schools at all at a reasonable cost is really hard.

There’s a reason why the culture that produced an outsized number of science Nobelists is not an engineering culture, but a rule of law one.

What do you mean by an engineering culture, and how is it distinct from the "rule of law" culture you described earlier?

3Vaughn Papenhausen
Based on this quote, it seems to me that "engineering culture" is something like "don't worry about the rules, just do whatever works." Think "bodging". This contrasts with the rule of law culture, which is something like "in conflicts between doing what works and following the rules, always follow the rules."

I think most rationalists are way more in favor of tribal stuff than you're implying. The Solstice is just one example.

We're not shunning it because it's what normies use; it's just hard to start it from scratch. If you start doing these things, I bet a lot of people will be interested in them.

3. Safety versus standards.

The dichotomy feels very specific to companies. I don't see why most communities couldn't have both, with people simply having various levels of engagement.
Most communities have a lot of idlers and lurkers.

I strongly disagree. All communities have to face up to this tradeoff in one way or another. Just as one example, the LW community has been low-key having this debate for a long time now; "should we be about Being Real Ambitious or just focus on being nice to the people here?"

Hobby communities have to think...

I'm missing a lot of context here. How is this post connected to the other things you're referring to as a "cancer" and what is wrong with those things and this post?

Meta note: I don't like that your comment has a lot of "this is bad" but not a lot of why.

edit: To be clear, I'm genuinely curious. This post is also extremely confusing and bizarre, so I would appreciate hearing your take on it as someone who is skeptical but also seems to understand what it's pointing at.

2PDV
Insofar as I understand what it's pointing at, it is pointing at something I'd paraphrase as "logical thought is overrated". There's nuance to what exactly it's being pushed aside in favor of, but that's the core piece I object to. I object to it the most strongly because it's from an intellectual lineage that draws adherents mostly from the rationalist community and is based around disparaging logical thought and a naive view of truth in favor of various woo-y "instinct/social reasoning/tradition/spirituality without understanding is good" frameworks. And while there's value to system 1 reasoning, I think that A) CFAR is handling that quite fine with more care and purpose and B) Anything that hooks tightly to system 1 without being moderated by the strong endorsement of system 2 should be treated as BADSCARYATOMICFIRESPIDERS, even while trying to extract value from it.

I'd actually be very interested in hearing what specific community this is, as a case study in How To Do This Right.

Feedback: This link would be a lot more useful if it had any concrete context or commentary related to the link. "A common fallacy among programmer-types" could be anything, not even related to programming, and the link title is even more vague. I clicked on it only to realize I'd already read it several years ago.

Normally, facial expressions and body language and tone of voice are credible signals. They are hard to fake for most people, which creates trust.

If you know that a specific person is good at faking those signals, they instantly become less trustworthy. They could have your best interests at heart, sure, but how would you know?

For me, the world is divided into roughly two groups:

1. People who I do not trust enough to engage in this kind of honest intellectual debate with, because our interests and values are divergent and all human communication is political.

2. Close friends, who, when we disagree, I engage in something like "double crux" naturally and automatically, because it's the obvious general shape of how to figure out what we should do.

The latter set currently contains about two (2) people.

This is why I don't do explicit double crux.

+1, I feel like this post is getting at something useful, but I'm too confused by the use of terminology to understand it.