
Comment author: nimim-k-m 09 December 2016 07:33:30AM 0 points [-]

But if the community is going to grow, these people are going to need some common flag to make them different from anyone else who decides to make "rationality" their applause light and gather followers.

What, you are not allowed to call yourself a rationalist if you are not affiliated with MIRI, even if you subscribe to branches of Western philosophy descended from Descartes, Kant, and the Vienna Circle...?

Comment author: Evan_Gaensbauer 20 December 2016 12:46:21PM 0 points [-]

Viliam is right that unless we have a name for the cluster in thingspace that is the rationalist community, it's difficult to talk about. While I can understand why one might be alarmed, I think MIRI/CFAR representatives mostly want everyone to be able to identify them in a clearly delineated way, so that they and only they can claim to speak on behalf of those organizations on matters such as AI safety, existential risk reduction, or their stance on what to make of various parts of the rationality community now that they're trying to re-engage it. I think everyone can agree that it won't make anyone better off to confuse people, whether they identify with the LW/rationality community or are outside of it, about what MIRI/CFAR actually believe re: their missions and goals.

This is probably more important to MIRI's/CFAR's relationship to EA and academia than to people merely involved with LW/the rationalist community, since what's perceived as the positions of these organizations could affect how much funding they receive, and their crucial relationships with other organizations working on the same important problems.

Comment author: Lumifer 28 November 2016 04:39:00PM 6 points [-]

Couple of random observations.

First, with regard to privacy, I think it took a remarkably long time to sink in that "I hope you know this goes down on your permanent record". Internet activity is publishing: it is, in a large number of cases, both forever and searchable. And, of course, "anything you say can and will be used against you".

For nobodies that's not much of a problem. But for people with something to lose it is. The net effect is evaporative cooling where smart, interesting, important people either withdraw from the open 'net or curate their online presence into sterility.

Second, "let's play at discussions" vs sealioning and "randos in my mentions" -- I think a major issue here is the value of time. There is a rather obvious inverse relationship between the value of someone's time and how much that someone spends wandering the 'net and random-commenting things which catch his eye. So random comments are generally low-value and worthless -- which means that people who value their time are not only going to not make them, they are also not going to pay much attention to them.

In the golden age covered by the mists of time (aka before the Eternal September) the barriers to entry were high and the people who made it inside were both smart and similar. Thus the early 'net was a very high-trust club. But that changed. Oh, how that changed.

The issue, of course, is discovery: how do you locate and identify new interesting people in the sea of hyperactive idiots? It's an interesting problem. You can create walled gardens. You can set out bait and filter mercilessly, or just hang out in places where only interesting-to-you people are likely to wander into. You can try to follow connections from people you know piggybacking on their filtering skills. Any other ways come to mind?

Comment author: Evan_Gaensbauer 20 December 2016 12:36:17PM 1 point [-]

For nobodies that's not much of a problem. But for people with something to lose it is. The net effect is evaporative cooling where smart, interesting, important people either withdraw from the open 'net or curate their online presence into sterility.

I'm flagging this as a really important failure mode nobody noticed. It strikes me as very surprising that something which seems so obvious in hindsight wasn't considered a failure mode by so many former top contributors I know. They didn't anticipate that as they got older and advanced in their social circles and their careers, they'd go from being nobodies to being somebodies. Scott Alexander is a psychiatrist now; he has to watch what he says on the internet more than Scott the pre-med/philosophy student needed to watch what he said several years ago. Many of the legacy contributors on LW, like Eliezer Yudkowsky, Anna Salamon, Carl Shulman, Luke Muehlhauser and Andrew Critch, work for nonprofits with budgets over a million dollars a year, as part of the EA community, which seems hyper-conscious of status and prestige, and which in a way thrusts all of them into the limelight more.

Comment author: Error 27 November 2016 05:37:34PM *  7 points [-]

I think there are two forces involved here.

It’s almost as though the issue were accountability.

And I think this is one of them. Under a Hanson hat, Talk isn't about Information. That is, for most things most people say on the net, this:

Author: “Hi! I just said a thing!”

is their only genuine content, no matter what words they happen to pick to express it. The fear is that others will hold them accountable for what they said rather than what they meant. They're playing the "I just said a thing!" game, but on a personal blog they might get accosted by people playing the "let's have discussions" game, and that would be awkward because one of the conceits of the former is that it pretends to be the latter.

In short, blogs signal the wrong things to non-nerds, they're the wrong kind of conversation. Our signal is their noise.

For one rather public and hilarious example, witness Scott Alexander’s flight from LessWrong

But I think something different is going on here, and with other diasporists. Scott et al. are clearly playing the discussion game, and really well. I think the second force driving people from forums to blogs and from blogs to social media is convenience.

Not having direct control of your posting environment is a trivial inconvenience. Having to run your own posting environment is also a trivial inconvenience, once the novelty of owning it wears off. Tumblr and Twitter are extremely convenient. Especially Twitter; you don't have to feel bad about emitting opinions without thought if the format makes depth of thought impossible! Both even make it possible to Say A Thing without saying any thing!

Never bet against convenience. Discussion moves from formats that ask more of the discussants to those that ask less. This rule is good when applied to the process of posting, and bad when applied to the content of posting, but in practice applies equally to both.

Comment author: Evan_Gaensbauer 20 December 2016 12:28:09PM 1 point [-]

Never bet against convenience. Discussion moves from formats that ask more of the discussants to those that ask less. This rule is good when applied to the process of posting, and bad when applied to the content of posting, but in practice applies equally to both.

Well, not for everyone on LW, but certainly for some, especially those at CFAR trying to revive it, having an open place for discussion par excellence is a crucial part of learning how to enhance group rationality and coordination in an online environment. If something is a crucial part of reducing x-risk, I can imagine many thinking "convenience be damned! This needs to get done!"

So, I think a key question is: how do we make LW more convenient? Or rather, since rewriting the codebase will take a while yet, and I imagine people want to move discussion back to LW before several months go by, what can we do to make LW attractive enough to overcome the trivial inconveniences of being here rather than on social media, other blogs, etc.? What are some robust incentives we can implement to draw people back? Are there any better suggestions we can generate for making LW more magnetic than "generate good blog content", "ask people what they want to read about and then blog it", and "stop trolls/increase moderation/fix voting system"?

Comment author: delton137 27 November 2016 09:23:41PM 3 points [-]

As far as "playing the comments game", I admit I am guilty of that. At a deeper level it comes from a desire to connect with like-minded people. I may even be doing it right now.

We like to think people post because they are genuinely intellectually engaged with the material we've written, but the truth is people post comments for a myriad of reasons, including wanting to score comment 'points' or 'karma' or to engage in a back-and-forth with a figure they admire. People like getting attention. [Even shy nerdy people who are socially isolated or socially awkward, for whom commenting on an internet blog may count as a significant social engagement.] As you point out, the 'comments game' motivation isn't necessarily bad in terms of the consequences -- it gets debate and discussion going. Given the importance of the topics discussed on LW and elsewhere, even low-quality discussion is better than no discussion, or shutting people out.

Obviously, though, there is a tension in the 'rational-sphere' between wanting to draw in lots of new people and wanting to maintain a sense of community, of people who are on the 'same wavelength'. This tension is not at all unique to rationalism, and it typically leads to some type of fragmentation: people who want to 'spread rationalism' and grow the movement go one way, and the people who want to maintain a sense of community and maintain purity go another. I've seen the same dynamic at work in the Libertarian Party and in Christian churches. I think we have to accept that both sides have good points.

But getting back to your post, it seems like you are more on the 'we need to maintain a sense of community' side. Personally I haven't been very active in forums or online communities, but from what I have seen, maintaining a community online is possible, though it takes work -- it requires considerable organization, active moderators and administrators, etc. Some platforms are more conducive to it than others. I can't really comment on the viability of LW, since I'm kinda new here, but it seems to be a good place.

As a side note, I'm not sure how much 'social trust' is required for commenting. While I might be very hesitant to talk to someone at a cocktail party for fear of annoying them, or because I don't trust them to take me seriously, I don't feel that way about commenting, or if I do, it's to a much lesser extent. There is a difference: talking to someone in real life means really interrupting them and taking their time, while writing a comment doesn't really interrupt anyone, since they can always ignore it if they want to. What you said about more socially privileged people being more trusting or confident is definitely true, though.

Comment author: Evan_Gaensbauer 20 December 2016 12:17:05PM *  1 point [-]

LessWrong itself doesn't have as much activity as it once did, but the first users on LessWrong have pursued their ideas on artificial intelligence and rationality through the Machine Intelligence Research Institute (MIRI) and the Center for Applied Rationality (CFAR), respectively, and they have a lot more opportunity to impact the world than they did before. If those are the sorts of things you, or anyone really, are passionate about, then getting abreast of what these organizations are doing now and greatly expanding on it on LW itself can lead to jobs. Well, it'd probably help to be able to work in the United States and also have a degree to work at either CFAR or MIRI. I've known several people who've gone on to collaborate with them by starting on LW. Still, though, personally I'd find the most exciting part to be shaping the future of ideas, regardless of whether it led to a job or not.

I think it's much easier to say now that becoming a top contributor on LW can be a springboard to much greater things. Caveat: whether those things are greater depends on what you want. Of course there are all manner of readers and users on LW who don't particularly pay attention to what goes on in AI safety, or at CFAR/MIRI. I shouldn't say building connections through LW is unusually likely to lead to great things if most LessWrongers might not think the outcomes so great after all. If LW became the sort of rationality community which was conducive to other slam-dunk examples of systematic winning, like a string of successful entrepreneurs, that'd make the site much more attractive.

I know several CFAR alumni have credited the rationality skills they learned at CFAR as contributing to their success as entrepreneurs or on other projects. That's entirely different from finding the beginnings of that sort of success on this website itself. If all manner of aspiring rationalists pursued and won in all manner of domains, with the beginnings of their success attributed to LW, that'd really be something else.

Oops, went on a random walk there. Anyway, my point is that even shy nerdy people...

[even shy nerdy people who are socially isolated or socially awkward, for whom commenting on an internet blog may count as a significant social engagement]

...can totally think of LW as significant social engagement if they want to, because I know dozens of people for whom, down the road, it's brought marriages, families, careers, new passions, and whole new family-like communities. That's really more common among people who attended LW meetups in the past, when those were more common.

Comment author: SatvikBeri 27 November 2016 06:07:50AM 16 points [-]

I think this is completely correct, and have been thinking along similar lines lately.

The way I would describe the problem is that truth-tracking is simply not the default in conversation: people have a lot of other goals, such as signaling alliances, managing status games, and so on. Thus, you need substantial effort to develop a conversational place where truth-tracking actually is the norm.

The two main things I see Less Wrong (or another forum) needing to succeed at this are good intellectual content and active moderation. The need for good content seems fairly self-explanatory. Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this).

I'll try to post more content here too, and would be happy to volunteer to moderate if people feel that's useful/needed.

Comment author: Evan_Gaensbauer 14 December 2016 11:56:19AM 3 points [-]

I've been using the Effective Altruism Forum more frequently than LessWrong for at least the past year. I've noticed it's not particularly heavily moderated. One factor is that effective altruism is mediated primarily through in-person communities and social media, so most of the drama occurring in EA occurs there, and works itself out before it gets to the EA Forum.

Still, though, the EA Forum seems to have a high level of quality content, but without as much active moderation being necessary. The site doesn't get as much traffic as LW ever did. The topics covered are much more diverse: while LW covered things like AI safety, metacognition and transhumanism, all that and every other cause in EA is fair game for the EA Forum[1]. From my perspective, though, it's far and away host to the highest-quality content in the EA community. So, if anyone else here also finds that to be the case: what makes EA unlike LW in not needing as many moderators on its forum?

(Personally, I expect most of the explanatory power comes from the hypothesis that the sorts of discussions which would need to be moderated are filtered out before they get to the EA Forum, and that the academic tone set in EA conduces people to post more detailed writing.)

[1] I abbreviate "Effective Altruism Forum" as "EA Forum", rather than "EAF", as EAF is the acronym of the Effective Altruism Foundation, an organization based out of Switzerland. I don't want people to get confused between the two.

Comment author: steven0461 27 November 2016 09:31:15PM 3 points [-]

To me, the major advantage of social media is they make it easy to choose whose content to read. A version of LW where only my 25 favorite posters were visible would be exciting where the current version is boring. (I don't think that's a feasible change, but maybe it's another data point that helps people understand the problem.)

Comment author: Evan_Gaensbauer 28 November 2016 06:10:03AM 7 points [-]

You can already do this. If you click on a user's profile, there will be a little box in the top right corner. Click on the button that says "add to friends" there. When you "friend" someone on LessWrong, it just means you follow them. If you go to www.lesswrong.com/r/friends, there's a feed with submissions from only the other users you're following.

[Link] Be Like Stanislov Petrov

0 Evan_Gaensbauer 28 November 2016 06:04AM
Comment author: John_Maxwell_IV 27 November 2016 01:02:16PM *  25 points [-]

I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on Less Wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.

Participating in online discussions tends to reduce one's attention span. There's the variable reinforcement factor. There's also the fact that a person who comes to a discussion earlier gets more visibility. This incentivizes checking for new discussions frequently. (These two factors exacerbate one another.)

These effects are so strong that if I stay away from the internet for a few days ("internet fast"), my attention span increases dramatically. And if I posted comments online yesterday, it's hard for me to focus today -- there's always something in the back of my mind that wants to check & see if anyone's responded. I need to refrain from making new comments for several days before I can really focus.

Lots of people have noticed that online discussions sap their productivity this way. And due to the affect heuristic, they downgrade the importance & usefulness of online discussions in general. I think this inspired Patri's Self-Improvement or Shiny Distraction post. Like video games, Less Wrong can be distracting... so if video games are a distracting waste of time, Less Wrong must also be, right?

Except that doesn't follow. Online content can be really valuable to read. Bloggers don't have an incentive to pad their ideas the way book authors do. And they write simply instead of unnecessarily obfuscating like academics. (Some related discussion.)

Participating in discussions online is often high leverage. The ratio of readers to participants in online discussions can be quite high. Some numbers from the LW-sphere that back this up:

  • In 2010, Kevin created a thread where he asked lurkers to say hi. The thread generated 617 comments.

  • 77% of respondents to the Less Wrong survey have never posted a comment. (And this is a population of readers who were sufficiently engaged to take the survey!)

  • Here's a relatively obscure comment of mine that was voted to +2. But it was read by at least 135 logged-in users. Since 54+% of the LW readership has never registered an account, this obscure comment was likely read by 270+ people (the arithmetic is sketched after this list). A similar case study: a deeply threaded comment posted 4 days after a top-level post, read by at least 22 logged-in users.

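To make that extrapolation explicit, here is a rough sketch of the arithmetic, under the assumption (not directly confirmed by the survey) that unregistered readers encounter a given comment at roughly the same rate as logged-in users:

$$ \text{total readers} \;\geq\; \frac{\text{logged-in readers}}{1 - 0.54} = \frac{135}{0.46} \approx 293, $$

so simply doubling the logged-in count to get "270+" is, if anything, a conservative lower bound.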
Based on this line of reasoning, I'm currently working on the problem of preserving focus while participating in online discussions. I've got some ideas, but I'd love to hear thoughts from anyone who wants to spend a minute brainstorming.

Comment author: Evan_Gaensbauer 28 November 2016 05:23:53AM 6 points [-]

I think this inspired Patri's Self-Improvement or Shiny Distraction post. Like video games, Less Wrong can be distracting... so if video games are a distracting waste of time, Less Wrong must also be, right?

I've been thinking about Patri's post for a long time, because I've found the question puzzling. The friends of mine who feel similarly to Patri are the ones who look to rationality as a tool for effective egoism/self-care, entrepreneurship insights, and lifehacks. They're focused on individual rationality, and on improved heuristics for improving things in their own lives fast. Doing things by yourself allows for quicker decision-making and tighter feedback loops; it's easier to tell sooner whether what you're doing works.

That's often referred to as instrumental rationality, whereas the Sequences tended to focus more on epistemic rationality. But I think a lot of what Eliezer wrote about how to create a rational community, one which can go on to form project teams and build intellectual movements, was instrumental rationality. It's just taken longer to tell whether that's succeeded.

Patri's post was written in 2010. A lot has changed since then. The Future of Life Institute (FLI) is an organization which, along with the book Superintelligence, is responsible for boosting AI safety into the mainstream. FLI was founded by community members whose meeting originated on LessWrong, so that's value added to advancing AI safety that wouldn't have existed if LW had never started. CFAR didn't exist in 2010. Effective altruism (EA) has blown up, and I think LW doesn't get enough credit for generating the meme pool which spawned it. Whatever one thinks of EA, it has achieved measurable progress on its own goals, like how much money is moved, not only through GiveWell but also by a foundation with an endowment of over $9 billion.

What I've read is the LW community aspiring to do better than how science is currently done, in new ways, or to apply rationality to new domains and make headway on one's goals. Impressive progress has been made on many community goals.

Comment author: Viliam 24 June 2016 07:51:01AM 1 point [-]

Posting links to TFP and having Eugine use sockpuppets to downvote everyone who provides a different opinion... I guess it would be time for all non-NR LessWrong readers (approximately 99% of them) to finally pack their bags and leave. :(

Comment author: Evan_Gaensbauer 28 June 2016 12:47:06PM 2 points [-]

Yeah, I'm not going to be posting links from TFP, then. Thanks for the feedback.

Comment author: Sable 23 June 2016 12:35:23AM 5 points [-]

Out of curiosity: because rationalists are supposed to win, are we (on average) below our respective national averages for things which are obviously bad (the low-hanging fruit)?

In other words, are there statistics somewhere on rationalist or LessWrong fitness/weight, smoking/drinking, credit card debt, etc.?

I'd be curious to know how well the higher-level training affects these common failure modes.

Comment author: Evan_Gaensbauer 24 June 2016 10:12:59AM 2 points [-]

I've wondered this too. In particular, for several years, at least among people I know, people have constantly questioned the level of rationality in our community, particularly our 'instrumental rationality'. This is summed up by the question: "if you're so smart, why aren't you rich?" That is, if rationalists are so rational, why aren't they leveraging their high IQs and their supposed rationality skills to perform in the top percentiles on all sorts of metrics of coveted success? Even by self-reports, such as the LW survey(s), they don't appear to. However, I've thought of an inverse question: "if you're stupid, why aren't you poor?" I.e., while rationalists might not all be peak-happiness millionaires or whatever, we might also ask what the rates of (socially perceived) failure are, and how they compare to other cohorts, communities, reference classes, etc.

You're the first person I've seen to pose this question. There might have been others, though.
