
Comment author: Evan_Gaensbauer 21 September 2017 04:34:16AM 0 points

The Future of Humanity Institute recently hosted a workshop on ALLFED, the focus of Dr. Denkenberger's research.

Comment author: Onemorenickname 25 June 2017 05:37:49AM 0 points

I'm not thinking about building a product from scratch, more about coordinating a Discord server, a subreddit, or a Google Doc, for instance.

The website linked by @lifelonglearner is particularly good, even though it will be deleted by July.

Comment author: Evan_Gaensbauer 28 June 2017 07:39:31AM 0 points

There's an existing EA Discord server. Someone posted about it in the 'Effective Altruism' Facebook group, and it was the first mention I'd seen of an EA Discord anywhere, so it's probably the only or primary one in existence. There's nothing "official" about the EA Discord, but it's the biggest and best you'll find if that's what you're looking for. I can send you an invite if you want.

Comment author: Evan_Gaensbauer 13 June 2017 08:32:33AM 1 point

Arbital as a community project is on the back burner right now, though Alexei and Eliezer apparently have plans to develop it into something new in the future. Oliver Habryka and Matthew Graves are two community members working on a successor site to this one.

Comment author: komponisto 28 May 2017 07:11:52AM 21 points

For the record: at the risk of being a lonely dissenter, I strongly disagree with any notion that any of this discussion should have been censored in any way. (I was even grateful for the current impossibility of downvoting.)

Five years ago, or even two, my opinion would have been quite different. By this point, however, I have undergone a fairly massive update in the direction of thinking people are far, far too sensitive about matters of "tone" and the like. These norms of sensitivity are used to subtly restrict information flow. Ultimately Duncan and everyone else are better off knowing about the numerically-pseudonymous commenter's opinion in all of its gory detail. In fact, I would go so far as to say that the more they engage with this individual, the better; especially since the natural tendency will be to go in the opposite direction, circle the wagons, and dismiss the critic as a low-status outsider -- a behavior pattern that doesn't need more practice, IMHO.

(At any rate, the individual seems contemptuous enough of their targets that I would expect them to disengage on their own before the full value of discussion with them has been extracted.)

Comment author: Evan_Gaensbauer 02 June 2017 06:32:47AM 5 points

As someone who doesn't live in the Bay Area, has no intention of moving there in the near future, and resents the idea that anyone who wants to be part of what ought to be a worldwide rationality community needs to eventually move to the Bay Area, I'm part of the rationality and effective altruism communities, and I too have taken community members in the Bay Area to task for acting as though they can solve community coordination problems with new projects when acknowledgement of the underwhelming success or failure of prior projects never seems to take place. I do that on Facebook, though, where my civilian identity and a track record of my behaviour are on display. There are closed groups or chats where things are less open, so it's not as damaging, and even when I make a post on my own Facebook feed for over one thousand people to see, if I say something wrong, at least it's out in the open so I may face the full consequences of my mistakes.

I know lots of the people mentioned in '18239018038528017428's comment. I either didn't know those things about them, or I wouldn't characterize what I did know in such terms. Based on their claims, '18239018038528017428' seems to have more intimate knowledge than I do, and I'd guess they're also in or around the Bay Area rationality community. Yet they're on this forum anonymously, framing themselves as some underdog taking down high-status community members, when the criteria for such status haven't been established beyond "works at MIRI/CFAR", and what they're doing is just insulting and accusing regular people like the rest of us on the internet. They're not facing the consequences of their actions.

The information provided isn't primarily intended to resolve disputes, which I would think ought to be the best application of truth-seeking behaviour in this regard, and which is expected as a primary, if not the only, purpose of discourse here. The primary purposes of '18239018038528017428's comment were to express frustration, slander certain individuals, and undermine and discredit Duncan's project without evidence to back up their claims. These are at cross-purposes with truth-seeking behaviour.

There's nothing I do that gets policed for tone on the basis of sensitivity that '18239018038528017428' isn't also doing. While we're talking about norms of sensitivity, let's talk about norms for resolving interpersonal disputes. The difference between how I and lots of others in the community do it, even if the tone we use isn't always splendid or sensitive, and how '18239018038528017428' does it, is what separates people who have a non-zero respect for norms from those who don't. This is coming from me, a guy lots of people think already flouts social norms too much.

I am anti-sympathetic to '18239018038528017428', and indifferent to whether they're censored. Another reason not to resolve interpersonal disputes like this in public on a website like LessWrong is that most people in online communities don't like seeing this sort of drama dominate discourse, and in particular there are lots of us who don't care for ever more drama from one zip code being all anyone pays attention to. That defeats the purpose of this site, and saps the will of people outside the Bay Area to continue to engage with the rationality community. That's not what anyone needs. Since we've established that '18239018038528017428' is probably close enough to be part of the Berkeley rationality community already, there are plenty of channels, like private group chats, mailing lists, or other apps, where everyone involved could be connected without user '18239018038528017428' needing to out themselves in front of everyone. They could've had a friend do it.

There are plenty of ways they could've accomplished everything they wanted without being censored, and without doing it on LessWrong. When they have access to plenty of online spaces which serve the same purpose, there's no reason LW must allow that speech to the chagrin of all other users. While I get that you think a Chesterton's fence for discourse is being torn down here, I don't believe that's what's going on, and I think everyone on LessWrong who isn't personally involved deserves a say in what they are and aren't okay with being censored on this site.

Comment author: nimim-k-m 09 December 2016 07:33:30AM 0 points

But if the community is going to grow, these people are going to need some common flag to make them different from anyone else who decides to make "rationality" their applause light and gather followers.

What, you are not allowed to call yourself a rationalist if you are not affiliated with MIRI, even if you subscribe to branches of Western philosophy descended from Descartes, Kant, and the Vienna Circle...?

Comment author: Evan_Gaensbauer 20 December 2016 12:46:21PM 0 points

Viliam is right that unless we have a name for the cluster in thingspace that is the rationalist community, it's difficult to talk about. While I can understand why one might be alarmed, I think MIRI/CFAR representatives mostly want everyone to be able to identify them in a clearly delineated way, so that they and only they can claim to speak on behalf of those organizations on matters such as AI safety, existential risk reduction, or their stance on what to make of various parts of the rationality community now that they're trying to re-engage it. I think everyone can agree that confusing people, both those who identify with the LW/rationality community and those outside it, about what MIRI/CFAR actually believe regarding their missions and goals won't make anyone better off.

This is probably more important to MIRI's/CFAR's relationship with EA and academia than to people merely involved with LW or the rationality community, since what's perceived as the positions of these organizations could affect how much funding they receive, and their crucial relationships with other organizations working on the same important problems.

Comment author: Lumifer 28 November 2016 04:39:00PM 6 points

Couple of random observations.

First, with regard to privacy, I think it took a remarkably long time to sink in that "I hope you know this goes down on your permanent record". Internet activity is publishing, it is, in a large number of cases, both forever and searchable. And, of course, "anything you say can and will be used against you".

For nobodies that's not much of a problem. But for people with something to lose it is. The net effect is evaporative cooling where smart, interesting, important people either withdraw from the open 'net or curate their online presence into sterility.

Second, "let's play at discussions" vs sealioning and "randos in my mentions" -- I think a major issue here is the value of time. There is a rather obvious inverse relationship between the value of someone's time and how much that someone spends wandering the 'net and random-commenting things which catch his eye. So random comments are generally low-value and worthless -- which means that people who value their time are not only going to not make them, they are also not going to pay much attention to them.

In the golden age covered by the mists of time (aka before the Eternal September) the barriers to entry were high and the people who made it inside were both smart and similar. Thus the early 'net was a very high-trust club. But that changed. Oh, how that changed.

The issue, of course, is discovery: how do you locate and identify new interesting people in the sea of hyperactive idiots? It's an interesting problem. You can create walled gardens. You can set out bait and filter mercilessly, or just hang out in places where only interesting-to-you people are likely to wander into. You can try to follow connections from people you know piggybacking on their filtering skills. Any other ways come to mind?

Comment author: Evan_Gaensbauer 20 December 2016 12:36:17PM 1 point

For nobodies that's not much of a problem. But for people with something to lose it is. The net effect is evaporative cooling where smart, interesting, important people either withdraw from the open 'net or curate their online presence into sterility.

I'm flagging this as a really important failure mode nobody noticed. It's surprising how obvious this seems in hindsight, when I know so many former top contributors never considered it as a failure mode. They didn't anticipate that as they got older and advanced in their social circles and careers, they'd go from being nobodies to being somebodies. Scott Alexander is a psychiatrist now; he has to watch what he says on the internet more than Scott the pre-med/philosophy student needed to several years ago. Many of the legacy contributors on LW, like Eliezer Yudkowsky, Anna Salamon, Carl Shulman, Luke Muehlhauser and Andrew Critch, work for nonprofits with budgets over a million dollars a year, as part of the EA community, which seems hyper-conscious of status and prestige, and in a way that thrusts all of them into the limelight more.

Comment author: Error 27 November 2016 05:37:34PM 7 points

I think there are two forces involved here.

It’s almost as though the issue were accountability.

And I think this is one of them. Under a Hanson hat, Talk isn't about Information. That is, for most things most people say on the net, this:

Author: “Hi! I just said a thing!”

is their only genuine content, no matter what words they happen to pick to express it. The fear is that others will hold them accountable for what they said rather than what they meant. They're playing the "I just said a thing!" game, but on a personal blog they might get accosted by people playing the "let's have discussions" game, and that would be awkward because one of the conceits of the former is that it pretends to be the latter.

In short, blogs signal the wrong things to non-nerds, they're the wrong kind of conversation. Our signal is their noise.

For one rather public and hilarious example, witness Scott Alexander’s flight from LessWrong

But I think something different is going on here, and with other diasporists. Scott et al. are clearly playing the discussion game, really well. I think the second force driving people from forums to blogs and from blogs to social media is convenience.

Not having direct control of your posting environment is a trivial inconvenience. Having to run your own posting environment is also a trivial inconvenience, once the novelty of owning it wears off. Tumblr and twitter are extremely convenient. Especially twitter; you don't have to feel bad about emitting opinions without thought if the format makes depth of thought impossible! Both even make it possible to Say A Thing without saying any thing!

Never bet against convenience. Discussion moves from formats that ask more of the discussants to those that ask less. This rule is good when applied to the process of posting, and bad when applied to the content of posting, but in practice applies equally to both.

Comment author: Evan_Gaensbauer 20 December 2016 12:28:09PM 1 point

Never bet against convenience. Discussion moves from formats that ask more of the discussants to those that ask less. This rule is good when applied to the process of posting, and bad when applied to the content of posting, but in practice applies equally to both.

Well, not for everyone on LW, but certainly for some, especially those at CFAR trying to revive it, having an open place for discussion par excellence is a crucial part of learning how to enhance group rationality and coordination in an online environment. If something is a crucial part of reducing x-risk, I can imagine many thinking "convenience be damned! This needs to get done!"

So, I think a key question is: how do we make LW more convenient? Or rather, since rewriting the codebase will take a while yet, and I imagine people want to move discussion back to LW before several months go by, what can we do to make LW attractive enough to overcome the trivial inconveniences of being here rather than on social media, other blogs, etc.? What are some robust incentives we can implement to draw people back? Are there any better or additional suggestions than "generate good blog content", "ask people what they want to read about and then blog it", and "stop trolls/increase moderation/fix voting system" for making LW more magnetic?

Comment author: delton137 27 November 2016 09:23:41PM 3 points

As far as "playing the comments game", I admit I am guilty of that. At a deeper level it comes from a desire to connect with like-minded people. I may even be doing it right now.

We like to think people post because they are genuinely intellectually engaged in the material we've written, but the truth is people post comments for a myriad of different reasons, including wanting to score comment 'points' or 'karma' or to engage in a back-and-forth with a figure they admire. People like getting attention. [even shy nerdy people who are socially isolated or socially awkward, for whom commenting on an internet blog may count as a significant social engagement] As you point out, the 'comments game' motivation isn't necessarily bad in terms of the consequences -- it gets debate and discussion going. Given the importance of the topics discussed on LW and elsewhere, even low-quality discussion is better than no discussion, or shutting people out.

Obviously, though, there is a tension in the 'rational-sphere' between wanting to draw in lots of new people and wanting to maintain a sense of community among people who are on the 'same wavelength'. This tension is not at all unique to rationalism, and it typically leads to some type of fragmentation -- people who want to 'spread rationalism' and grow the movement go one way, and people who want to maintain a sense of community and maintain purity go another. I've seen the same dynamic at work in the Libertarian Party and in Christian churches. I think we have to accept that both sides have good points.

But getting back to your post, it seems like you are more on the 'we need to maintain a sense of community' side. Personally I haven't been very active in forums or online communities, but from what I have seen, maintaining a community online is possible, but it takes work: it requires considerable organization, active moderators and administrators, etc. Some platforms are more conducive to it than others. I can't really comment on the viability of LW, since I'm kinda new here, but it seems to be a good place.

As a side note, I'm not sure how much 'social trust' is required for commenting. While I might be very hesitant to talk to someone at a cocktail party for fear of annoying them, or because I don't trust them to take me seriously, I don't feel that way about commenting, or if I do, it's to a much lower extent. There is a difference - talking to someone in real life requires really interrupting them and taking their time, while writing a comment doesn't really interrupt someone as they can always ignore it if they want to. What you said about more socially privileged people being more trusting or confident is definitely true though.

Comment author: Evan_Gaensbauer 20 December 2016 12:17:05PM 1 point

LessWrong itself doesn't have as much activity as it once did, but as the first users on LessWrong have pursued their ideas on artificial intelligence and rationality through the Machine Intelligence Research Institute (MIRI) and the Center for Applied Rationality (CFAR), respectively, they have a lot more opportunity to impact the world than they did before. If those are the sorts of things you (or anyone, really) are passionate about, getting abreast of what these organizations are doing now and greatly expanding on it on LW itself can lead to jobs. Well, it'd probably help to be able to work in the United States and to have a degree to work at either CFAR or MIRI. I've known several people who've gone on to collaborate with them by starting on LW. Still, personally, I'd find the most exciting part to be shaping the future of ideas, regardless of whether it led to a job or not.

I think it's much easier to say now that becoming a top contributor on LW can be a springboard to much greater things. Caveat: whether those things are greater depends on what you want. Of course there are all manner of readers and users on LW who don't particularly pay attention to what goes on in AI safety, or at CFAR/MIRI. I shouldn't say building connections through LW is unusually likely to lead to great things if most LessWrongers might not think the outcomes so great after all. If LW became the sort of rationality community which was conducive to other slam-dunk examples of systematic winning, like a string of successful entrepreneurs, that'd make the site much more attractive.

I know several CFAR alumni who have credited the rationality skills they learned at CFAR as contributing to their success as entrepreneurs or on other projects. That's entirely different from finding the beginnings of that sort of success on this website itself. If all manner of aspiring rationalists pursued and won in all manner of domains, with the beginnings of their success attributed to LW, that'd really be something.

Oops, went on a random walk there. Anyway, my point is that even shy nerdy people...

[even shy nerdy people who are socially isolated or socially awkward, for whom commenting on an internet blog may count as a significant social engagement]

...can totally think of LW as significant social engagement if they want to, because I know dozens of people for whom, down the road, it's brought marriages, families, careers, new passions, and whole new family-like communities. That's especially true of people who attended LW meetups in the past, when those were more common.

Comment author: SatvikBeri 27 November 2016 06:07:50AM 16 points

I think this is completely correct, and have been thinking along similar lines lately.

The way I would describe the problem is that truth-tracking is simply not the default in conversation: people have a lot of other goals, such as signaling alliances, managing status games, and so on. Thus, you need substantial effort to develop a conversational place where truth tracking actually is the norm.

The two main things I see Less Wrong (or another forum) needing to succeed at this are good intellectual content and active moderation. The need for good content seems fairly self-explanatory. Active moderation can provide a tighter feedback loop pushing people towards pro-intellectual norms, e.g. warning people when an argument uses the noncentral fallacy (upvotes & downvotes work fairly poorly for this).

I'll try to post more content here too, and would be happy to volunteer to moderate if people feel that's useful/needed.

Comment author: Evan_Gaensbauer 14 December 2016 11:56:19AM 3 points

I've been using the Effective Altruism Forum more frequently than LessWrong for at least the past year. I've noticed it's not particularly heavily moderated. For one thing, effective altruism is mediated primarily through in-person communities and social media, so most of the drama occurring in EA occurs there and works itself out before it gets to the EA Forum.

Still, the EA Forum seems to have a high level of quality content without as much active moderation being necessary. The site doesn't get as much traffic as LW ever did. The topics covered are much more diverse: while LW covered things like AI safety, metacognition and transhumanism, all that and every other cause in EA is fair game for the EA Forum[1]. From my perspective, though, it's far and away host to the highest-quality content in the EA community. So, if anyone else here also finds that to be the case: what makes EA unlike LW, such that it doesn't need as many moderators on its forum?

(Personally, I expect most of the explanatory power comes from the hypothesis that the sorts of discussions which would need to be moderated are filtered out before they get to the EA Forum, and that the academic tone set in EA is conducive to people posting more detailed writing.)

[1] I abbreviate "Effective Altruism Forum" as "EA Forum", rather than "EAF", as EAF is the acronym of the Effective Altruism Foundation, an organization based out of Switzerland. I don't want people to get confused between the two.

Comment author: steven0461 27 November 2016 09:31:15PM 3 points

To me, the major advantage of social media is they make it easy to choose whose content to read. A version of LW where only my 25 favorite posters were visible would be exciting where the current version is boring. (I don't think that's a feasible change, but maybe it's another data point that helps people understand the problem.)

Comment author: Evan_Gaensbauer 28 November 2016 06:10:03AM 7 points

You can already do this. If you click on a user's profile, there will be a little box in the top right corner. Click on the button that says "add to friends" there. When you "friend" someone on LessWrong, it just means you follow them. If you go to www.lesswrong.com/r/friends, there's a feed with submissions from only the other users you're following.
