Comment author: Alexandros 27 November 2016 10:40:52AM *  66 points [-]

Hi Anna,

Please consider a few gremlins that are weighing down LW currently:

  1. Eliezer's ghost -- He set the culture of the place, his posts are its central material, he has punctuated its existence with his explosions (and refusal to apologise), and then upped and left the community without actually acknowledging that his experiment (well-kept gardens etc.) had failed. As far as I know he is still the "owner" of this website and retains ultimate veto on a bunch of stuff. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner -- is it them?), who the moderators are, or who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no team, and at least no team is an invitation for a team to step up.

  2. the no-politics rule (related to #1) -- We claim to have some of the sharpest thinkers in the world, but for some reason we shun discussing politics. Too difficult, we're told. A mindkiller! This cost us Yvain/Scott, who cited it as one of his reasons for starting slatestarcodex, which now dwarfs LW. Oddly enough, I recently saw it linked from the front page of realclearpolitics.com, which means that not only has discussing politics not harmed SSC, it may actually be drawing in people who care about genuine insights into an extremely complex space that is of very high interest.

  3. the "original content"/central hub approach (related to #1) -- This should have been an aggregator since day 1. Instead it was built as a "community blog". In other words, people had to host their stuff here or not have it discussed here at all. This cost us Robin Hanson on day 1, which should have been a pretty big warning sign.

  4. The codebase -- this website carries tons of complexity inherited from the Reddit codebase. Weird rules about responding to downvoted comments have been implemented in there, and nobody can make heads or tails of it. Use something modern, and make it easy to contribute to. (Telescope seems decent these days.)

  5. Brand rust. LessWrong is now kinda like MySpace or Yahoo. It used to be cool, but once a brand takes a turn for the worse, it's really hard to turn around. People have painful associations with it (basilisk!). It needs a burning of ships, a clear focus on the future, and as much support as possible from as many interested parties as possible, but only to the extent that they don't dilute the focus.

In the spirit of the above, I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and dissipating yet another wave of people who want to do something about LW with vague promises of something nice in the future (that still suffers from problem #1, AFAICT) is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining ownership and a clear plan. A post by EY himself recognising that his vision for LW 1.0 failed and passing the baton to a generally accepted BDFL would be nice, but I'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail; strong writers enjoy their independence. LW as an aggregator first (with perhaps the ability to host content if people wish to, like HN) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think if you want to unify the community, what needs to be done is the creation of an HN-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL; input from the prominent writers in the community (Scott, Robin, Eliezer, Nick Bostrom, others); and the archiving of the current lesswrong.com in favour of that new aggregator. But even if it's something else, it will not succeed without three basic ingredients: clear ownership, dedicated leadership, and support as broad as possible for a simple, well-articulated vision. LessWrong tried to be too many things with too little in the way of backing.

Comment author: AnnaSalamon 27 November 2016 10:29:20PM *  35 points [-]

Re: 1, I vote for Vaniver as LW's BDFL, with authority to decree community norms (re: politics or anything else), decide on changes to the site, conduct fundraisers on behalf of the site, etc. (He already has the technical admin powers, and has been playing some of this role in a low-key way; but I suspect he's been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)

Anyone want to join me in this, or else make a counterproposal?

Comment author: sarahconstantin 27 November 2016 10:14:51AM 35 points [-]

I applaud this and am already participating by crossposting from my blog and discussing.

One thing that I like about using LW as a home base is that everyone knows what it is, for good and for ill. This has the practical benefit of not needing further software development before we can get started on the hard problem of attracting high-quality users. It also has the signaling benefit of indicating clearly that we're "embracing our roots", including reclaiming the negative stereotypes of LessWrongers. (Nitpicky, nerdy, utopian, etc.)

I am unusual in this community in taking "the passions" really seriously, rather than identifying as being too rational to be caught up in them. One of my more eccentric positions has long been that we ought to be a tribe. For all but a few unusual individuals, humans really want to belong to groups. If the group of people who explicitly value reason is the one group that refuses to have "civic pride" or similar community-spirited emotions, then this is not good news for reason. Pride in who we are as a community, pride in our distinctive characteristics, seems to be a necessity in a cluster of people who aspire to do better than the general public; it's important to have ways to socially reinforce and maintain that higher standard.

Having a website of "our" own is useful for practical purposes, but it also has the value of reinforcing an online locus for the community, which defines, unifies, and distinguishes us. Ideally, our defining "place" will also be a good website where good discussion happens. I think this is a better outcome than group membership being defined by "what parties in Berkeley you get invited to" or "whose FB-friends list you're on" or the other informal social means that have been used as stopgap proxy measures for ingroupiness. People are going to choose demarcations. Why not try to steer the form of those demarcations towards something like "virtue"?

Comment author: alyssavance 27 November 2016 10:39:26AM 32 points [-]

I appreciate the effort, and I agree with most of the points made, but I think resurrect-LW projects are probably doomed unless we can get a proactive, responsive admin/moderation team. Nick Tarleton talked about this a bit last year:

"A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating." (http://lesswrong.com/lw/n0l/lesswrong_20/cy8e)

That's obviously problematic, but I think it goes way beyond just contributing code. As far as I know, right now, there's no one person with both the technical and moral authority to:

  • set the rules that all participants have to abide by, and enforce them
  • decide principles for what's on-topic and what's off-topic
  • receive reports of trolls, and warn or ban them
  • respond to complaints about the site not working well
  • decide what the site features should be, and implement the high-priority ones

Pretty much any successful subreddit, even smallish ones, will have a team of admins who handle this stuff, and who can be trusted to look at things that pop up within a day or so (at least collectively). The highest intellectual-quality subreddit I know of, /r/AskHistorians, has extremely active and rigorous moderation, to the extent that a majority of comments are often deleted. Since we aren't on Reddit itself, I don't think we need to go quite that far, but there has to be something in place.

Comment author: alyssavance 03 December 2016 02:02:03AM *  27 points [-]

This is just a guess, but I think CFAR and the CFAR-sphere would be more effective if they focused more on hypothesis generation (or "imagination", although that term is very broad). E.g., a year or so ago, a friend of mine in the Thiel-sphere proposed starting a new country by hauling nuclear power plants to Antarctica and then just putting heaters on the ground to melt all the ice. As it happens, I think this is a stupid idea (hot air rises, so the newly heated air would just blow away, pulling in more cold air from the surroundings). But it is an idea, and the same person came up with (and implemented) a profitable business plan six months or so later. I can imagine HJPEV coming up with that idea, or Elon Musk, or von Neumann, or Google X; I don't think most people in the CFAR-sphere would -- it's just not the kind of thing I think they've focused on practicing.

Comment author: SatvikBeri 27 November 2016 05:18:43PM 27 points [-]

On the idea of a vision for the future: if I were starting a site from scratch, I would love to see it focus on something like "discussions on any topic, but with extremely high intellectual standards". Some ideas:

  • In addition to allowing self-posts, a major type of post would be a link to a piece of content with an initial seed for discussion
  • Refine upvotes/downvotes to make it easier to provide commentary on a post, e.g. "agree with the conclusion but disagree with the argument", or "accurate points, but ad-hominem tone".
  • A fairly strict and clearly stated set of site norms, with regular updates, and a process for proposing changes
  • Site erring on the side of being over-opinionated. It doesn't necessarily need to be the community hub
  • Votes from highly-voted users count for more (one possible weighting is sketched just after this list).
  • Integration with predictionbook or something similar, to show a user's track record in addition to upvotes/downvotes. Emphasis on getting many people to vote on the same set of standardized predictions
  • A very strong bent on applications of rationality/clear thought, as opposed to a focus on rationality itself. I would love to see more posts on "here is how I solved a problem I or other people were struggling with"
  • No main/discussion split. There are probably other divisions that make sense (e.g. by topic), but this mostly causes a lot of confusion
  • Better notifications around new posts, or new comments in a thread. Eg I usually want to see all replies to a comment I've made, not just the top level
  • Built-in argument mapping tools for comments
  • Shadowbanning, a la Hacker News
  • Initially restricted growth, e.g. by invitation only
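
One possible shape for the weighted-voting idea above -- a minimal sketch of my own, not part of SatvikBeri's proposal -- is to weight each vote by the logarithm of the voter's own karma, so that established users count for more without dominating outright:

    import math

    def vote_weight(voter_karma: int) -> float:
        # Hypothetical weighting: 1.0 for a brand-new user, growing
        # sublinearly with the voter's own karma.
        return 1.0 + math.log1p(max(voter_karma, 0))

    def post_score(votes):
        # votes: iterable of (direction, voter_karma) pairs, direction is +1 or -1.
        return sum(d * vote_weight(k) for d, k in votes)

    # A new user, a high-karma user, and a mid-karma downvoter:
    print(post_score([(+1, 0), (+1, 1000), (-1, 10)]))  # ~5.5
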
Comment author: John_Maxwell_IV 27 November 2016 01:02:16PM *  25 points [-]

I used to do this a lot on Less Wrong; then I started thinking I should do work that was somehow "more important". In hindsight, I think I undervalued the importance of pointing out minor reasoning/content errors on Less Wrong. "Someone is wrong on Less Wrong" seems to me to be a problem actually worth fixing; it seems like that's how we make a community that is capable of vetting arguments.

Participating in online discussions tends to reduce one's attention span. There's the variable reinforcement factor. There's also the fact that a person who comes to a discussion earlier gets more visibility. This incentivizes checking for new discussions frequently. (These two factors exacerbate one another.)

These effects are so strong that if I stay away from the internet for a few days ("internet fast"), my attention span increases dramatically. And if I've posted comments online yesterday, it's hard for me to focus today--there's always something in the back of my mind that wants to check & see if anyone's responded. I need to refrain from making new comments for several days before I can really focus.

Lots of people have noticed that online discussions sap their productivity this way. And due to the affect heuristic, they downgrade the importance & usefulness of online discussions in general. I think this inspired Patri's Self-Improvement or Shiny Distraction post. Like video games, Less Wrong can be distracting... so if video games are a distracting waste of time, Less Wrong must also be, right?

Except that doesn't follow. Online content can be really valuable to read. Bloggers don't have an incentive to pad their ideas the way book authors do. And they write simply instead of unnecessarily obfuscating like academics. (Some related discussion.)

Participating in discussions online is often high leverage. The ratio of readers to participants in online discussions can be quite high. Some numbers from the LW-sphere that back this up:

  • In 2010, Kevin created a thread where he asked lurkers to say hi. The thread generated 617 comments.

  • 77% of respondents to the Less Wrong survey have never posted a comment. (And this is a population of readers who were sufficiently engaged to take the survey!)

  • Here's a relatively obscure comment of mine that was voted to +2. But it was read by at least 135 logged-in users. Since 54+% of the LW readership has never registered an account, this obscure comment was likely read by 270+ people. A similar case study--deeply threaded comment posted 4 days after a top-level post, read by at least 22 logged-in users.
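
A quick sanity check of that extrapolation (my arithmetic, not from the original comment):

    logged_in_readers = 135
    logged_out_share = 0.54           # "54+% of the LW readership has never registered"
    total_readers = logged_in_readers / (1 - logged_out_share)
    print(round(total_readers))       # ~293, consistent with the "270+" claim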

Based on this line of reasoning, I'm currently working on the problem of preserving focus while participating in online discussions. I've got some ideas, but I'd love to hear thoughts from anyone who wants to spend a minute brainstorming.

Comment author: sarahconstantin 27 November 2016 10:52:41AM 25 points [-]

Specifically, I think that LW declined from its peak by losing its top bloggers to new projects. Eliezer went to do AI research full-time at MIRI, Anna started running CFAR, various others started to work on those two organizations or others (I went to work at MetaMed). There was a sudden exodus of talent, which reduced posting frequency and took the wind out of the community's sails.

One trend I dislike is that highly competent people invariably stop hanging out with the less-high-status, less-accomplished, often younger members of their group. VIPs have a strong temptation to retreat to a "VIP island" -- which leaves everyone else short of role models and stars, and ultimately kills communities. (I'm genuinely not accusing anybody of nefarious behavior, I'm just noting a normal human pattern.) Like -- obviously it's not fair to reward competence with extra burdens, I'm not that much of a collectivist. But I suspect human group dynamics won't work without something like "community-spiritedness" -- there are benefits to having a community of hundreds or thousands, for instance, that you cannot accrue if you only give your time and attention to your ten best friends.

Comment author: Viliam 03 April 2017 09:45:04AM *  24 points [-]

Everyone, could we please stop using the word "sociopath" to mean things other than... you know... sociopathy?

I also like the linked article, and I believe it does a great job of describing the social dynamics of subcultures. I have shared that article many times. But while it is funny to use exaggeration for shock value, making the exaggerated word the new normal is... I guess in obvious conflict with the goal of rationality and clear communication. Sometimes I don't even know how many people are actually aware that "trying to make a profit from things you don't deeply care about" and "being diagnosed as a sociopath" are two different things.

To explain why I care about this, imagine a group that decides that it is cool to refer to "kissing someone for social reasons, not because you actually desire to" as "rape". Because, you know, there are some similarities; both are a kind of intimate contact, etc. Okay, if you write an article describing the analogies, that's great, and you have a good point. It just becomes idiotic when the whole community decides to use "rape" in this sense, and then they keep talking like this: "Yesterday we visited Grandma. When we entered the house, she raped us, and then we raped her back. I really don't like it when old people keep raping me like this, but I don't want to create conflicts in the family. But maybe I am just making a mountain out of a molehill, and being raped is actually not a big deal." Followed by a dozen replies using the same vocabulary.

First, this is completely unnecessarily burning your weirdness points. Weird jargon makes communication with outsiders more difficult, and makes it more difficult for outsiders to join the group, even if they would otherwise agree with the group's values. After this point, the absurdity heuristic works against anything you say. Sometimes there is a good reason for using jargon (it can compress difficult concepts), but I believe in this case the benefits are not proportional to the costs.

More importantly, if talking like this became the group norm, imagine how difficult it would be to have a serious discussion about actual rape. Like, anytime someone mentioned being actually raped by a grandparent as a child, there would be a guaranteed reaction from someone: "yeah, yeah, happens to me when we visit Grandma every weekend, not a big deal". Or someone would express concern about possible rape at a community weekend, and people would respond by making stickers "kisses okay" and "don't like kissing", believing they are addressing the issue properly.

I believe it would be really bad if the rationalist community lost the ability to talk about actual sociopathy rationally. Because one day this topic may become an important one, and we may be too busy calling everyone who sells Bayes T-shirts without having read the Sequences a "sociopath". But even if you disagree with me on the importance of this, I hope you can agree that using words like this is stupid. How about just calling it "exploiting"? As in: "some people are only exploiting the rationalist community to get money for their causes, or to get free work from us, without providing anything to our causes in return -- we seriously need to put a stop to this". Could words like this get the message across, too?

Also, if you want to publicly address these people -- "hey guys, we suspect you are just using us for free resources; how about demonstrating some commitment to our causes first?" -- it will probably help keep the discussion friendly if you don't call them "sociopaths". Similarly, imagine LessWrong having an article saying (a) "vegans as a group benefit from the rationalist community, but don't contribute anything to the art of Bayes in return", or (b) "vegans are sociopaths". Regardless of whether you personally happen to be a vegan or not, the second is obviously harmful.

tl;dr -- we are in the rationality business here, not in the clickbait business; talk accordingly

(EDIT: Just to be explicit about this, ignoring the terminology issue, I completely agree with the parent comment.)

Comment author: Viliam 27 November 2016 09:50:29PM *  23 points [-]

I think you are underestimating this, and a better estimate is "$100k or more". With an emphasis on the "or more" part.

Historically, the trouble has been finding people willing to do the work, not the money to fund people willing to do the work.

Having "trouble to find people willing to do the work" usually means you are not paying enough to solve the problem. Market price, by definition, is a price at which you can actually buy a product or service, not a price that seems like it should be enough but you just can't find anyone able and/or willing to accept the deal.

The problem with volunteers is that the LW codebase needs too much highly specialized knowledge: Python and Ruby just to get a chance, and then studying code that was optimized for performance and backwards compatibility at the expense of legibility and extensibility. (Database-in-the-database antipattern; values precomputed and cached everywhere.) Most professional programmers are simply unable to contribute without spending a lot of time studying something they will never use again. For a person who has the necessary skills, $10k is about their monthly salary (if you include taxes), and one month feels like too short a time to understand the mess of the Reddit code and implement everything that needs to be done. And the next time you need another upgrade, if the same person isn't available, you need yet another person to spend the same time understanding the Reddit code.

I believe that in the long term it would be better to rewrite the code from scratch, but that's definitely going to take more than one month.

Comment author: JonahSinick 03 December 2016 03:46:00AM *  22 points [-]

A few nitpicks on the choice of "Brier-boosting" as a description of CFAR's approach:

Predictive power is maximized when Brier score is minimized

Brier score is the sum of the squared differences between the probabilities assigned to events and indicator variables that are 1 or 0 according to whether the event did or did not occur. Good calibration therefore corresponds to minimizing Brier score rather than maximizing it, and "Brier-boosting" suggests maximization.

What's referred to as "quadratic score" is essentially the same as the negative of Brier score, and so maximizing quadratic score corresponds to maximizing predictive power.

Brier score fails to capture our intuitions about assignment of small probabilities

A more substantive point is that even though the Brier score is minimized by being well-calibrated, the way in which it varies with the probability assigned to an event does not correspond to our intuitions about how good a probabilistic prediction is. For example, suppose four observers A, B, C and D assign probabilities 0.5, 0.4, 0.01 and 0.000001 (respectively) to an event E, and the event turns out to occur. Intuitively, B's prediction is only slightly worse than A's prediction, whereas D's prediction is much worse than C's prediction. But the difference between the increase in B's Brier score and the increase in A's is 0.36 - 0.25 = 0.11, which is much larger than the corresponding difference for D and C, which is approximately 0.02.
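
A quick numerical check of those increments (my own sketch, not part of the original comment):

    # Brier-score increments for observers A-D when the event occurs
    # (outcome indicator = 1), reproducing the figures above.
    probs = {"A": 0.5, "B": 0.4, "C": 0.01, "D": 0.000001}
    inc = {name: (1 - p) ** 2 for name, p in probs.items()}
    print(inc["B"] - inc["A"])   # 0.11
    print(inc["D"] - inc["C"])   # ~0.0199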

Brier score is not constant across mathematically equivalent formulations of the same prediction

Suppose that a basketball player is to attempt three free throws. Observer A predicts that the player makes each one with probability p. Observer B accepts observer A's estimate, notes that it implies the player makes all three free throws with probability p^3, and makes that prediction instead.

Then if the player makes all three free throws, observer A's Brier score increases by

3*(1 - p)^2

while observer B's Brier score increases by

(1 - p^3)^2

But these two expressions are not equal in general, e.g. for p = 0.9 the first is 0.03 and the second is 0.073441. So changes to Brier score depend on the formulation of a prediction as opposed to the prediction itself.

======

The logarithmic scoring rule handles small probabilities well, and is invariant under changing the representation of a prediction, and so is preferred. I first learned of this from Eliezer's essay A Technical Explanation of a Technical Explanation.
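
A small sketch (my own, reusing the free-throw setup above) verifying both the non-invariance of the Brier score and the invariance of the logarithmic score:

    import math

    p = 0.9  # per-throw probability; suppose all three throws are made

    # Brier increments depend on how the prediction is formulated:
    brier_A = 3 * (1 - p) ** 2     # three separate predictions: 0.03
    brier_B = (1 - p ** 3) ** 2    # one prediction on the conjunction: 0.073441

    # Logarithmic increments do not, since log(p^3) = 3*log(p):
    log_A = -3 * math.log(p)       # ~0.3157
    log_B = -math.log(p ** 3)      # ~0.3157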

Minimizing the logarithmic score is equivalent to maximizing the likelihood function for logistic regression / binary classification. Unfortunately, the phrase "likelihood boosting" has one more syllable than "Brier boosting" and doesn't have the same alliterative ring to it, so I don't have an actionable alternative suggestion :P.

Comment author: RobinHanson 27 November 2016 06:31:46PM 20 points [-]

I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish in-group favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.

Comment author: helldalgo 01 December 2016 05:12:48PM 21 points [-]

I have about six of these floating around in my drafts. This makes me think that maybe I should post them; I didn't think they were that interesting to anyone but me.

Recently, I spent about ten hours reading into a somewhat complicated question. It was nice to get a feel for the topic, first, before I started badgering the experts and near-experts I knew for their opinions. I was surprised at how close I got to their answers.

Comment author: Alexei 27 November 2016 06:42:19AM 21 points [-]

I strongly agree with this sentiment, and Arbital's current course is to address this problem. I realize there have been several discussions on LW about bringing LW back / doing LW 2.0, and Arbital has often come up. Up until two weeks ago we were focusing on "Arbital as the platform for intuitive math explanations", but that proved to be harder to scale than we thought. We have now pivoted to a more discussion-oriented, truth-seeking north star, which was our long-term goal all along. We are going to need innovation and experimentation on both the software and the community levels, but I'm looking forward to the challenge. :)

Comment author: Vladimir_Nesov 27 November 2016 05:37:31PM *  20 points [-]

Successful conversations usually happen as a result of selection circumstances that make it more likely that interesting people participate. Early LessWrong was interesting because of the posts, then there was a phase when many were still learning, and so were motivated to participate, to tutor one another, and to post more. But most don't want to stay in school forever, so activity faded, and the steady stream of new readers has different characteristics.

It's possible to maintain a high-quality blog roll, or an edited stream of posts. But with comments, the problem is that there are too many of them, and bad comments start bad conversations that should be prevented rather than stopped; hence pre-moderation, which slows things down. Controlling their quality individually would require a lot of moderators, who must themselves be assessed for the quality of their moderation decisions, which is not always revealed by the moderators' own posts. It would also require the absence of drama around moderation decisions, which might be even harder. Unfortunately, many of these natural steps have bad side effects or are hard to manage, and so should be avoided when possible. I expect the problem can be solved either by clever algorithms that predict the quality of votes, or by focusing more on moderating people (both as voters and as commenters) instead of moderating comments.

On Stack Exchange, there is a threshold for commenting (not just asking or answering), a threshold for voting, and a separate place ("meta" forum) for discussing moderation decisions. Here's my guess at a feature set sufficient for maintaining good conversations when the participants didn't happen to be selected for generating good content by other circumstances:

  • All votes are tagged by the voters, it's possible to roll back the effect of all votes by any user.
  • There are three tiers of users: moderators, full members, and regular users. The number of moderators is a significant fraction of the number of full members, so there probably should be a few admins who are outside this system.
  • Full members can reply to comments without pre-moderation, while regular users can only leave top-level comments and require pre-moderation. There must be a norm against regular users posting top-level comments to reply to another comment. This is the goal of the whole system, to enable good conversations between full members, while allowing new users to signal quality of their contributions without interfering with the ongoing conversations.
  • Full members and moderators are selected and demoted based on voting by moderators (both upvoting and downvoting, kept separate). The voting is an ongoing process (like for comments, posts) and weighs recent votes more (so that changes in behavior can be addressed). The moderators vote on users, not just on their comments or posts. Each user has two separate ratings, one that can make them a full member, and the other that can make them a moderator, provided they are a full member.
  • Moderators see who votes how, both on users and comments, and can use these observations to decide who to vote for/against being a moderator. By default, when a user becomes a full member, they also become a moderator, but can then be demoted to just a full member if other moderators don't like how they vote. All votes by demoted moderators and the effects of those votes, including on membership status of other users, are automatically retracted.
  • A separate meta forum for moderators, and a norm against discussing changes in membership status etc. on the main site.

This seems hopelessly overcomplicated, but the existence of Stack Exchange is encouraging.
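
To make the tier-and-retraction machinery above concrete, here is a rough sketch (my own reading of the list, with hypothetical names) of the core data model:

    from dataclasses import dataclass, field
    from enum import Enum

    class Tier(Enum):
        REGULAR = 0       # pre-moderated, top-level comments only
        FULL_MEMBER = 1   # may reply without pre-moderation
        MODERATOR = 2     # also votes on users, not just content

    @dataclass
    class Vote:
        voter: str
        target: str            # a user id or a comment id
        direction: int         # +1 or -1
        retracted: bool = False

    @dataclass
    class User:
        name: str
        tier: Tier = Tier.REGULAR
        votes_cast: list = field(default_factory=list)

    def demote_moderator(user: User) -> None:
        # Demotion automatically retracts all of the ex-moderator's votes,
        # including votes on other users' membership status, per the list above.
        user.tier = Tier.FULL_MEMBER
        for vote in user.votes_cast:
            vote.retracted = True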

Comment author: Qiaochu_Yuan 20 December 2016 07:42:01AM 19 points [-]

The bucket diagrams don't feel to me like the right diagrams to draw. I would be drawing causal diagrams (of aliefs); in the first example, something like "spelled oshun wrong -> I can't write -> I can't be a writer". Once I notice that I feel like these arrows are there, I can then ask myself whether they're really there, how I could falsify that hypothesis, etc.

Comment author: AlexMennen 03 December 2016 05:36:13AM 19 points [-]

I disagree. The LW community already has capable high-status people who many others in the community look up to and listen to suggestions from. It's not clear to me what the benefit is from picking a single leader. I'm not sure what kinds of coordination problems you had in mind, but I'd expect that most such problems that could be solved by a leader issuing a decree could also be solved by high-status figures coordinating with each other on how to encourage others to coordinate. High-status people and organizations in the LW community communicate with each other a fair amount, so they should be able to do that.

And there are significant costs to picking a leader. It creates a single point of failure, making the leader's mistakes more costly, and inhibiting innovation in leadership style. It also creates PR problems; in fact, LW has already faced PR problems regarding being an Eliezer Yudkowsky personality cult.

Also, if we were to pick a leader, Peter Thiel strikes me as an exceptionally terrible choice.

Comment author: Elo 27 November 2016 10:19:37PM 2 points [-]

"It is dangerous to be half a rationalist."

It is dangerous to half-arse this and every other attempt at recovering lesswrong (again).

I take into account the comments before mine, which accurately mention several reasons for the problems on LW.

The codebase is not that bad. I know how many people have looked at it, and it's reasonably easy to fix. I even know how to fix it, but I personally lack the coding skill to implement the specific changes. We have no volunteers willing to make changes, and no funds to pay someone to do them. Trust me: I collated all the comments from the several times we have tried to gather ideas. We are unfortunately busy people, working on other goals and other projects.

I think you are wrong about the need for a single Schelling point, and I submit as evidence: Crony Beliefs. We have a mesh network where valuable articles do get around. LessWrong is very much visited by many (as evidenced by the comments on this post). When individuals judge information worthy, it makes its way around the network and is added to our history.

A year from now, Crony Beliefs may not be easy to find on LessWrong, because it was never explicitly posted here in text form, but it will still be in the minds of anyone active in the diaspora.

Having said all that, I am more than willing to talk to anyone who wants to work on changes or progress via Skype. PM me to make a time. @Anna, that includes you.

Comment author: AnnaSalamon 27 November 2016 07:01:11AM *  17 points [-]

I am extremely excited about this. I suspect we should proceed with trying to reboot Less Wrong without waiting, while also attempting to aid Arbital in any way that can help (test users, etc.).
