
LW 2.0 Strategic Overview

Post author: Habryka, 15 September 2017 03:00AM

Update: We're in open beta! You can now sign up for a new account or log in with your LW 1.0 account (if logging in, note that we did not copy over your passwords, so hit "forgot password" to receive a password-reset email).

Hey Everyone! 

This is the post for discussing the vision that I and the rest of the LessWrong 2.0 team have for the new version of LessWrong, and for generally bringing all of you up to speed with the plans for the site. This post has been overdue for a while, but I was busy coding on LessWrong 2.0, and I am not that great of a writer, which means writing things like this takes me quite a long time, and so this ended up being delayed a few times. I apologize for that.

With Vaniver’s support, I’ve been the primary person working on LessWrong 2.0 for the last 4 months, spending most of my time coding while also talking to various authors in the community, doing dozens of user-interviews and generally trying to figure out how to make LessWrong 2.0 a success. Along the way I’ve had support from many people, including Vaniver himself who is providing part-time support from MIRI, Eric Rogstad who helped me get off the ground with the architecture and infrastructure for the website, Harmanas Chopra who helped build our Karma system and did a lot of user-interviews with me, Raemon who is doing part-time web-development work for the project, and Ben Pace who helped me write this post and is basically co-running the project with me (and will continue to do so for the foreseeable future).

We are running on charitable donations, with $80k in funding from CEA in the form of an EA grant and $10k in donations from Eric Rogstad, which will go to salaries and various maintenance costs. We are planning to continue running this whole project on donations for the foreseeable future, and legally this is a project of CFAR, which helps us a bunch with accounting and allows people to get tax benefits from giving us money. 

Now that the logistics are out of the way, let's get to the meat of this post. What is our plan for LessWrong 2.0? What were our key assumptions in designing the site? What does this mean for the current LessWrong site, and what should we as a community discuss more to make sure the new site is a success?

Here’s the rough structure of this post: 

  • My perspective on why LessWrong 2.0 is a project worth pursuing
  • A summary of the existing discussion around LessWrong 2.0 
  • The models that I’ve been using to make decisions for the design of the new site, and some of the resulting design decisions
  • A set of open questions to discuss in the comments where I expect community input/discussion to be particularly fruitful 

Why bother with LessWrong 2.0?  

I feel that, independently of how many things were and are wrong with the site and its culture, over the course of its history it has been one of the few places in the world that I know of where a spark of real discussion has happened, and where some real intellectual progress was made on actually important problems. So let me begin with a summary of the things I think the old LessWrong got right, which are essential to preserve in any new version of the site:

On LessWrong…

 

  • I can contribute to intellectual progress, even without formal credentials 
  • I can sometimes have discussions in which the participants focus on trying to convey their true reasons for believing something, as opposed to rhetorically using all the arguments that support their position independent of whether those have any bearing on their belief
  • I can talk about my mental experiences in a broad way, such that my personal observations, scientific evidence and reproducible experiments are all taken into account and given proper weighting. There is no narrow methodology I need to conform to in order to have my claims taken seriously.
  • I can have conversations about almost all aspects of reality, independently of what literary genre they are associated with or scientific discipline they fall into, as long as they seem relevant to the larger problems the community cares about
  • I am surrounded by people who are knowledgeable in a wide range of fields and disciplines, who take the virtue of scholarship seriously, and who are interested and curious about learning things that are outside of their current area of expertise
  • We have a set of non-political shared goals for which many of us are willing to make significant personal sacrifices
  • I can post long-form content that takes up as much space as it needs to, and can expect a reasonably high level of patience from my readers in trying to understand my beliefs and arguments
  • Content that I am posting on the site gets archived, is searchable and often gets referenced in other people's writing, and if my content is good enough, can even become common knowledge in the community at large
  • The average competence and intelligence on the site is high, which allows discussion to generally happen on a high level and allows people to make complicated arguments and get taken seriously
  • There is a body of writing that is generally assumed to have been read by most people participating in discussions, which establishes philosophical, social and epistemic principles that serve as a foundation for future progress (currently that body of writing largely consists of the Sequences, but also includes some of Scott’s writing, some of Luke’s writing, and some individual posts by other authors) 

 

When making changes to LessWrong, I think it is very important to preserve all of the above features. I don’t think all of them are universally present on LessWrong, but all of them are there at least some of the time, and no other place that I know of comes even remotely close to having all of them as often as LessWrong has. Those features are what motivated me to make LessWrong 2.0 happen, and set the frame for thinking about the models and perspectives I will outline in the rest of the post. 

I also think Anna, in her post about the importance of a single conversational locus, says another, somewhat broader thing, that is very important to me, so I’ve copied it in here: 

1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.

3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.

4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to).

5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

6. We have lately ceased to have a "single conversation" in this way.  Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such.  There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence.  Without such a locus, it is hard for conversation to build in the correct way.  (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

The Existing Discussion Around LessWrong 2.0

Now that I’ve given a bit of context on why I think LessWrong 2.0 is an important project, it seems sensible to look at what has been said so far, so we don’t have to repeat the same discussions over and over again. There has already been a lot of discussion about the decline of LessWrong, the need for a new platform, and the design of LessWrong 2.0. I won’t be able to summarize it all here, but I will try my best to cover the most important points, and give a bit of my own perspective on them.

Here is a comment by Alexandros, on Anna’s post I quoted above:

Please consider a few gremlins that are weighing down LW currently:

1. Eliezer's ghost -- He set the culture of the place, his posts are central material, has punctuated its existence with his explosions (and refusal to apologise), and then, upped and left the community, without actually acknowledging that his experiment (well kept gardens etc) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no-team, and at least no-team is an invitation for a team to step up.

[...]

...I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future... is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining ownership and plan clearly. A post by EY himself recognising that his vision for lw 1.0 failed and passing the baton to a generally-accepted BDFL would be nice, but i'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first (with perhaps ability to host content if people wish to, like hn) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

I think Alexandros hits a lot of good points here, and luckily these are actually some of the problems I am most confident we have solved. The biggest bottleneck – the thing that I think caused most other problems with LessWrong – is simply that there was nobody with the motivation, the mandate and the resources to fight against the inevitable decline into entropy. I feel that the correct response to the question of “why did LessWrong decline?” is to ask “why should it have succeeded?”. 

In the absence of anyone with the mandate trying to fix all the problems that naturally arise, we should expect any online platform to decline. Most of the problems that will be covered in the rest of this post are things that could have been fixed many years ago, but simply weren’t, because nobody with the mandate put many resources into fixing them. I think the cause for this was a diffusion of responsibility, and a lot of vague promises of problems getting solved by vague projects in the future. I myself put off working on LessWrong for a few months because I had some vague sense that Arbital would solve the problems that I was hoping to solve, even though Arbital never really promised to solve them. Then Arbital’s plan ended up not working out, and I had wasted months of precious time. 

Since this comment was written, Vaniver has been somewhat unanimously declared benevolent dictator for life of LessWrong. He and I have gotten various stakeholders on board, received funding, have a vision, and have free time – and so we have the mandate, the resources and the motivation to not make the same mistakes. With our new codebase, link posts are now something I can build in an afternoon, rather than something that requires three weeks of getting permissions from various stakeholders, performing complicated open-source and confidentiality rituals, and hiring a new contractor who has to first understand the mysterious Reddit fork from 2008 that LessWrong is based on. This means at least the problem of diffusion of responsibility is solved. 


Scott Alexander also made a recent comment on Reddit on why he thinks LessWrong declined, and why he is somewhat skeptical of attempts to revive the website: 

1. Eliezer had a lot of weird and varying interests, but one of his talents was making them all come together so you felt like at the root they were all part of this same deep philosophy. This didn't work for other people, and so we ended up with some people being amateur decision theory mathematicians, and other people being wannabe self-help gurus, and still other people coming up with their own theories of ethics or metaphysics or something. And when Eliezer did any of those things, somehow it would be interesting to everyone and we would realize the deep connections between decision theory and metaphysics and self-help. And when other people did it, it was just "why am I reading this random bulletin board full of stuff I'm not interested in?"

2. Another of Eliezer's talents was carefully skirting the line between "so mainstream as to be boring" and "so wacky as to be an obvious crackpot". Most people couldn't skirt that line, and so ended up either boring, or obvious crackpots. This produced a lot of backlash, like "we need to be less boring!" or "we need fewer crackpots!", and even though both of these were true, it pretty much meant that whatever you posted, someone would be complaining that you were bad.

3. All the fields Eliezer wrote in are crackpot-bait and do ring a bunch of crackpot alarms. I'm not just talking about AI - I'm talking about self-help, about the problems with the academic establishment, et cetera. I think Eliezer really did have interesting things to say about them - but 90% of people who try to wade into those fields will just end up being actual crackpots, in the boring sense. And 90% of the people who aren't will be really bad at not seeming like crackpots. So there was enough kind of woo type stuff that it became sort of embarrassing to be seen there, especially given the thing where half or a quarter of the people there or whatever just want to discuss weird branches of math or whatever.

4. Communities have an unfortunate tendency to become parodies of themselves, and LW ended up with a lot of people (realistically, probably 14 years old) who tended to post things like "Let's use Bayes to hack our utility functions to get superfuzzies in a group house!". Sometimes the stuff they were posting about made sense on its own, but it was still kind of awkward and the sort of stuff people felt embarrassed being seen next to.

5. All of these problems were exacerbated by the community being an awkward combination of Google engineers with physics PhDs and three startups on one hand, and confused 140 IQ autistic 14 year olds who didn't fit in at school and decided that this was Their Tribe Now on the other. The lowest common denominator that appeals to both those groups is pretty low.

6. There was a norm against politics, but it wasn't a very well-spelled-out norm, and nobody enforced it very well. So we would get the occasional leftist who had just discovered social justice and wanted to explain to us how patriarchy was the real unfriendly AI, the occasional rightist who had just discovered HBD and wanted to go on a Galileo-style crusade against the deceptive establishment, and everyone else just wanting to discuss self-help or decision-theory or whatever without the entire community becoming a toxic outcast pariah hellhole. Also, this one proto-alt-right guy named Eugene Nier found ways to exploit the karma system to mess with anyone who didn't like the alt-right (ie 98% of the community) and the moderation system wasn't good enough to let anyone do anything about it.

7. There was an ill-defined difference between Discussion (low-effort random posts) and Main (high-effort important posts you wanted to show off). But because all these other problems made it confusing and controversial to post anything at all, nobody was confident enough to post in Main, and so everything ended up in a low-effort-random-post bin that wasn't really designed to matter. And sometimes the only people who did post in Main were people who were too clueless about community norms to care, and then their posts became the ones that got highlighted to the entire community.

8. Because of all of these things, Less Wrong got a reputation within the rationalist community as a bad place to post, and all of the cool people got their own blogs, or went to Tumblr, or went to Facebook, or did a whole bunch of things that relied on illegible local knowledge. Meanwhile, LW itself was still a big glowing beacon for clueless newbies. So we ended up with an accidental norm that only clueless newbies posted on LW, which just reinforced the "stay off LW" vibe.

I worry that all the existing "resurrect LW" projects, including some really high-effort ones, have been attempts to break coincidental vicious cycles - ie deal with 8 and the second half of 7. I think they're ignoring points 1 through 6, which is going to doom them.

At least judging from where my efforts went, I would agree that I have spent a pretty significant amount of resources on fixing the problems that Scott described in points 6 and 7, but I also spent about equal time thinking about how to fix 1-5. The broader perspective that I have on those latter points is, I think, best illustrated in an analogy: 

When I read Scott’s comments about how there was just a lot of embarrassing and weird writing on LessWrong, I remember my experiences as a Computer Science undergraduate. When the median undergrad makes claims about the direction of research in their field, or some other big claim about their field that isn't explicitly taught in class (or if you ask an undergraduate physics student what they think about how to do physics research, or what ideas they have for improving society), you will often get quite naive-sounding answers (I have heard everything from “I am going to build a webapp to permanently solve political corruption” to “here’s my idea of how we can transmit large amounts of energy wirelessly by using low-frequency tesla-coils”). I don’t think we should expect anything different on LessWrong. I actually think we should expect it to be worse here, since we are actively encouraging people to have opinions, as opposed to the more standard practice of academia, which seems to consist of treating undergraduates as slightly more intelligent dogs that need to be conditioned with the right mixture of calculus homework problems and mandatory class attendance, so that they might be given the right to have any opinion at all if they spend 6 more years getting their PhD. 

So while I do think that Eliezer’s writing encouraged topics that were slightly more likely to attract crackpots, I think a large chunk of the weird writing is just a natural consequence of being an intellectual community that has a somewhat constant influx of new members. 

And having undergraduates go through the phase where they have bad ideas, and then have it explained to them why their ideas are bad, is important. I actually think it’s key to learning any topic more complicated than high-school mathematics. It takes a long time until someone can productively contribute to the intellectual progress of an intellectual community (in academia it’s at least 4 years, though usually more like 8), and during all that period they will say very naive and silly-sounding things (though less and less so as time progresses). I think LessWrong can do significantly better than 4 years, but we should still expect that it will take new members time to acclimate and get used to how things work. (Based on user interviews with a lot of our top commenters, it usually took something like 3-6 months until someone felt comfortable commenting frequently, and about 6-8 months until someone felt comfortable posting frequently. This strikes me as a fairly reasonable expectation for the future.) 

And I do think that we have many graduate students and tenured professors of the rationality community who are not Eliezer, and who do not sound like crackpots, that can speak reasonably about the same topics Eliezer talked about, and who I feel are acting with a very similar focus to what Eliezer tried to achieve. Luke Muehlhauser, Carl Shulman, Anna Salamon, Sarah Constantin, Ben Hoffman, Scott himself and many more, most of whose writing would fit very well on LessWrong (and often still ends up there). 

But all of this doesn’t mean what Scott describes isn’t a problem. It’s still a bad experience for everyone to constantly have to read through bad first-year undergrad essays, but I think the solution can’t involve those essays not getting written at all. Instead it has to involve some way of not forcing everyone to see those essays, while still allowing them to get promoted if someone shows up who does write something insightful from day one. I am currently planning to tackle this mostly with improvements to the karma system, as well as changes to the layout of the site, where users primarily post to their own profiles and can get content promoted to the frontpage by moderators and high-karma members. A feed consisting solely of content of the quality of the average Scott, Anna, Ben or Luke post would be an amazing read, and is exactly the kind of feed I am hoping to create with LessWrong, while still allowing users to engage with the rest of the content on the site (more on that later).

I would very roughly summarize what Scott says in the first 5 points as two major failures: first, a failure to separate the signal from the noise; and second, a failure to enforce moderation norms when people did turn out to be crackpots, or were simply unable to productively engage with the material on the site. Both are natural consequences of the abandonment of promoting things to Main, the fact that discussion is by default ordered by recency rather than by some kind of scoring system, and the fact that the moderation tools were completely insufficient (more on the details of that in the next section).


My models of LessWrong 2.0

I think there are three major bottlenecks that LessWrong is facing (after the zeroth bottleneck, which is just that no single group had the mandate, resources and motivation to fix any of the problems): 

  1. We need to be able to build on each other’s intellectual contributions, archive important content and avoid primarily being news-driven
  2. We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing
  3. We need to actively moderate in a way that is both fun for the moderators, and helps people avoid future moderation policy violations

I. 

The first bottleneck for our community, and I think the biggest, is the ability to build common knowledge. On Facebook, I can read an excellent and insightful discussion, yet one week later I will have forgotten it. Even if I remember it, I don’t link to the Facebook post (because linking to Facebook posts/comments is hard), and it doesn’t have a title, so I don’t casually refer to it in discussion with friends. On Facebook, ideas don’t get archived and built upon; they get discussed and forgotten. To put this another way: the reason we cannot build on the best ideas this community has had over the last five years is that we don’t know what they are. There are only fragments of memories of Facebook discussions, which maybe some other people remember. We have the Sequences, but there’s no way to build on them together as a community, and thus there is stagnation.

Contrast this with science. Modern science is plagued by many severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. The physics community has this system where the new ideas get put into journals, and then eventually if they’re new, important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them. I think the rationality community has some textbooks, written by Eliezer (and we also compiled a collection of Scott’s best posts that I hope will become another textbook of the community), but there is no expectation that if you write a good enough post/paper that your content will be included in the next generation of those textbooks, and the existing books we have rarely get updated. This makes the current state of the rationality community analogous to a hypothetical state of physics, had physics no journals, no textbook publishers, and only one textbook that is about a decade old. 

This seems to me what Anna is talking about - the purpose of the single locus of conversation is the ability to have common knowledge and build on it. The goal is to have every interaction with the new LessWrong feel like it is either helping you grow as a rationalist or has you contribute to lasting intellectual progress of the community. If you write something good enough, it should enter the canon of the community. If you make a strong enough case against some existing piece of canon, you should be able to replace or alter that canon. I want writing to the new LessWrong to feel timeless. 

To achieve this, we’ve built the following things: 

  • We created a section for core canon on the site that is prominently featured on the frontpage and right now includes Rationality: A-Z, The Codex (a collection of Scott’s best writing, compiled by Scott and us), and HPMOR. Over time I expect these to change, and there is a good chance HPMOR will move to a different section of the site (I am considering adding an “art and fiction” section) and will be replaced by a new collection representing new core ideas in the community.
  • Sequences are now a core feature of the website. Any user can create sequences out of their own and other users’ posts, and those sequences can themselves be voted and commented on. The goal is to help users compile the best writing on the site, and make it so that good timeless writing gets read by users for a long time, as opposed to disappearing into the void. Separating creative and curatorial effort allows the sort of professional specialization that you see in serious scientific fields.
  • Of those sequences, the most upvoted and most important ones will be chosen to be prominently featured on other sections of the site, allowing users easy access to read the best content on the site and get up to speed with the current state of knowledge of the community.
  • For all posts and sequences the site keeps track of how much of them you’ve read (including importing view-tracking from old LessWrong, so you will get to see how much of the original sequences you’ve actually read). And if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people navigate how much of the content of the site you are familiar with.
  • The design of the core content of the site (e.g. the Sequences, the Codex, etc.) tries to communicate a certain permanence of contributions. The aesthetic feels intentionally book-like, which I hope gives people a sense that their contributions will be archived, accessible and built-upon.
    One important issue with this is that there also needs to be a space for sketches on LessWrong. To quote Paul Graham: “What made oil paint so exciting, when it first became popular in the fifteenth century, was that you could actually make the finished work from the prototype. You could make a preliminary drawing if you wanted to, but you weren't held to it; you could work out all the details, and even make major changes, as you finished the painting.”
  • We do not want to discourage sketch-like contributions, and want to build functionality that helps people build a finished work from a prototype (this is one of the core competencies of Google Docs, for example).

And there are some more features the team is hoping to build in this direction, such as: 

  • Easier archiving of discussions by allowing discussions to be turned into top-level posts (similar to what Ben Pace did with a recent Facebook discussion between Eliezer, Wei Dai, Stuart Armstrong, and some others, which he turned into a post on LessWrong 2.0)
  • The ability to continue reading the content you’ve started reading with a single click from the frontpage. Here's an example logged-in frontpage:

[Screenshot: example logged-in frontpage]

II.

The second bottleneck is improving the signal-to-noise ratio. It needs to be possible for someone to subscribe to only the best posts on LessWrong, and only the most important content needs to be turned into common knowledge. 

I think this is a lot of what Scott was pointing at in his summary about the decline of LessWrong. We need a way for people to learn from their mistakes, while also not flooding the inboxes of everyone else, and while giving people active feedback on how to improve in their writing. 

The site structure: 

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong: 

The writing experience: 

If you write a post, it first shows up nowhere else but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages (or will only show up after it hits a certain karma threshold, if the users who subscribed to you set a minimum karma threshold). If you have enough karma you can decide to promote your content to the main frontpage feed (where everyone will see it by default), or a moderator can decide to promote your content (if you allowed promoting on that specific post). The frontpage itself is sorted by a scoring system based on the HN algorithm, which uses a combination of total karma and how much time has passed since the creation of the post. 
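For readers unfamiliar with it, the commonly cited form of the HN ranking function divides a post's karma by a power of its age. Here is a minimal sketch; the exact constants LessWrong 2.0 uses aren't specified in this post, so the gravity value of 1.8 is just the figure commonly quoted for HN, and the function name is hypothetical:

```python
from datetime import datetime, timedelta

def frontpage_score(karma: int, posted_at: datetime,
                    now: datetime, gravity: float = 1.8) -> float:
    """Rank a post by karma decayed by age, in the style of the commonly
    cited HN formula: (karma - 1) / (age_in_hours + 2) ** gravity.
    A higher gravity makes older posts fall off the frontpage faster."""
    age_hours = (now - posted_at).total_seconds() / 3600
    return (karma - 1) / (age_hours + 2) ** gravity

# A fresh post with modest karma can outrank an older, higher-karma one:
now = datetime(2017, 9, 15, 12, 0)
fresh = frontpage_score(karma=20, posted_at=now - timedelta(hours=1), now=now)
old = frontpage_score(karma=100, posted_at=now - timedelta(hours=48), now=now)
assert fresh > old
```

The time-decay term is what keeps the frontpage from being a pure karma leaderboard: high-karma evergreen content lives on in sequences and the featured section instead.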

If you write a good comment on a post a moderator or a high-karma user can promote that comment to the frontpage as well, where we will also feature the best comments on recent discussions. 

Meta

Meta will just be a section of the site to discuss changes to moderation policies, issues and bugs with the site, discussion about site features, as well as general site-policy issues. Basically the thing that all StackExchanges have. Karma here will not add to your total karma and will not give you more influence over the site. 

Featured posts

In addition to the main feed, there is a promoted post section that you can subscribe to via email and RSS, with on average three posts a week, which for now are just going to be chosen by moderators and editors on the site to be the posts that seem most important to turn into common knowledge for the community. 

Meetups (implementation unclear)

There will also be a separate section of the site for meetups and event announcements that will feature a map of meetups, and generally serve as a place to coordinate the in-person communities. The specific implementation of this is not yet fully figured out. 

Shortform (implementation unclear)

Many authors (including Eliezer) have requested a section of the site for more short-form thoughts, more similar to the length of an average FB post. It seems reasonable to have a section of the site for that, though I am not yet fully sure how it should be implemented. 

Why? 

The goal of this structure is to allow users to post to LessWrong without their content being directly exposed to the whole community. Their content can first be shown to the people who follow them, or the people who actively seek out content from the broader community by scrolling through all new posts. Then, if a high-karma user among them finds their content worth posting to the frontpage, it will get promoted. The key to this is a larger userbase that has the ability to promote content (i.e. many more than have the ability to promote content to main on the current LessWrong), and the continued filtering of the frontpage based on the karma level of the posts. 

The goal of all of this is to allow users to see good content at various levels of engagement with the site, while giving some personalization options so that people can follow the people they are particularly interested in, while also ensuring that this does not sabotage the attempt at building common knowledge by having the best posts from the whole ecosystem be featured and promoted on the frontpage. 

The karma system:

Another thing I’ve been working on to fix the signal-to-noise ratio is to improve the karma system. It’s important that the people having the most significant insights are able to shape a field more. If you’re someone who regularly produces real insights, you’re better able to notice and bring up other good ideas. To achieve this we’ve built a new karma system, where your upvotes and downvotes carry more weight if you have a lot of karma already. So far the current weighting is a very simple heuristic, whereby your upvotes and downvotes count for log base 5 of your total karma. Ben and I will post another top-level post to discuss just the karma system at some point in the next few weeks, but feel free to ask any questions now, and we will just include those in that post.
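As a rough sketch of the heuristic described above — the rounding behavior and the minimum weight of 1 are my assumptions for illustration, not confirmed implementation details:

```python
import math

def vote_weight(total_karma: int) -> int:
    """A vote counts for log base 5 of the voter's total karma."""
    if total_karma < 5:
        return 1  # assumed floor: every user's vote counts at least once
    return int(math.log(total_karma, 5))
```

Under this scheme a user needs 25 karma before their votes count double, and 125 before they count triple, so vote weight grows much more slowly than karma itself.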

(I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation. How trusted you are as a user (your karma) is based on how much trusted users upvote you, and the circularity of this definition is solved using linear algebra.)
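A minimal sketch of the eigendemocracy idea: trust flows through upvotes, and the circular definition is resolved by power iteration, as in PageRank. The damping factor and the handling of users who cast no votes are assumptions for illustration, not a description of any actual implementation:

```python
def eigen_karma(upvotes, damping=0.85, iters=100):
    """upvotes[i][j] = number of times user i upvoted user j.
    Returns a trust score per user: upvotes from trusted users count more."""
    n = len(upvotes)
    trust = [1.0 / n] * n
    for _ in range(iters):
        # Every user keeps a small baseline of trust regardless of votes.
        new = [(1 - damping) / n] * n
        for i, row in enumerate(upvotes):
            total = sum(row)
            if total == 0:
                # A user who votes for nobody spreads their trust uniformly.
                for j in range(n):
                    new[j] += damping * trust[i] / n
            else:
                # Otherwise they distribute their trust in proportion to their votes.
                for j, v in enumerate(row):
                    new[j] += damping * trust[i] * v / total
        trust = new
    return trust
```

The scores sum to 1 and converge to the stationary distribution of the vote graph, so a user upvoted by already-trusted users ends up with more trust than one upvoted the same number of times by low-trust accounts.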

I am also interested in having some form of two-tiered voting, similarly to how Facebook has a primary vote interaction (the like) and a secondary interaction that you can access via a tap or a hover (angry, sad, heart, etc.). But the implementation of that is also currently undetermined. 

III

The third and last bottleneck is an actually working moderation system that is fun for moderators to use, while also giving people whose content was moderated a sense of why, and how they can improve. 

The most common, basic complaint currently on LessWrong pertains to trolls and sockpuppet accounts, which the reddit fork’s mod tools are vastly inadequate for dealing with (Scott's sixth point refers to this). Raymond Arnold and I are currently building more nuanced mod tools, including the ability for moderators to set the past/future votes of a user to zero, to see who upvoted a post, and to know the IP address that an account comes from (this will be ready by the open beta). 

Besides that, we are currently working on cultivating a moderation group we are calling the “Sunshine Regiment.” Members of the Sunshine Regiment will have the ability to take various smaller moderation actions around the site (such as temporarily suspending comment threads, making general moderating comments in a distinct font, and promoting content), and so will be able to shape the culture and content of the website to a larger degree.

The goal is moderation that goes far beyond dealing with trolls, and actively makes the epistemic norms a ubiquitous part of the website. Right now Ben Pace is thinking about moderation norms that encourage archiving and summarizing good discussion, as well as other patterns of conversation that will help the community make intellectual progress. He’ll be posting to the open beta to discuss what norms the site and moderators should have in the coming weeks. We're both in agreement that moderation can and should be improved, and that moderators need better tools, and would appreciate good ideas about what else to give them.


How you can help and issues to discuss:

The open beta of the site is starting in a week, and so you can see all of this for yourself. For the duration of the open beta, we’ll continue the discussion on the beta site. At the conclusion of the open beta, we plan to have a vote open to those who had a thousand karma or more on 9/13 to determine whether we should move forward with the new site design, which would move to the lesswrong.com url from its temporary beta location, or leave LessWrong as it is now. (As this would represent the failure of the plan to revive LW, this would likely lead to the site being archived rather than staying open in an unmaintained state.) For now, this is an opportunity for the current LessWrong community to chime in here and object to anything in this plan.

During the open beta (and only during that time) the site will also have an Intercom button in the bottom right corner that allows you to chat directly with us. If you run into any problems, or notice any bugs, feel free to ping us directly on there and Ben and I will try to help you out as soon as possible.

Here are some issues where I think discussion would be particularly fruitful: 

  • What are your thoughts about the karma system? Does an eigendemocracy based system seem reasonable to you? How would you implement the details? Ben and I will post our current thoughts on this in a separate post in the next two weeks, but we would be interested in people’s unprimed ideas.
  • What are your experiences with the site so far? Is anything glaringly missing, or are there any bugs you think I should definitely fix? 
  • Do you have any complaints or thoughts about how work on LessWrong 2.0 has been proceeding so far? Are there any worries or issues you have with the people working on it? 
  • What would make you personally use the new LessWrong? Is there any specific feature that would make you want to use it? For reference, here is our current feature roadmap for LW 2.0.
  • And most importantly, do you think that the LessWrong 2.0 project is doomed to failure for some reason? Is there anything important I missed, or something that I misunderstood about the existing critiques?
The beta can be found at www.lesserwrong.com.

Ben, Vaniver, and I will be in the comments!

Comments (294)

Comment author: ciphergoth 15 September 2017 03:53:27AM *  18 points [-]

Thank you all so much for doing this!

Eigenkarma should be rooted in the trust of a few accounts that are named in the LW configuration. If this seems unfair, then I strongly encourage you not to pursue fairness as a goal at all - I'm all in favour of a useful diversity of opinion, but I think Sybil attacks make fairness inherently synonymous with trivial vulnerability.

I am not sure whether votes on comments should be treated as votes on people. I think that some people might make good comments who would be bad moderators, while I'd vote up the weight of Carl Schulman's votes even if he never commented.

The feature map link seems to be absent.

Comment author: Habryka 15 September 2017 04:25:56AM 2 points [-]

Feature roadmap link fixed!

Comment author: DragonGod 15 September 2017 03:42:29PM 2 points [-]

What are Sybil attacks?

Comment author: KaynanK 15 September 2017 04:51:25PM 4 points [-]

Comment author: gbear605 15 September 2017 04:23:12AM 4 points [-]

I'd love to see achieved the goal of an active rationalist-hub and I think this might be a method that can lead to it.

Ironically, after looking at the post you made on lesserwrong that combines various Facebook posts, Eliezer unknowingly demonstrates the exact issue: "because of that thing I wrote on FB somewhere" On one of his old LW posts, he would have linked to it. Instead, the explanation is missing for those who aren't up to date on his entire FB feed.

Thanks for the work that you've put into this.

Comment author: Benito 15 September 2017 04:56:07AM 16 points [-]

We've actually talked a bit with Eliezer about importing his past and future facebook and tumblr essays to LW 2.0, and I think this is a plausible thing we'll do after launch. I think it will be good to have his essays be more linkable and searchable (and the people I've said this to tend to excitedly agree with me on this point).

(I'm Ben Pace, the other guy working full time on LW 2.0)

Comment author: ingres 15 September 2017 09:53:44PM *  11 points [-]

Please do this. This alone would be enough to get me to use and link LW 2.0, at least to read stuff on it.

UPDATE (Fri Sep 15 14:56:28 PDT 2017): I'll put my money where my mouth is. If the LW 2.0 team uploads at least 15 pieces of content authored by EY of a length at least one paragraph each from Facebook, I'll donate 20 dollars to the project.

Preferably in a way where I can individually link them, but just dumping them on a public web page would also be acceptable in strict terms of this pledge.

Comment author: philh 15 September 2017 12:01:24PM 8 points [-]

(As it happens, that particular post ("why you absolutely need 4 layers of conversation in order to have real progress") was un-blackholed by Alyssa Vance: https://rationalconspiracy.com/2017/01/03/four-layers-of-intellectual-conversation/)

Comment author: richard_reitz 15 September 2017 04:47:53AM *  12 points [-]

if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people navigate how much of the content of the page you are familiar with.

Idea: give sequence-writers the option to include quizzes because this (1) demonstrates a badgeholder actually understands what the badge indicates they understand (or, at least, are more likely to) and (2) leverages the testing effect.

I await the open beta eagerly.

Comment author: Raemon 15 September 2017 06:47:51AM 2 points [-]

leverages the which?

In any case, I like the idea, although it may be in the backlog for awhile.

Comment author: Raemon 15 September 2017 06:48:42AM 9 points [-]

In any case, I like the idea, although it may be in the backlog for awhile.

Although, it occurs to me that the benefit of an open source codebase that actually is reasonable to learn is that anyone that wants something like this to happen can just make it happen.

Comment author: richard_reitz 15 September 2017 12:33:16PM 2 points [-]

Testing effect.

(At this point, I should really know better than to trust myself to write anything at 1 in the morning.)

Comment author: ciphergoth 15 September 2017 09:24:35PM 8 points [-]

Also I have already read them all more than once and don't plan to do so again just to get the badge :)

Comment author: WhySpace 15 September 2017 08:29:07AM *  6 points [-]

I'm not really sure how shortform stuff could be implemented either, but I have a suggestion on how it can be used: jokes!

Seriously. If you look at Scott's writing, for example, one of the things which makes it so gripping is the liberal use of amusing phrasing, and mildly comedic exaggerations. Not the sort of thing that makes you actually laugh, but just the sort of thing that is mildly amusing. And, I believe he specifically recommended it in his blog post on writing advice. He didn't phrase his reasoning quite like this, but I think of it as little bits of positive reinforcement to keep your system 1 happy while your system 2 does the analytic thinking stuff to digest the piece.

Now, obviously this could go overboard, since memetics dictates that short, likeable things will get upvoted faster than long, thoughtful things, outcompeting them. But, I don't think we as a community are currently at risk of that, especially with the moderation techniques described in the OP.

And, I don't mean random normal "guy walks into a bar" jokes. I mean the sort of thing that you see in the comments on old LW posts, or on Weird Sun Twitter. Jokes about Trolley Problems and Dust Specks and Newcomb-like problems and negative Utilitarians. "Should Pascal accept a mugging at all, if there's even a tiny chance of another mugger with a better offer?" Or maybe "In the future, when we're all mind-uploads, instead of arguing about the simulation argument we'll worry about being mortals in base-level reality. Yes, we'd have lots of memories of altering the simulation, but puny biological brains are error-prone, and hallucinate things all the time."

I think a lot of the reason social media is so addictive is the random dopamine injections. People could go to more targeted websites for more of the same humor, but those get old quickly. The random mix of serious info intertwined with joke memes provides novelty and works well together. The ideal for a more intellectual community should probably be more like 90-99% serious stuff, with enough fun stuff mixed in to avoid akrasia kicking in and pulling us toward a more concentrated source.

The implementation implications would be to present short-form stuff between long-form stuff, to break things up and give readers a quick break.

Comment author: John_Maxwell_IV 15 September 2017 09:00:11AM *  12 points [-]

Sounds great!

Is there anything important I missed

This analysis found that LW's most important issue is lack of content. I think there are two models that are most important here.

There's the incentives model: making it so good writers have a positive hedonic expectation for creating content. There's a sense in which an intellectual community online is much more fragile than an intellectual community in academia: academic communities can offer funding, PhDs, etc. whereas internet discussion is largely motivated by pleasure that's intrinsic to the activity. As a concrete example, the way Facebook lets you see the name of each person who liked your post is good, because then you can take pleasure in each specific person who liked it, instead of just knowing that X strangers somewhere on the internet liked it. Contrast with academia, which plods on despite frequently being hellish.

And then there's the chicken-and-egg model. Writers go where the readers are and readers go where the writers are. Interestingly, sometimes just 1 great writer can solve this problem and bootstrap a community: both Eliezer and Yvain managed to create communities around their writing single-handedly.

The models are intertwined, because having readers is a powerful incentive for writers.

My sense is that LW currently performs poorly according to both models, and although there's a lot of great stuff here, it's not clear to me that any of the proposed actions are going to attack either of these issues head on.

Comment author: Habryka 16 September 2017 11:07:09PM 5 points [-]

Thanks! :)

I agree with the content issue, and ultimately having good content on the page is one of the primary goals that guided all the modeling in the post. Good content is downstream from having a functioning platform and an active community that attracts interesting people and has some pointers on how to solve interesting problems.

I like your two models. Let me think about both of them...

The hedonic incentive model is one that I tend to use quite often, especially when it comes to the design of the page, but I didn't go into too much in this post because talking about it would inevitably involve a much larger amount of details. I've mentioned "making sure things are fun" a few times, but going into the details on how I am planning to achieve this would require me talking about the design of buttons, and animations and notification systems, each of which I could write a whole separate 8000 word post filled with my own thoughts. That said, it is also a ton of fun for me, and if anyone ever wants to discuss the details of any design decision on the page, I am super happy to do that.

I do feel that there is still a higher level of abstraction in the hedonic incentives model that in game design would be referred to as "the core game loop" or "the core reward loop". What is the basic sequence of actions that a user executes when he comes to your page that reliably results in positive hedonic reward? (on Facebook there are a few of those, but the most dominant one is "Go to frontpage, see you have new notifications, click notifications, see that X people have liked your content") And I don't think I currently have a super clear answer to this. I do feel like I have an answer on a System 1 level, but it isn't something I have spent enough time thinking about, and haven't clarified super much, and this comment made me realize that this is a thing I want to pay more attention to.

We hope to bootstrap the chicken-and-egg model by allowing people to practically just move their existing blogs to the LessWrong platform, either via RSS imports or by directly using their user-profile as a blog. My current sense is that in the larger rationality diaspora we have a really large amount of content, and so far almost everyone I've talked to seemed very open to having their content mirrored on LessWrong, which makes me optimistic about solving that aspect.

Comment author: casebash 15 September 2017 09:54:04AM *  7 points [-]

Firstly, well done on all your hard work! I'm very excited to see how this will work out.

Secondly, I know that this might be best after the vote, but don't forget to take advantage of community support.

I'm sure that if you set up a Kickstarter or similar, that people would donate to it, now that you've proven your ability to deliver.

I also believe that, given how many programmers we have here, many people will want to make contributions to the codebase. My understanding was that this wasn't really happening before: a) Because the old code base was extremely difficult to get up and running/messy b) Because it wasn't clear who to talk to if you wanted to know if your changes were likely to be approved if you made them.

It looks like a) has been solved, if you also improve b), then I expect a bunch of people will want to contribute.

Comment author: ingres 15 September 2017 10:40:04PM 1 point [-]

I'm going to write a top level post at some point (hopefully soon) but in the meantime I'd like to suggest the content in the original post and comments be combined into a wiki. There's a lot of information here about LW 2.0 which I wasn't previously aware of and significantly boosted my confidence in the project.

Comment author: Habryka 16 September 2017 11:26:35PM 1 point [-]

A wiki feels too high of a barrier to entry to me, though maybe there are some cool new wiki softwares that are better than what I remember.

For now I feel like having an about page on LessWrong that has links to all the posts, and tries to summarize the state of discussion and information is the better choice, until we reach the stage where LW gets a lot more open-source engagement and is being owned more by a large community again.

Comment author: SaidAchmiz 16 September 2017 11:51:22PM 3 points [-]

http://www.pmwiki.org/ is a cool new wiki softwares that is better than most things

Comment author: ingres 17 September 2017 12:32:21AM 3 points [-]

Seconding SaidAchmiz on pmwiki, it's what we use for our research project on effective online organizing and it works wonders. It's also how I plan to host and edit the 2017 survey results.

As far as the high barrier to entry goes, I'll repeat here my previous offer to set up a high quality instance of pmwiki and populate it with a reasonable set of initial content - for free. I believe this is sufficiently important that if the issue is you just don't have the capacity to get things started I'm fully willing to help on that front.

Comment author: Kaj_Sotala 15 September 2017 10:18:18AM *  6 points [-]

Thank you for doing this!

Not a comment on the overview, but on LW2.0 itself: are you intentionally de-emphasizing comment authorship by making the author names show up in a smaller font than the text of the comment? Reading the comments under the roadmap page, it feels slightly annoying that the author names are small enough that my brain ignores them instead of registering them automatically, and then I have to consciously re-focus my attention to see who wrote a comment, each time that I read a new comment.

Comment author: Habryka 16 September 2017 11:23:17PM 4 points [-]

That was indeed intentional, but after playing around with it a bit, I actually think it had a negative effect on the skimmability of comment threads, and I am planning to try out a few different solutions soon. In general I feel that I want to increase the spacing between different comments and make it easier to identify the author of a comment.

Comment author: Elo 17 September 2017 01:56:33AM 1 point [-]

I think I would prefer information density. I am annoyed by the classic mybb type forum of low density of comments and prefer the more, "Facebook" style density but it will shorten comments to go that dense. So a balance close to the current density would be my suggestion.

Comment author: cousin_it 15 September 2017 10:30:09AM *  9 points [-]

What will happen with existing LW posts and comments? I feel strongly that they should all stay accessible at their old URLs (though perhaps with new design).

Comment author: Habryka 15 September 2017 11:52:19AM 23 points [-]

All old links will continue working. I've put quite a bit of effort into that, and this was one of the basic design requirements we built the site around.

Comment author: Vaniver 15 September 2017 09:11:38PM 11 points [-]

"Basic design requirements" seems like it's underselling it a bit; this was Rule 0 that would instantly torpedo any plan where it wasn't possible.

It's also worth pointing out that we've already done one DB import (lesserwrong.com has all the old posts/comments/etc. as of May of this year) and will do another DB import of everything that's happened on LW since then, so that LW moving forward will have everything from the main site and the beta branch.

Comment author: IlyaShpitser 15 September 2017 02:28:41PM *  4 points [-]

(a) Thanks for making the effort!

(b)

"I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation."

This won't work, for the same reason PageRank did not work, you can game it by collusion. Communities are excellent at collusion. I think the important thing to do is making toxic people (defined in a socially constructed way as people you don't want around) go away. Ranking posts from best to worst in folks who remain I don't think is that helpful. People will know quality without numbers.

Comment author: Manfred 15 September 2017 05:27:50PM 8 points [-]

I think votes have served several useful purposes.

Downvotes have been a very good way of enforcing the low-politics norm.

When there's lots of something, you often want to sort by votes, or some ranking that mixes votes and age. Right now there aren't many comments per thread, but if there were 100 top-level comments, I'd want votes. Similarly, as a new reader, it was very helpful to me to look for old posts that people had rated highly.

Comment author: IlyaShpitser 15 September 2017 06:57:30PM *  0 points [-]

How are you going to prevent gaming the system and collusion?


Goodhart's law: you can game metrics, you can't game targets. Quality speaks for itself.

Comment author: Manfred 15 September 2017 07:58:46PM *  1 point [-]

Moderation is basically the only way, I think. You could try to use fancy pagerank-anchored-by-trusted-users ratings, or make votes costly to the user in some way, but I think moderation is the necessary fallback.

Goodhart's law is real, but people still try to use metrics. Quality may speak for itself, but it can be too costly to listen to the quality of every single thing anyone says.

Comment author: IlyaShpitser 15 September 2017 07:59:25PM 1 point [-]

People use name recognition in practice, works pretty well.

Comment author: tristanm 16 September 2017 09:55:10PM *  2 points [-]

Going to reply to this because I don't think it should be overlooked. It's a valid point - people tend to want to filter out information that's not from the sources they trust. I think these kind of incentive pressures are what led to the "LessWrong Diaspora" being concentrated around specific blogs belonging to people with very positive reputation such as Scott Alexander. And when people want to look at different sources of information they will follow the advice of said people usually. This is how I operate when I'm doing my own reading / research - I start somewhere I consider to be the "safest" and move out from there according to the references given at that spot and perhaps a few more steps outward.

When we use a karma / voting system, we are basically trying to calculate P(this contains useful information | this post has a high number of votes) but no voting system ever offers as much evidence as a specific reference from someone we recognize as trustworthy. The only way to increase the evidence gained from a voting system is to add further complexity to the system by increasing the amount of information contained in a vote, either by weighing the votes or by identifying the person behind the vote. And then from there you can add more to a vote, like a specific comment or a more nuanced judgement. I think the end of that track is basically what we have now, blogs by a specific person linking to other blogs, or social media like Facebook where no user is anonymous and everyone has their information filtered in some way.

Essentially I'm saying we should not ignore the role that optimization pressure has played in producing the systems we already have.

Comment author: Kaj_Sotala 17 September 2017 10:53:33AM 4 points [-]

I can use name recognition to scroll through a comment thread to find all the comments by the people that I consider in high regard, but this is much more effort than just having a karma system which automatically shows the top-voted comments first. (The karma system also doesn't discriminate against new writers as badly as relying on name recognition does.)

Comment author: Vladimir_Nesov 15 September 2017 10:17:31PM 0 points [-]

Quality may speak for itself, but it can be too costly to listen to the quality of every single thing anyone says.

Which is why there should be a way to vote on users, not content, the quantity of unevaluated content shouldn't divide the signal. This would matter if the primary mission succeeds and there is actual conversation worth protecting.

Comment author: John_Maxwell_IV 16 September 2017 07:43:07AM *  5 points [-]

How are you going to prevent gaming the system and collusion?

Keep tweaking the rules until you've got a system where the easiest way to get karma is to make quality contributions?

There probably exist karma systems which are provably non-gameable in relevant ways. For example, if upvotes are a conserved quantity (i.e. by upvoting you, I give you 1 upvote and lose 1 of my own upvotes), then you can't manufacture them from thin air using sockpuppets.

However, it also seems like for a small community, you're probably better off just moderating by hand. The point of a karma system is to automatically scale moderation up to a much larger number of people, at which point it makes more sense to hash out details. In other worse, maybe I should go try to get a job on reddit's moderator tools team.
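The conserved-upvote scheme described in this comment can be sketched as follows — the starting balance and the one-point transfer per vote are illustrative assumptions, not a worked-out proposal:

```python
class ConservedKarma:
    """Upvotes are a conserved quantity: voting transfers karma rather than
    creating it, so a ring of sockpuppets can't mint karma from nothing."""

    def __init__(self, users, starting_karma=10):
        self.karma = {u: starting_karma for u in users}

    def upvote(self, voter, author):
        if self.karma[voter] < 1:
            raise ValueError("no karma left to give")
        # The voter pays what the author receives, so total karma is constant.
        self.karma[voter] -= 1
        self.karma[author] += 1
```

Because the system-wide total never changes, sockpuppets can only redistribute the karma their creator already controls, which is the non-gameability property being claimed.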

Comment author: IlyaShpitser 16 September 2017 11:34:38AM *  1 point [-]

Keep tweaking the rules until you've got a system where the easiest way to get karma is to make quality contributions?

This will never ever work. Predicting this in advance.

There probably exist karma systems which are provably non-gameable in relevant ways.

You should tell Google and academia, they will be most interested in your ideas. Don't you think people already thought very hard about this? This is such a typical LW attitude.

Comment author: DragonGod 16 September 2017 05:03:31PM *  2 points [-]

You should tell Google and academia, they will be most interested in your ideas. Don't you think people already thought very hard about this? This is such a typical LW attitude.

This reply contributes nothing to the discussion of the problem at hand, and is quite uncharitable, I hope such replies were discouraged, and if downvoting was enabled, I would have downvoted it.

If thinking that they can solve the problem at hand (and making attempts at it) is a "typical LW attitude", then it is an attitude I want to see more of and believe should be encouraged (thus, I'll be upvoting /u/John_Maxwell_IV 's post). A priori assuming that one cannot solve a problem (that hasn't been proven/isnt known to be unsolvable) and thus refraining from even attempting the problem, isn't an attitude that I want to see become the norm in Lesswrong. It's not an attitude that I think is useful, productive, optimal or efficient.

It is my opinion, that we want to encourage people to attempt problems of interest to the community (the potential benefits are vast (e.g the problem is solved, and/or significant improvements are made on the problem, and future endeavours would have a better starting point), and the potential demerits are of lesser impact (time (ours and whoever attempts it) is wasted on an unpromising solution).

Coming back to the topic that was being discussed, I think methods of costly signalling are promising (for example, when you upvote a post you transfer X karma to the user, and you lose k*X (k < 1)).

Comment author: Vladimir_Nesov 16 September 2017 05:17:15PM 0 points [-]

A priori assuming that one cannot solve a problem

("A priori" suggests lack of knowledge to temper an initial impression, which doesn't apply here.)

There are problems one can't by default solve, and a statement, standing on its own, that it's feasible to solve them is known to be wrong. A "useful attitude" of believing something wrong is a popular stance, but is it good? How does its usefulness work, specifically, if it does, and can we get the benefits without the ugliness?

Comment author: DragonGod 16 September 2017 05:54:26PM *  0 points [-]

that hasn't been proven/isn't known to be unsolvable

An optimistic attitude towards problems that are potentially solvable is instrumentally useful—and dare I argue—instrumentally rational. The drawbacks of encouraging an optimistic attitude towards open problems are far outweighed by the potential benefits.

Comment author: Vladimir_Nesov 16 September 2017 06:09:25PM *  0 points [-]

(The quote markup in your comment designates a quote from your earlier comment, not my comment.)

You are not engaging the distinction I've drawn. Saying "It's useful" isn't the final analysis, there are potential improvements that avoid the horror of intentionally holding and professing false beliefs (to the point of disapproving of other people pointing out their falsehood; this happened in your reply to Ilya).

The problem of improving over the stance of an "optimistic attitude" might be solvable.

Comment author: DragonGod 16 September 2017 08:32:04PM *  0 points [-]

(The quote markup in your comment designates a quote from your earlier comment, not my comment.)

I know: I was quoting myself.

Saying "It's useful" isn't the final analysis

I guess for me it is.

there are potential improvements that avoid the horror of intentionally holding and professing false beliefs (to the point of disapproving of other people pointing out their falsehood; this happened in your reply to Ilya)

The beliefs aren't known to be false. It is not clear to me, that someone believing they can solve a problem (that isn't known/proven or even strongly suspected to be unsolvable) is a false belief.

What do you propose to replace the optimism I suggest?

Comment author: IlyaShpitser 16 September 2017 11:16:52PM *  1 point [-]

I have been here for a few years, I think my model of "the LW mindset" is fairly good.


I suppose the general thing I am trying to say is: "speak less, read more." But at the end of the day, this sort of advice is hopelessly entangled with status considerations. So it's hard to give to a stranger, and have it be received well. Only really works in the context of an existing apprenticeship relationship.

Comment author: DragonGod 17 September 2017 12:58:02AM 0 points [-]

Status games aside, the sentiments expressed in my reply are my real views on the matter.

Comment author: John_Maxwell_IV 17 September 2017 02:05:44AM *  2 points [-]

Don't you think people already thought very hard about this?

Can you show me 3 peer-reviewed papers which discuss discussion site karma systems that differ meaningfully from reddit's, and 3 discussion sites that implement karma systems that differ from reddit's in interesting ways? If not, it seems like a neglected topic to me.

Maybe I'm just not very good at doing literature searches. I did a search on Google Scholar for "reddit karma" and found only one paper which focuses on reddit karma. It's got brilliant insights such as

The aforementioned conflict between idealistically and quantitatively motivated contributions has however led to a discrepancy between value assessments of content.

...

This is such a typical LW attitude.

I believe Robin Hanson when he says academics neglect topics if they are too weird-seeming. Do you disagree?

It's certainly plausible that there is academic research relevant to the design of karma systems, but I don't see why the existence of such research is a compelling reason to not spend 5 minutes thinking about the question from first principles on my own. Relevant quote.

Coincidentally, just a couple days ago I was having a conversation with a math professor here at UC Berkeley about the feasibility of doing research outside of academia. The professor's opinion was that this is very difficult to do in math, because math is a very "vertical" field where you have to climb to the top before making a contribution, and as long as you are going to spend half a decade or more climbing to the top, you might as well do so within the structure of academia. However, the professor did not think this was true of computer science (see: stuff like Bitcoin which did not come out of academia).

Comment author: IlyaShpitser 17 September 2017 03:33:17PM *  3 points [-]

Maybe I'm just not very good at doing literature searches. I did a search on Google Scholar for "reddit karma" and found only one paper which focuses on reddit karma.

You can't do lit searches with google. Here's one paper with a bunch of references on attacks on reputation systems, and reputation systems more generally:

https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36757.pdf

You are right that lots of folks outside of academia do research on this, in particular game companies (due to toxic players in multiplayer games). This is far from a solved problem -- Valve, Riot and Blizzard spend an enormous amount of effort on reputation systems.


I don't see why the existence of such research is a compelling reason to not spend 5 minutes thinking about the question from first principles on my own.

I don't think there is a way to write this in a way that doesn't sound mean: because you are an amateur. Imo, the best way for amateurs to proceed is to (a) trust experts, (b) read expert stuff, and (c) mostly not talk. Chances are, your 5 minute thoughts on the matter are only adding noise to the discussion. In principle, taking expert consensus as the prior is a part of rationality. In practice, people ignore this part because it is not a practice that is fun to follow. It's much more fun to talk than to read papers.

LW's love affair with amateurism is one of the things I hate most about its culture.


My favorite episode in the history of science is how science "forgot" what the cure of scurvy was. In order for human civilization not to forget things, we need to be better about (a), (b), (c) above.

Comment author: DragonGod 17 September 2017 08:22:11PM *  2 points [-]

(Upvoted).

Chances are, your 5 minute thoughts on the matter are only adding noise to the discussion.

This is where we differ; I think the potential for substantial contribution vastly outweighs any "noise" that may be caused by amateurs taking stabs at the problem. I do not think all the low-hanging fruit is gone (and if it were, how would we know?), I think that amateurs are capable of substantial contributions in several fields, and I think that optimism towards open problems is a more productive attitude.

I support "LW's love affair with amateurism", and it's a part of the culture I wouldn't want to see disappear.

Comment author: Kaj_Sotala 16 September 2017 03:34:59PM 9 points [-]

Curious as to why you think that LW2.0 will have a problem with gaming karma when LW1.0 hasn't had such a problem (unless you count Eugine, and even if you do, we've been promised the tools for dealing with Eugines now).

Comment author: Habryka 17 September 2017 12:05:58AM 4 points [-]

I think this roughly summarizes my perspective on this. Karma seems to work well for a very large range of online forums and applications. We didn't really have any problems with collusion on LW outside of Eugine, and that was a result of a lack of moderator tools, not a problem with the karma system itself.

I agree that you should never fully delegate your decision making process to a simple algorithm, that's what the value-loading problem is all about, but that's what we have moderators and admins for. If we see suspicious behavior in the voting patterns we investigate and if we find someone is gaming the system we punish them. This is how practically all social rules and systems get enforced.

Comment author: IlyaShpitser 17 September 2017 03:46:13PM *  0 points [-]

LW1.0's problem with karma is that karma isn't measuring anything useful (certainly not quality). How can a distributed voting system decide on quality? Quality is not decided by majority vote.

The biggest problem with karma systems is in people's heads -- people think karma does something other than what it does in reality.

Comment author: Kaj_Sotala 17 September 2017 04:02:12PM 4 points [-]

LW1.0's problem with karma is that karma isn't measuring anything useful (certainly not quality).

That's the exact opposite of my experience. Higher-voted comments are consistently more insightful and interesting than low-voted ones.

Quality is not decided by majority vote.

Obviously not decided by it, but aggregating lots of individual estimates of quality sure can help discover the quality.

Comment author: Vladimir_Nesov 17 September 2017 04:15:34PM 1 point [-]

Higher-voted comments are consistently more insightful and interesting than low-voted ones.

This was also my experience (on LW) several years ago, but not recently. On Reddit, I don't see much difference between highly- and moderately-upvoted comments, only poorly-upvoted comments (in a popular thread) are consistently bad.

Comment author: IlyaShpitser 17 September 2017 04:34:02PM *  0 points [-]

aggregating lots of individual estimates of quality sure can help discover the quality.

I guess we fundamentally disagree. Lots of people with no clue about something aren't going to magically transform into a method for discerning clue regardless of aggregation method -- garbage in garbage out. For example: aggregating learners in machine learning can work, but requires strong conditions.

Comment author: tristanm 17 September 2017 06:04:56PM *  1 point [-]

Hopefully this question is not too much of a digression - but has anyone considered using something like Arxiv-Sanity but instead of for papers it could include content (blog posts, articles, etc.) produced by the wider rationality community? Because at least with that you are measuring similarity to things you have already read and liked, things other people have read and liked, or things people are linking to and commenting on, and you can search things pretty well based on content and authorship. Ranking things by (what people have stored in their library and are planning on taking time to study) might contain more information than karma.

Comment author: DragonGod 17 September 2017 08:14:48PM 0 points [-]

Karma serves as an indicator of the reception that certain content got. High karma means several people liked it. Negative karma means it was very disliked, etc.

Comment author: SaidAchmiz 15 September 2017 07:31:16PM 4 points [-]

Strongly seconded. I think there should be no karma system.

I commented on LW 2.0 itself about another reason why a karma system is bad.

Comment author: IlyaShpitser 15 September 2017 08:00:32PM 1 point [-]

Yeah I agree that people need to weigh experts highly. LW pays lip service to this, but only that -- basically as soon as people have a strong opinion, experts get discarded. Started with EY.

Comment author: Vaniver 16 September 2017 01:42:41AM 2 points [-]

My impression of how to do this is to give experts an "as an expert, I..." vote. So you could see that a post has 5 upvotes and a beaker downvote, and say "hmm, the scientist thinks this is bad and other people think it's good."

Multiple flavors lets you separate out different parts of the comment in a way that's meaningfully distinct from the Slashdot-style "everyone can pick a descriptor;" you don't want everyone to be able to say "that's funny," just the comedians.

This works somewhat better than simple vote weighting because it lets people say whether they're doing this as just another reader or 'in their professional capacity;' I want Ilya's votes on stats comments to be very highly weighted and I want his votes on, say, rationality quotes to be weighted roughly like anyone else's.

Of course, this sketch has many problems of its own. As written, I lumped many different forms of expertise into "scientist," and you're trusting the user to vote in the right contexts.
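One way to picture that context-dependence is a weighting function where the expert flag only matters when the post's topic falls inside the voter's declared expertise (topic names and the multiplier below are made up for illustration, not a proposed design):

```python
# Sketch of context-dependent expert vote weighting: an "as an expert"
# vote is boosted only when the post's topic matches the voter's
# expertise; otherwise it counts like anyone else's. Illustrative only.

def vote_weight(voter_expertise, post_topic, as_expert, expert_multiplier=5):
    if as_expert and post_topic in voter_expertise:
        return expert_multiplier
    return 1
```

So a statistician's expert vote on a stats comment would be weighted heavily, while the same user's vote on a rationality quote would count as an ordinary vote.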

Comment author: SaidAchmiz 16 September 2017 04:01:04AM 3 points [-]

If you have a more-legible quality signal (in the James C. Scott sense of "legibility"), and a less-legible quality signal, you will inevitably end up using the more-legible quality signal more, and the less-legible one will be ignored—even if the less-legible one is tremendously more accurate and valuable.

Your suggestion is not implausible on its face, but the devil is in the details. No doubt you know this, as you say "this sketch has many problems of its own". But these details and problems conspire to make such a formalized version of the "expert's vote" either substantially decoupled from what it's supposed to represent, or not nearly as legible as the simple "people's vote". In the former case, what's the point? In the latter case, the result is that the "people's vote" will remain much more influential on visibility, ranking, inclusion in canon, contribution to a member's influence in various ways, and everything else you might care to use such formalized rating numbers for.

The question of reputation, and of whose opinion to trust and value, is a deep and fundamental one. I don't say it's impossible to algorithmize, but if possible, it is surely quite difficult. And simple karma (based on unweighted votes) is, I think, a step in the wrong direction.

Comment author: ingres 16 September 2017 04:17:28AM 1 point [-]

As far as an algorithm for reputation goes, academia seems to have something that sort of scales in the form of citations and co-authors:

http://www.overcomingbias.com/2017/08/the-problem-with-prestige.html

It's certainly a difficult problem however.

Comment author: IlyaShpitser 17 September 2017 02:05:36AM *  0 points [-]

Vaniver, I sympathize with the desire to automate figuring out who experts are via point systems, but consider that even in academia (with a built-in citation pagerank), people still rely on names. That's evidence about pagerank systems not being great on their own. People game the hell out of citations.


Probably should weigh my opinion of rationality stuff quite low, I am neither a practitioner nor a historian of rationality. I have gotten gradually more pessimistic about the whole project.

Comment author: Vaniver 15 September 2017 09:15:54PM 5 points [-]

Oli and I disagree somewhat on voting systems. I think you get a huge benefit from doing voting at all, a small benefit from doing simple weighted voting (including not allowing people below ~10 karma to vote), and then there's not much left from complicated vote-weighting schemes (like eigenkarma or so on). Part of this is because more complicated systems don't necessarily have more complicated gaming mechanics.

There are empirical questions involved; we haven't looked at, for example, the graph of what karma converges to if you use my simplistic vote weighting scheme vs. an eigenkarma scheme, but my expectation is a very high correlation. (I'd be very surprised if it were less than .8, and pretty surprised if it were less than .95.)

I expect the counterfactual questions--"how would Manfred have voted if we were using eigenkarma instead of simple aggregation?"--to not make a huge difference in practice, altho they may make a difference for problem users.
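For concreteness, the two schemes being compared can be sketched as follows; the vote data, damping factor, and iteration count are all illustrative, and the eigenkarma variant here is just one of many possible formulations (votes weighted by the voter's own iteratively computed score, PageRank-style):

```python
# Toy comparison of simple karma (unweighted vote counts) with an
# eigenkarma-style scheme, to illustrate how correlated the resulting
# rankings tend to be. Data and parameters are illustrative only.

def simple_karma(votes, users):
    # votes: list of (voter, target) upvote pairs
    karma = {u: 0 for u in users}
    for _voter, target in votes:
        karma[target] += 1
    return karma

def eigenkarma(votes, users, d=0.85, iterations=50):
    # Each user's weight is a damped sum of their voters' weights,
    # split across each voter's outgoing votes; iterate to a fixed point.
    outdeg = {u: 0 for u in users}
    for voter, _target in votes:
        outdeg[voter] += 1
    w = {u: 1.0 for u in users}
    for _ in range(iterations):
        w = {
            u: (1 - d) + d * sum(w[v] / outdeg[v]
                                 for v, t in votes if t == u)
            for u in users
        }
    return w

users = ["a", "b", "c"]
votes = [("a", "b"), ("c", "b"), ("b", "c")]
```

On this tiny example both schemes produce the same ranking, which is the kind of high correlation I'd expect in practice.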

Comment author: IlyaShpitser 15 September 2017 09:18:40PM 1 point [-]

What's the benefit? Also, what's the harm? (to you)

Comment author: Vaniver 15 September 2017 11:17:13PM 8 points [-]

Main benefits to karma are feedback for writers (both informative and hedonic) and sorting for attention conservation. Main costs are supporting the underlying tech, transparency / explaining the system, and dealing with efforts to game it.

(For example, if we just clicked a radio button and we had eigenkarma, I would be much more optimistic about it. As is, there are other features I would much rather have.)

Comment author: Habryka 17 September 2017 12:02:18AM *  10 points [-]

"This won't work, for the same reason PageRank did not work"

I am very confused by this. Google's search vastly outperformed its competitors with PageRank and is still using a heavily tweaked version of PageRank to this day, delivering by far the best search on the market. It seems to me that PageRank should widely be considered to be the most successful reputation algorithm that has ever been invented, having demonstrated extraordinary real-world success. In what way does it make sense to say "PageRank did not work"?
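For reference, the core of PageRank as described in the original paper is only a few lines of power iteration; this sketch assumes every page has at least one outgoing link, and the link data in the test is made up:

```python
# Bare-bones PageRank: rank flows along links, with damping factor d;
# iterate toward the fixed point. Assumes every page has out-links.

def pagerank(links, d=0.85, iterations=50):
    # links: dict mapping each page to the list of pages it links to
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += d * rank[p] / len(outs)
        rank = new
    return rank
```

Production search ranking obviously involves far more than this, which is part of the disagreement here, but the algorithm itself really is this simple at its core.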

Comment author: IlyaShpitser 17 September 2017 01:08:59AM *  4 points [-]

Google is using a much more complicated algorithm that is constantly tweaked, and is a trade secret -- precisely because as soon as it became profitable to do so, the ecosystem proceeded to game the hell out of PageRank.

Google hasn't been using PageRank-as-in-the-paper for ages. The real secret sauce behind Google is not eigenvalues, it's the fact that it's effectively anti-inductive, because the algorithm isn't open and there is an army of humans looking for attempts to game it, and modifying it as soon as such an attempt is found.

Comment author: Wei_Dai 17 September 2017 02:04:03AM 8 points [-]

Given that, it seems equally valid to say "this will work, for the same reason that PageRank worked", i.e., we can also tweak the reputation algorithm as people try to attack it. We don't have as much resources as Google, but then we also don't face as many attackers (with as strong incentives) as Google does.

I personally do prefer a forum with karma numbers, to help me find quality posts/comments/posters that I would likely miss or have to devote a lot of time and effort to sift through.

Comment author: IlyaShpitser 17 September 2017 03:05:01PM 2 points [-]

It's not PageRank that worked, it's anti-induction that worked. PageRank did not work, as soon as it faced resistance.

Comment author: ZorbaTHut 17 September 2017 12:19:54PM 4 points [-]

FWIW, I worked at Google about a decade ago, and even then, PageRank was basically no longer used. I can't imagine it's gotten more influence since.

It did work, but I got the strong sense that it no longer worked.

Comment author: ESRogs 17 September 2017 09:35:57AM 7 points [-]

Ranking posts from best to worst in folks who remain I don't think is that helpful. People will know quality without numbers.

Ranking helps me know what to read.

The SlateStarCodex comments are unusable for me because nothing is sorted by quality, so what's at the top is just whoever had the fastest fingers and least filter.

Maybe this isn't a problem for fast readers (I am a slow reader), but I find automatic sorting mechanisms to be super useful.

Comment author: Kaj_Sotala 17 September 2017 10:55:12AM 7 points [-]

This. SSC comments I basically only read if there are very few of them, because of the lack of karma; on LW even large discussions are actually readable, thanks to karma sorting.

Comment author: IlyaShpitser 17 September 2017 03:13:13PM 1 point [-]

That's an illusion of readability though, it's only sorting in a fairly arbitrary way.

Comment author: ESRogs 17 September 2017 05:33:12PM 8 points [-]

As long as it's not anti-correlated with quality, it helps.

It doesn't matter if the top comment isn't actually the very best comment. So long as the system does better than random, I as a reader benefit.

Comment author: Dustin 17 September 2017 05:50:15PM *  5 points [-]

Over the years I've gone through periods of time where I can devote the effort/time to thoroughly reading LW and periods of time where I can basically just skim it.

Because of this I'm in a good position to judge the reliability of karma in surfacing content for its readability.

My judgement is that karma strongly correlates with readability.

Comment author: DragonGod 15 September 2017 03:45:31PM 2 points [-]

This sounds very promising. The UI looks like a site from 2017 as well (as opposed to the previous 2008 feel). The design is very aesthetically pleasing.

I'm very excited about the personal blog feature (posting our articles to our page is basically like a blog).

How long would the open beta last?

Comment author: Manfred 15 September 2017 07:49:32PM 1 point [-]

The only thing I don't like about the "2017 feel" is that it sometimes feel like you're just adrift in the text, with no landmarks. Sometimes you just want guides to the eye, and landmarks to keep track of how far you've read!

Comment author: DragonGod 16 September 2017 03:01:02PM 0 points [-]

I haven't run into that problem, but I'm reading from my phone, and Chrome tracks where I've scrolled to.

Comment author: ozymandias 15 September 2017 03:55:59PM *  13 points [-]

Thank you for making this website! It looks really good and like someplace I might want to crosspost to.

If I may make two suggestions:

(1) It doesn't seem clear whether Less Wrong 2.0 will also have a "no politics" norm, but if it doesn't I would really appreciate a "no culture war" tag which alerts the moderators to nuke discussion of race, gender, free speech on college campuses, the latest outrageous thing [insert politician here] did, etc. I think that culture war stuff is salacious enough that people love discussing it in spite of its obvious unimportance, and it would be good to have a way to dissuade that. Personally, I've tended to avoid online rationalist spaces where I can't block people who annoy me, because culture war stuff keeps coming up and when interacting with certain people I get defensive and upset and not in a good frame for discussion at all.

(2) Some inconspicuous way of putting in assorted metadata (content warnings, epistemic statuses, that sort of thing) so that interested people can look at them but they are not taking up the first 500 words of the post.

Comment author: Regex 15 September 2017 04:22:08PM 4 points [-]

The way culture war stuff is dealt with on the various Discord servers is by having a place to dump it all. This is often hidden to begin with, and opt-in only, so people only become aware of it when they start trying to discuss it.

Comment author: Habryka 16 September 2017 10:55:28PM 4 points [-]

I've also been thinking quite a bit about certain tags on posts requiring a minimum karma for commenters. The minimum karma wouldn't have to be too high (e.g. 10-20 karma might be enough), but it would keep out people who only sign up to discuss highly political topics.

Comment author: Bakkot 15 September 2017 05:22:13PM 15 points [-]

I would strongly support just banning culture war stuff from LW 2.0. Those conversations can be fun, but they require disproportionately large amounts of work to keep the light / heat ratio decent (or indeed > 0), and they tend to dominate any larger conversation they enter. Besides, there's enough places for discussion of those topics already.

(For context: I moderate /r/SlateStarCodex, which gets several thousand posts in its weekly culture war thread every single week. Those discussions are a lot less bad than culture war discussions on the greater internet, I think, and we do a pretty good job keeping discussion to that thread only, but maintaining both of these requires a lot of active moderation, and the thread absolutely affects the tone of the rest of the subreddit even so.)

Comment author: ozymandias 15 September 2017 06:55:16PM *  5 points [-]

I'm not sure if I agree with banning it entirely. There are culture-war-y discussions that seem relevant to LW 2.0: for instance, people might want to talk about sexism in the rationality community, free speech norms, particular flawed studies that touch on some culture-war issue, dating advice, whether EAs should endorse politically controversial causes, nuclear war as existential risk, etc.

OTOH a policy that people should post this sort of content on their own private blogs seems sensible. There are definite merits in favor of banning culture war things. In addition to what you mention, it's hard to create a consensus about what a "good" culture war discussion is. To pick a fairly neutral example, my blog Thing of Things bans neoreactionaries on sight while Slate Star Codex bans the word in the hopes of limiting the amount they take over discussion; the average neoreactionary, of course, would strongly object to this discriminatory policy.

Comment author: Bakkot 15 September 2017 09:21:37PM 3 points [-]

I think - I hope - we could discuss most of those without getting into the more culture war-y parts, if there were sufficiently strong norms against culture war discussions in general.

Maybe just opt-in rather than opt-out would be sufficient, though. That is, you could explicitly choose to allow CW discussions on your post, but they'd be prohibited by default.

Comment author: philh 15 September 2017 05:25:16PM 7 points [-]

I would really appreciate a "no culture war" tag which alerts the moderators to nuke discussion of race, gender, free speech on college campuses, the latest outrageous thing [insert politician here] did, etc.

To clarify: you want people to be able to apply this tag to their own posts, and in posts with it applied, culture war discussion is forbidden?

I approve of this.

I also wonder if it would be worth exploring a more general approach, where submitters have some limited mod powers on their own posts.

Comment author: ozymandias 15 September 2017 06:42:00PM 5 points [-]

Yes, that was my intent.

I believe the plan is to eventually allow some trusted submitters to e.g. ban people from commenting on their posts, but I would hope the "no culture war" tag could be applied even by people whom the mod team doesn't trust with broader moderation powers.

Comment author: Vaniver 15 September 2017 09:22:40PM 8 points [-]

I expect the norm to be "no culture war" and "no politics" but there to be some flexibility. I don't want to end up with a LW where, say, this SSC post would be banned, and banning discussions of the rationality community that might get uncomfortable seems bad, and so on, but also I don't want to end up with a LW that puts other epistemic standards in front of rationality ones. (One policy we joked about was "no politics, unless you're Scott," and something like allowing people to put it on their personal page but basically never promoting it accomplishes roughly the same thing.)

Comment author: ozymandias 16 September 2017 01:25:38AM 7 points [-]

Sorry, this might not be clear from the comment, but as a prospective writer I was primarily thinking about the comments on my posts. Even if I avoid culture war stuff in my posts, the comment section might go off on a tangent. (This is particularly a concern for me because of course my social-justice writing is the most well-known, so people might be primed to bring it up.) On my own blog, I tend to ban people who make me feel scared and defensive; if I don't have this capability and people insist on talking about culture-war stuff in the comments of my posts anyway, being on LW 2.0 will probably be unpleasant and aversive enough that I won't want to do it. Of course, I'm just one person and it doesn't make sense to set policy based on luring me in specific; however, I suspect this preference is common enough across political ideologies that having a way to accommodate it would attract more writers.

Comment author: Vaniver 16 September 2017 01:50:37AM 3 points [-]

Got it; I expect the comments to have basically the same rules as the posts, and for you to be able to respond in some low-effort fashion to people derailing posts with culture war (by, say, just flagging a post and then the Sunshine Regiment doing something about it).

Comment author: Habryka 16 September 2017 10:54:03PM 0 points [-]

Yeah, that's roughly what I've been envisioning as well.

Comment author: NancyLebovitz 15 September 2017 04:19:49PM *  10 points [-]

Thank you for developing this.

I'm reminded of an annoying feature of LW 1.0. The search function was pretty awful. The results weren't even in reverse chronological order.

I'm not sure how important better search is, but considering your very reasonable emphasis on continuity of discussion, it might matter a lot.

Requiring tags while offering a list of standard tags might also help.

Comment author: ingres 15 September 2017 09:33:51PM *  1 point [-]

Better search is paramount in my opinion. Part of how academic institutions maintain a shared discussion is through a norm of checking for previous work in a space before embarking on new adventures. Combined with strong indexing this norm means that things which could be like so many forgotten Facebook discussions get many chances to be seen and read by members of the academic community.

http://www.overcomingbias.com/2007/07/blogging-doubts.html

Comment author: Habryka 16 September 2017 11:10:16PM 1 point [-]

Yeah, we do now have much better word-based search, but also still feel that we want a way to archive content on the site into more hierarchical or tag-based structures. I am very open to suggestions of existing websites that do this well, or maybe even physical library systems that work here.

I've been reading some information architecture textbooks (http://shop.oreilly.com/product/0636920034674.do) on this, but still haven't found a great solution or design pattern that doesn't feel incredibly cumbersome and adds a whole other dimension to the page that users need to navigate.

Comment author: SaidAchmiz 17 September 2017 12:19:42AM 3 points [-]

… [we] still feel that we want a way to archive content on the site into more hierarchical or tag-based structures. I am very open to suggestions of existing websites that do this well…

This is a slightly odd comment, if only because "hierarchical or tag-based structures" describes almost all extant websites that aggregate / archive / collect content in any way! You would, I think, be somewhat hard-pressed to find a site that does not use either a hierarchical or a tag-based structure (or, indeed, both!).

But here are some concrete examples of sites that both do this well, and where it plays a critical role:

  • Wikipedia. MediaWiki Categories incorporate both tag-based and hierarchical elements (subcategories).
  • Other Wikis. TVTropes, which uses a modified version of the PmWiki engine, is organized primarily by placing all pages into one or more indexes, along many (often orthogonal) categories. The standard version of PmWiki offers several forms of hierarchical (groups, SiteMapper) and tag-based (Categories, pagelists in general) structures and navigation schemes.
  • Blogs, such as Wordpress. Tags are a useful way to find all posts on a subject.
  • Tumblr. I have much beef with Tumblr, but tags are a sensible feature.
  • Pinboard. Tags, including the ability to list intersections of tag-based bookmark sets, is key to Pinboard's functionality.
  • Forums, such as ENWorld. The organization is hierarchical (forum groups contain forums contain subforums contain threads contain posts) and tag-based (threads are prefixed with a topic tag). You can search by hierarchical location or by tag(s) or by text or by any combination of those.
Comment author: Habryka 17 September 2017 02:51:10AM 1 point [-]

Thanks for the recommendations!

"This is a slightly odd comment, if only because "hierarchical or tag-based structures" describes almost all extant websites that aggregate / archive / collect content in any way!"

Well, the emphasis here was on the "more". I.e. there are more feed-based architectures, and there are more taxonomy/tagging-based architectures. There is a spectrum, and Reddit very much leans towards the feed direction, which is what LessWrong has historically been. And wikis very much lean towards the taxonomy end. I feel we want to be somewhere in between, but I don't know where yet.

Comment author: SaidAchmiz 17 September 2017 04:08:01AM *  3 points [-]

Certainly there is variation, but I actually don't think that viewing that variation as a unidimensional spectrum is correct. Consider:

I have a blog. It functions just like a regular (wordpress) blog—it's sequential, it even has the usual RSS feed, etc. But it runs on pmwiki. So every page is a wikipage (and thus pages are organized into groups; they have tags and are viewable by group, by tag, by custom pagelist, etc.)

So what is that? Feed-based, or tag-based, or hierarchical, or... what? I think these things are much more orthogonal than you give them credit for. Tag-based structure can overlay hierarchical structure without affecting it; custom pagelist/index structure, ditto; and you can serve anything you like as a feed by simply applying an ordering (by timestamp is the obvious and common one, but there are many other possibilities), and you can have multiple feeds, custom feeds, dynamic feeds, etc.; you can subset (filter) in various ways…
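The orthogonality claim can be made concrete with a toy sketch (Python; all names and data here are hypothetical, purely for illustration): a single flat page store supports a hierarchical view, a tag view, and a feed view at the same time, without any of the structures interfering with the others.

```python
from dataclasses import dataclass, field

@dataclass
class Page:
    title: str
    group: str                               # hierarchical location (wiki group, subforum)
    tags: set = field(default_factory=set)   # flat tag overlay
    timestamp: int = 0                       # ordering key for feed-style views

pages = [
    Page("Post A", group="Blog", tags={"rationality"}, timestamp=3),
    Page("Post B", group="Blog", tags={"meta"}, timestamp=1),
    Page("Wiki page", group="Reference", tags={"rationality"}, timestamp=2),
]

# Three independent views over the same store:
by_group = [p.title for p in pages if p.group == "Blog"]             # hierarchy
by_tag = [p.title for p in pages if "rationality" in p.tags]         # tags
feed = [p.title for p in sorted(pages, key=lambda p: -p.timestamp)]  # feed
```

Swapping the ordering key, the tag filter, or the group predicate changes one view without touching the others, which is the sense in which these axes are orthogonal rather than points on one spectrum.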

(Graph-theoretic interpretations of this are probably obvious, but if anyone wants me to comment on that aspect of it, I will)

P.S.: I think reddit is a terrible model, quite honestly. The evolution of reddit, into what it is today, makes it fairly obvious (to me, anyway) that it's not to be emulated.

Edit: To be clear, the scenario above isn't hypothetical—that is how my actual blog works.

Edit2: Consider also https://readthesequences.com. (It, too, runs on pmwiki.) There's a linear structure (it's a book; the linear navigation UI takes you through the content in order), but it would obviously be trivial to apply tags to pages, and the book/sequence structure is hierarchical already.

Comment author: 9eB1 17 September 2017 03:13:06AM 0 points [-]

That is very interesting. An exception might be "Google search pages." Not only is there no hierarchical structure, there is also no explicit tag structure and the main user engagement model is search-only. Internet Archive is similar but with their own stored content.

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

Comment author: SaidAchmiz 17 September 2017 04:11:51AM 1 point [-]

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

I have encountered a truly shocking degree of variation in how people use TVTropes, to the extent that I've witnessed several people talking to each other about this who were each in utter disbelief (to the point of anger) that the other person's usage pattern was a real thing.

Generalizations about TVTropes usage patterns are extremely fraught.

Comment author: 9eB1 17 September 2017 02:47:31PM 0 points [-]

Sure.

Since then I've thought of a couple more sites that are neither hierarchical nor tag-based. Facebook and eHow style sites.

There is another pattern that is neither hierarchical, tag-based nor search-based, which is the "invitation-only" pattern of a site like pastebin. You can only find content by referral.

Comment author: SaidAchmiz 17 September 2017 05:56:55PM 1 point [-]

It is therefore not a coincidence that Facebook is utterly terrible as a content repository. (I am unfamiliar with eHow.)

Comment author: Benito 15 September 2017 09:38:30PM *  6 points [-]

We all thought search was very important, and so tried to make it very efficient and effective. Try out the search bar on the new site.

Added: I realise that comment links are currently broken - oops! We'll fix that before open beta.

Comment author: NancyLebovitz 16 September 2017 12:24:28AM 1 point [-]

I've tried it and it's very fast. I haven't come up with good ideas for testing it yet.

Comment author: Kaj_Sotala 15 September 2017 05:25:13PM 2 points [-]

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong:

I notice that this picture doesn't seem to include link posts. Will those still exist?

Comment author: Raemon 15 September 2017 06:27:16PM 8 points [-]

We have link post functionality but I think we're trying to shift away from it, and instead more directly solve the problem of people-posting-to-other-blogs (both by making it a better experience to post things here in your personal section, and by making it possible to post things to your blog that are auto-imported into LW)

Comment author: ingres 15 September 2017 09:49:20PM 0 points [-]

and to make it possible to post things to your blog that are auto-imported into LW

What kind of technical implementation are you looking at for this?

Comment author: Raemon 16 September 2017 06:58:42AM 0 points [-]

Habryka knows that better than I, I just know that it's in the works.

Comment author: Habryka 16 September 2017 11:41:13PM 3 points [-]

This already exists! You can see an example of that with Elizabeth's blog "Aceso Under Glass" here:

https://www.lesserwrong.com/posts/mjneyoZjyk9oC5ocA/epistemic-spot-check-a-guide-to-better-movement-todd

We set it up so that Elizabeth has a tag on her wordpress blog such that whenever she adds something to that tag, it automatically gets crossposted to LessWrong. We can do this with arbitrary RSS feeds, as long as the RSS feeds export the full html of the post.
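A sketch of how such a tag-filtered RSS import might work (this is an illustration, not the actual LW 2.0 implementation; the sample feed, tag name, and function are all made up): parse the feed, keep only items carrying the agreed crosspost tag, and take the full HTML body from the item.

```python
import xml.etree.ElementTree as ET

# Stand-in for a fetched Wordpress RSS feed; a real integration would
# download this from the blog's feed URL.
SAMPLE_RSS = """<rss><channel>
  <item>
    <title>Epistemic Spot Check</title>
    <category>lesswrong</category>
    <description><![CDATA[<p>Full HTML of the post...</p>]]></description>
  </item>
  <item>
    <title>Personal post</title>
    <category>life</category>
    <description><![CDATA[<p>Not crossposted.</p>]]></description>
  </item>
</channel></rss>"""

def crosspostable(rss_text, tag="lesswrong"):
    """Return (title, html) pairs for feed items carrying the crosspost tag."""
    root = ET.fromstring(rss_text)
    out = []
    for item in root.iter("item"):
        tags = [c.text for c in item.findall("category")]
        if tag in tags:
            out.append((item.findtext("title"), item.findtext("description")))
    return out

posts = crosspostable(SAMPLE_RSS)
```

This only works when the feed exports the full HTML in each item (as noted above); feeds that only export excerpts would need a different approach.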

Comment author: NancyLebovitz 15 September 2017 05:51:07PM 7 points [-]

I'm hoping there will be something like the feature at ssc to choose the time when the site considers comments to be new. It's frustrating to not be able to recover the pink borders on new comments on posts at LW.

Comment author: Benito 15 September 2017 09:25:17PM 10 points [-]

I agree - and we've built this feature! It's currently live on the beta site.

Comment author: arundelo 15 September 2017 06:51:45PM *  5 points [-]

Has the team explicitly decided to call it "LessWrong" (no space) instead of "Less Wrong" (with a space)?

The spaced version has more precedent behind it. It's used by Eliezer and by most of the static content on lesswrong.com, including the <title> element.

Comment author: Habryka 16 September 2017 11:35:15PM 5 points [-]

Being aware that this is probably the most bikesheddy thing in this whole discussion, I've actually thought about this a bit.

From skimming a lot of early Eliezer posts, I've seen all three uses "LessWrong", "Lesswrong" and "Less Wrong" and so there isn't a super clear precedent here, though I do agree that "Less Wrong" was used a bit more often.

I personally really like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words. It makes it sound too much like it wants to refer to the original meaning of the words, instead of being a pointer towards the brand/organization/online-community, and while one might think that is actually useful, it usually just results in a short state of confusion when I read a sentence that has "Less Wrong" in it, because I just didn't parse it as the correct reference.

I am currently going with "LessWrong" and "LESSWRONG", which is what I am planning to use in the site navigation, logos and other areas of the page. If enough people object I would probably change my mind.

Comment author: Elo 17 September 2017 01:58:52AM 1 point [-]

Irrelevant as to which. Just pick one and stick to it.

Comment author: ESRogs 17 September 2017 09:20:23AM 1 point [-]

I personally really like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words.

Did you mean to write, 'dislike' "Less Wrong"'?

Comment author: Habryka 17 September 2017 07:05:40PM 3 points [-]

Wow... yes. This is the second time in this comment thread that I forgot to add a "dis" in front of a word.

Comment author: arundelo 17 September 2017 08:06:54PM *  6 points [-]

I just used Wei Dai's lesswrong_user script to download Eliezer's posts and comments (excluding, last I knew, those that don't show up on his "OVERVIEW" page e.g. for karma reasons). This went back to late December 2009 before the network connection got dropped.

I counted his uses of "LessWrong" versus "Less Wrong". (Of course I didn't count things such as the domain name "lesswrong.com", the English phrase "less wrong", or derived words like "LessWrongers".)

"LessWrong": 1 2 3* 4*

"Less Wrong": 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20* 21 22* 23 24 25 26

Entries with asterisks appear in both lists. Of his four uses of "LessWrong", three are modifying another word (e.g., "LessWrong hivemind").

(For what it's worth, "LessWrongers": 1 2; "Less Wrongians": 1.)

Comment author: Manfred 15 September 2017 07:47:53PM 7 points [-]

I also agree that HPMOR might need to go somewhere other than the front page. From a strategic perspective, I somehow want to get the benefits of HPMOR existing (publicity, new people finding the community) without the drawbacks (it being too convenient to judge our ideas by association).

Comment author: Habryka 16 September 2017 11:13:55PM 4 points [-]

I am somewhat conflicted about this. HPMOR has been really successful at recruiting people to this community (HPMOR is the path by which I ended up here), and according to last year's survey about 25% of people who took the survey found out about LessWrong via HPMOR. I am hesitant to hide our best recruitment tool behind trivial inconveniences.

One solution to this that I've been thinking about is to have a separate section of the page filled with rationalist art and fiction, which would prominently feature HPMOR, Unsong and some of the other best rationalist fiction out there. I can imagine that section of the page itself getting a lot of traffic, since fiction is a lot easier to get into than the usually more dry reading on LW and SSC, and if we set up a good funnel between that part of the site and the main discussion we might get a lot of benefits, without needing to feature HPMOR prominently on the frontpage.

Comment author: SaidAchmiz 16 September 2017 11:32:52PM 3 points [-]

Are you sure that the set of people that are being recruited to the community via HPMOR, and the set of people whom we most want to recruit into the community, have a lot of overlap? Or are these, perhaps, largely disjoint sets? What about the set of people whom we most want to recruit, and the set of people who are repelled by HPMOR? Might there not be quite a bit of overlap there?

Numbers aren't everything!

I agree with the idea of having a separate rationalist fiction page. (Perhaps we might even make it so separate that it's actually a whole other site! A page / site section of "links to rationality-themed fiction" wouldn't be out of place, however.)

Comment author: DragonGod 17 September 2017 01:25:30AM 0 points [-]

Perhaps we might even make it so separate that it's actually a whole other site!

I think this is counterproductive.

Comment author: Habryka 17 September 2017 02:35:44AM 4 points [-]

"Are you sure that the set of people that are being recruited to the community via HPMOR, and the set of people whom we most want to recruit into the community, have a lot of overlap?"

I agree that this is a concern to definitely think about, though in this case I feel like I have pretty solid evidence that there is indeed a large amount of overlap. A lot of the best people that I've seen show up over the last few years seem to have been attracted by HPMOR (I would say more than 25%). It would be great to have better-formatted data on this, and for a long time I wanted someone to just create a spreadsheet for a large set of people in the rationalist community and codify their origin story, but until we have something like that, the data that I have from various surveys + personal experience + being in a key position to observe where people are coming from (working with CFAR and CEA for the last few years) makes me pretty sure that there is significant overlap.

Comment author: DragonGod 17 September 2017 09:35:43AM 1 point [-]

One solution to this that I've been thinking about is to have a separate section of the page filled with rationalist art and fiction, which would prominently feature HPMOR, Unsong and some of the other best rationalist fiction out there. I can imagine that section of the page itself getting a lot of traffic, since fiction is a lot easier to get into than the usually more dry reading on LW and SSC, and if we set up a good funnel between that part of the site and the main discussion we might get a lot of benefits, without needing to feature HPMOR prominently on the frontpage.

I think this is a great solution.

Comment author: Alicorn 15 September 2017 08:23:41PM 11 points [-]

I feel more optimistic about this project after reading this! I like the idea of curation being a separate action and user-created sequence collections that can be voted on. I'm... surprised to learn that we had view tracking that can figure out how much Sequence I have read? I didn't know about that at all. The thing that pushed me from "I hope this works out for them" to "I will bother with this myself" is the Medium-style individual blog page; that strikes a balance between desiderata in a good place for me, and I occasionally idly wish for a place for thoughts of the kind I would tweet and the size I would tumbl but wrongly themed for my tumblr.

I don't like the font. Serifs on a screen are bad. I can probably fix this client side or get used to it but it stood out to me a surprising amount. But I'm excited overall.

Comment author: DragonGod 16 September 2017 03:45:53PM 1 point [-]

Agreed, generally, it seems that sans serif are for screens, and serif is for print.

Comment author: SaidAchmiz 16 September 2017 04:53:35PM 4 points [-]

This is old "received wisdom", and hasn't been the case for quite a while.

Folks, this is what people mean when they talk about LessWrong ignoring the knowledge of experts. Here's a piece of "knowledge" about typography and web design, that is repeated unreflectively, without any consideration of whether there exists some relevant body of domain knowledge (and people with that domain knowledge).

What do the domain experts have to say? Let's look:

But this domain knowledge has not, apparently, reached LessWrong; here, "Serifs on a screen are bad" and "sans serif are for screens, and serif is for print" is still true.

And now we have two people agreeing with each other about it. So, what? Does that make it more true? What if 20 people upvoted those comments, and five more other LessWrongers posted in agreement? Would that make it more true? What amount of karma and local agreement does it take to get to the truth?

Comment author: DragonGod 16 September 2017 05:28:45PM *  0 points [-]

I have no knowledge of typography, but was taught in university that sans serif fonts should be used for screens, and serif for print; it is very possible that my lecturers were wrong.

Would that make it more true?

No.

What amount of karma and local agreement does it take to get to the truth?

None. The truth is orthogonal to the level of local agreement. That said, local agreement is Bayesian evidence for the veracity of a proposition.

Comment author: SaidAchmiz 16 September 2017 06:33:40PM 0 points [-]

… local agreement is Bayesian evidence for the veracity of a proposition.

Why? Are people around here more likely to agree with true propositions than false ones? This might be true in general, but is it true in domains where there exists non-trivial expertise? That's not obvious to me at all. What makes you think so?

Comment author: DragonGod 16 September 2017 08:26:12PM 1 point [-]

Are people around here more likely to agree with true propositions than false ones? This might be true in general,

I was generalising from the above. I expect the epistemic hygiene on LW to be significantly higher than the norm.

For any belief b, let Pr(b) be the probability that b is true. For all b such that b is a consensus on Lesswrong (i.e. more than some k% of Lesswrongers believe b), I hold that Pr(b) > 0.50.

Comment author: SaidAchmiz 16 September 2017 11:28:43PM 0 points [-]

But this is an entirely unwarranted generalization!

Broad concepts like "the epistemic hygiene on LW [is] significantly higher than the norm" simply don't suffice to conclude that LessWrongers are likely to have a finger on the pulse of arbitrary domains of knowledge/expertise, nor that LessWrongers have any kind of healthy respect for expertise—especially since, in the latter case, we know that they in fact do not.

Comment author: DragonGod 17 September 2017 01:03:26AM 2 points [-]

simply don't suffice to conclude that LessWrongers are likely to have a finger on the pulse of arbitrary domains of knowledge/expertise

Do you suggest that the consensus on Lesswrong about arbitrary domains is likely to be true with P <= 0.5?

As long as Pr(b | Lesswrong consensus) is > 0.5, then Lesswrong consensus remains Bayesian evidence for truth.

Comment author: SaidAchmiz 17 September 2017 01:14:31AM 1 point [-]

Do you suggest that the consensus on Lesswrong about arbitrary domains is likely to be true with P <= 0.5?

For some domains, sure. For others, not.

We have no real reason to expect any particular likelihood ratio here, so should probably default to P = 0.5.

Comment author: DragonGod 17 September 2017 01:37:12AM 1 point [-]

I expect that for most domains (possibly all), Lesswrong consensus is more likely to be right than wrong. I haven't yet seen reason to believe otherwise (it seems you have?).

Comment author: SaidAchmiz 16 September 2017 06:48:57PM 0 points [-]

… it is very possible, that my lecturers were wrong.

They were lecturers in what subject? Design / typography / etc.? Or, some unrelated subject?

Comment author: DragonGod 16 September 2017 08:23:33PM 0 points [-]

Unrelated subjects (insofar as webdesign is classified as unrelated).

Comment author: SaidAchmiz 16 September 2017 11:26:27PM 0 points [-]

Well, in that case, what I conjecture is simply that either this (your university classes) took place a while ago, or your lecturers formed their opinions a while ago and didn't keep up with developments, or both.

"Use sans-serif fonts for screen" made sense. Once. When most people had 72ppi displays (if not lower), and no anti-aliasing, or subpixel rendering.

None of that has been true for many, many years.

Comment author: DragonGod 17 September 2017 01:00:46AM 0 points [-]

I am currently in my fourth year.

or your lecturers formed their opinions a while ago and didn't keep up with developments

I have expressed this sentiment myself, so it is plausible.

Comment author: quanticle 17 September 2017 03:56:50PM 1 point [-]

Mathematically, if the truth is orthogonal to the level of local agreement, local agreement cannot constitute Bayesian evidence for the veracity of the proposition. If we're taking local agreement as Bayesian evidence for the veracity of the proposition, we're assuming the veracity of the proposition and local agreement are not linearly independent, which would violate orthogonality.

Comment author: DragonGod 17 September 2017 04:36:00PM 0 points [-]

Either I don't know what Bayesian evidence is, or you don't.

My understanding is:

An outcome is Bayesian evidence for a proposition if the outcome is more likely to occur when the proposition is true than when it is false.

Based on that understanding of Bayesian evidence, I argue that Lesswrong consensus on a proposition is Bayesian evidence for that proposition. Lesswrongers have better than average epistemic hygiene, and pursue true beliefs. You expect the average Lesswronger to have a higher percentage of true beliefs than a lay person. Furthermore, if a belief is consensus among the Lesswrong community, then it is more likely to be true. A single Lesswronger may have some false beliefs, but the set of false beliefs that would be shared by the overwhelming majority of Lesswrongers would be very small.
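This definition can be checked directly against Bayes' theorem: an observation shifts the probability of b upward exactly when it is more likely under b than under not-b. A toy calculation, with made-up numbers purely for illustration:

```python
def posterior(prior, p_obs_given_true, p_obs_given_false):
    """P(b | outcome) via Bayes' theorem, from the two likelihoods."""
    num = p_obs_given_true * prior
    return num / (num + p_obs_given_false * (1 - prior))

# Hypothetical numbers: suppose LW consensus forms on 60% of the true
# propositions it considers, but also on 30% of the false ones.
p = posterior(prior=0.5, p_obs_given_true=0.6, p_obs_given_false=0.3)
# 0.30 / 0.45 = 2/3: observing consensus raised P(b) from 0.5

# The independence case in the same terms: equal likelihoods mean the
# observation carries no evidence, and the prior comes back unchanged.
p_orth = posterior(prior=0.5, p_obs_given_true=0.4, p_obs_given_false=0.4)
```

On these (assumed) numbers the likelihood ratio is 2, so consensus is evidence; when the two likelihoods are equal, the update vanishes, which is exactly the "orthogonal, hence no evidence" situation.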

Comment author: quanticle 17 September 2017 04:54:44PM 2 points [-]

An outcome is Bayesian evidence for a proposition, if the outcome is more likely to occur if the proposition is true than if it is false.

That assumes that there is a statistical correlation between the two, no? If the two are orthogonal to each other, they're statistically uncorrelated, by definition.

Comment author: DragonGod 17 September 2017 06:17:06PM 0 points [-]
  1. http://lesswrong.com/lw/nz/arguing_by_definition/
  2. The local agreement (on Lesswrong) on a proposition is not independent of the veracity of the proposition. To claim otherwise is to claim that Lesswrongers form their beliefs through a process that is no better than random guessing. That's a very strong claim to make, and extraordinary claims require extraordinary evidence.
Comment author: entirelyuseless 17 September 2017 07:07:16PM 0 points [-]

"The local agreement (on Lesswrong) on a proposition is not independent of the veracity of the proposition."

Sure, and that is equally true of indefinitely many other populations in the world and the whole population as well. It would take an argument to establish that LW local agreement is better than any particular one of those populations.

Comment author: DragonGod 17 September 2017 07:58:15PM 0 points [-]

Sure,

Then we are in agreement.

It would take an argument to establish that LW local agreement is better than any particular one of those populations.

As for Lesswrong vs the general population, I point to the difference in epistemic hygiene between the two groups.

Comment author: SaidAchmiz 16 September 2017 04:59:23PM *  7 points [-]

I don't like the font. … I can probably fix this client side or get used to it but it stood out to me a surprising amount.

My other comment aside, this is (apart from the general claim) a reasonable user concern. I would recommend (to the LW 2.0 folks) the following simple solution:

  • Have several pre-designed themes (one with a serif font, one with a well-chosen sans font, and then "dark theme" versions of both, at least)
  • Let users select between those themes via their Profile screen

This should satisfy most people, and would still preserve the site's aesthetics.

Comment author: Habryka 16 September 2017 11:20:43PM 1 point [-]

I am slightly hesitant to force authors to think about how their posts will look in different fonts and different styles. While I don't expect this to be a problem most of the time, there are posts that I write where the font choice would matter for how the content comes across.

Medium allows the writer to choose between a sans-serif and a serif font, which I like a bit more, but I expect that would not really satisfy Alicorn's preferences.

Maintaining multiple themes also adds a lot of design constraints and complexity to updating various parts of the page. The width of a button might change with different fonts, and depending on the implementation, you might end up needing to add special cases for each theme choice, which I would really prefer to avoid.

Overall, my hypothesis is that Alicorn might not dislike serif fonts in general, but might be unhappy about our specific choice of serif font, which is indeed very serify. I would be curious whether she also has a similar reaction to the default Medium font, for example as displayed in this post: https://medium.com/@pshrmn/a-simple-react-router-v4-tutorial-7f23ff27adf

Comment author: SaidAchmiz 16 September 2017 11:43:06PM *  6 points [-]

As with many such things, there are standard, canonical solutions to your concerns.

In this case, the answer is "select pairs/sets of fonts that are specifically designed to have the same width in both the serif and the sans variants". There are many such "font superfamilies". If you'd like, I can draw up a list of recommendations. (It would be helpful if you could let me know your constraints w.r.t. licensing and budget.)

Theme variants do not have to be comprehensive redesigns. It is eminently possible to design a set of themes that will not lead to the content being perceived very differently depending on the active theme.

P.S.:

Overall, my hypothesis is that Alicorn might not dislike serif-fonts in general, but might be unhappy about our specific choice of serif fonts, which is indeed very serify.

I suspect the distinction you're looking for, here, is between transitional serifs (of which Charter, the Medium font, is one, although it's also got slab-serif elements) and the quite different old-style serifs (of which ET Book, the current LW 2.0 font, is one). (There are also other differences, orthogonal to that distinction—such as ET Book's considerably smaller x-height—which also affect readability.)

Alicorn, if you're reading this, I wonder what your reaction is to the font used on this website:

https://www.readthesequences.com

P.P.S.: It is also possible that the off-black text color is negatively impacting readability! (Especially since it can interact in a somewhat unfortunate manner with certain text rendering engines.)

Alicorn, what OS and browser are you viewing the LW 2.0 site on?

Comment author: Alicorn 17 September 2017 01:19:57AM 3 points [-]

I do not like the readthesequences font. It feels like I'm back in grad school and also reading is suddenly harder.

I'm on a Mac 'fox.

Comment author: SaidAchmiz 17 September 2017 02:37:50AM *  3 points [-]

Ok, thanks!

FYI, your assessment is in the extreme minority; most people who have seen that site have responded very positively to the font choice (and the typography in general). This suggests that your preferences are unusual, in this sphere.

I say this, not to suggest that your preference / reaction is somehow "wrong" (that would be silly!), but a) to point out the danger in generalizing from one's own example (typical mind blah blah), and b) to underscore the importance of user choice and customization options!

rest of this response is not specifically for Alicorn but is re: this whole comment thread

This is still a gold standard of UX design: sane defaults plus good[1] customizability.

[1] "Good" here means:

  • comprehensive
  • intuitive
  • non-overwhelming (i.e. layered)

Note, these are ideals, not basic requirements; every step we take toward the ideal is a good step. So by no means should you (the designer/developer) ever feel like "comprehensive customizability is an unreachable goal; there's no reason to bother, since Doing It Right™ is too much effort"! So in this case, just offering a couple of themes, which are basic variations on each other (different-but-matching font choices, a different color scheme), is already a great thing and will greatly improve the user experience.

Comment author: Alicorn 17 September 2017 01:18:39AM 2 points [-]

The Medium font is much less bad but still not great.

Comment author: MaryCh 15 September 2017 09:16:06PM 0 points [-]

And the "Recent on rationality blogs" button will work again?

Comment author: moridinamael 15 September 2017 09:48:48PM 12 points [-]

I've heard that in some cases, humans regard money to be an incentive.

Integrating Patreon, Paypal or some existing micropayments system could allow users to not only upvote but financially reward high-value community members.

If Less Wrong had a little "support this user on Patreon" icon next to every poster's username, I would certainly have thrown some dollars at more than a handful of Less Wrong posters. Put more explicitly - maybe Yvain and Eliezer would be encouraged to post certain content on LW2.0 rather than SSC/Facebook if they reliably got a little cash from the community at large every time they did it.

Speaking of the uses of money, I'm fond of communities that are free to read but require a small registration fee in order to post. Such fees are a practically insurmountable barrier to trolls. Eugine Nier could not have done what he did if registering an account cost $10, or even $1.

Comment author: casebash 15 September 2017 10:40:52PM 0 points [-]

Are there many communities that do that apart from meta-filter?

Comment author: moridinamael 15 September 2017 10:50:38PM 1 point [-]

You mean communities that require a fee? I'm specifically thinking of SomethingAwful. Which has a bad reputation, but is actually an excellent utility if you visit only the subforums and avoid the general discussion and politics sections of the site.

Comment author: John_Maxwell_IV 16 September 2017 07:37:18AM *  7 points [-]

Does anyone know the literature on intrinsic motivation well enough to comment on whether paying users to post is liable to undermine other sources of motivation?

The registration fee idea is interesting, but exacerbates the chicken and egg problem inherent in online communities. I also have a hunch that registration fees tend to make people excessively concerned with preserving their account's reputation (so they can avoid getting banned and losing something they paid money for), in a way that's cumulatively harmful to discourse, but I can't prove this.

Comment author: Elo 16 September 2017 07:50:13AM 1 point [-]

Yes it will probably cause people to devalue the site. If you pay a dollar it will tend to "feel like" the entire endeavour is worth a dollar.

Comment author: moridinamael 16 September 2017 02:27:18PM 1 point [-]

So charge $50 =)

Comment author: NancyLebovitz 16 September 2017 02:29:35PM 1 point [-]

Metafilter has continued to be a pretty good site even though it requires a small fee to join. There's also a requirement to post a few comments (you can comment for free but need to be a member to do top level posts) and wait a week after sending in money. And it's actively moderated.

http://www.metafilter.com/about.mefi

Comment author: John_Maxwell_IV 17 September 2017 02:06:48AM 1 point [-]

I was talking about paying people to contribute. Not having people pay for membership.

Comment author: lifelonglearner 16 September 2017 03:14:17PM *  7 points [-]

Yep!

See here and here

As one might expect, money is often a deterrent for actual habituation.

EDIT: Additional clarification:

The first link shows that monetary payment is only effective as a short-term motivator.

The second link is a massive study involving almost 2,000 people which tried to pay people to go to the gym. They found that after the payment period ended, gym attendance fell back to roughly pre-payment levels.

Comment author: DragonGod 16 September 2017 05:22:44PM 0 points [-]

What about a currency, say tokens, that you get with upvotes and posts? 10 upvotes gives 1 token. You might add token payment for posts and/or comments to incentivise activity (I'm not sure this would be an all-round good idea though: adding payment incentives may lead to a greater quantity of activity, but lower quality). So token payments on activity that garners a certain number of upvotes?

Tokens can be given to other users, and give them karma as well (if 10 karma = 1 token, then transferring a token may cause the recipient to gain an increase of 5 - 9 karma). Tokens would be a method of costly signalling that you enjoyed particular content--sort of a non-money analogue of reddit gold.
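A sketch of the proposed token mechanics (Python; the class, the fixed karma grant, and every detail beyond "10 upvotes mint 1 token, a transferred token grants 5-9 karma" are illustrative assumptions):

```python
class Account:
    UPVOTES_PER_TOKEN = 10  # as proposed: 10 upvotes gives 1 token

    def __init__(self):
        self.upvotes = 0
        self.karma = 0
        self._tokens_spent = 0

    def receive_upvotes(self, n):
        self.upvotes += n
        self.karma += n

    @property
    def tokens(self):
        # Tokens minted from lifetime upvotes, minus tokens already given away.
        return self.upvotes // self.UPVOTES_PER_TOKEN - self._tokens_spent

    def give_token(self, recipient, karma_grant=7):
        """Transfer one token; recipient gains 5-9 karma (fixed at 7 here)."""
        assert 5 <= karma_grant <= 9
        if self.tokens < 1:
            raise ValueError("no tokens available")
        self._tokens_spent += 1
        recipient.karma += karma_grant

alice, bob = Account(), Account()
alice.receive_upvotes(25)  # alice: 25 karma, 2 tokens
alice.give_token(bob)      # bob gains 7 karma; alice has 1 token left
```

Deriving tokens from lifetime upvotes (rather than minting on each vote event) keeps the supply consistent however the upvotes arrive, which is one way the "costly signal" property could be preserved.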

Comment author: richardbatty 16 September 2017 12:07:41PM *  15 points [-]

Have you done user interviews and testing with people who it would be valuable to have contribute, but who are not currently in the rationalist community? I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.

You should just test this empirically, but here are some vague ideas for how you could increase the credibility of the site to these people:

  • My main concern is that lesswrong 2.0 will come across as (or will actually be) a bizarre subculture, rather than a quality intellectual community. The rationality community is offputting to some people who on the face of it should be interested (such as myself). A few ways you could improve the situation:
    • Reduce the use of phrases and ideas that are part of rationalist culture but are inessential for the project, such as references to HPMOR. I don't think calling the moderation group "sunshine regiment" is a good idea for this reason.
    • Encourage the use of standard jargon from academia where it exists, rather than LW jargon. Only coin new jargon words when necessary.
    • Encourage writers to do literature reviews to connect to existing work in relevant fields.
  • It could also help to:
    • Encourage quality empiricism. It seems like rationalists have a tendency to reason things out without much evidence. While we don't want to force a particular methodology, it would be good to nudge people in an empirical direction.
    • Encourage content that's directly relevant to people doing important work, rather than mainly being abstract stuff.
Comment author: NancyLebovitz 16 September 2017 02:37:09PM 7 points [-]

It seems to me that you want to squeeze a lot of the fun out of the site.

I'm not sure how far it would be consistent with having a single focus for rationality online, but perhaps there should be a section or a nearby site for more dignified discussion.

I think the people you want to attract are likely to be busy, and not necessarily interested in interviews and testing for a rather hypothetical project, but I could be wrong.

Comment author: Habryka 16 September 2017 11:50:40PM *  12 points [-]

I feel that this comment deserves a whole post in response, but I probably won't get around to that for a while, so here is a short summary:

  • I generally think people have confused models about what forms of weirdness are actually costly. The much more common error mode for online communities is being boring and uninteresting. The vast majority of the most popular online forums are really weird and have a really strong distinct culture. The same is true for religions. There are forms of weirdness that prevent you from growing, but I feel that implementing the suggestions in this comment in a straightforward way would mostly result in the forum becoming boring and actually stunting its meaningful growth.

  • LessWrong is more than just weird in a general sense. A lot of the things that make LessWrong weird are actually the result of people having thought about how to have discourse, and then actually implementing those norms. That doesn't mean that they got it right, but if you want to build a successful intellectual community you have to experiment with norms around discourse, and avoiding weirdness puts a halt to that.

  • I actually think that one of the biggest problems with Effective Altruism is the degree to which large parts of it are weirdness averse, which I see as one of the major reasons why EA kind of hasn't really produced any particularly interesting insights or updates in the past few years. CEA at least seems to agree with me (probably partially because I used to work there and shaped the culture a bit, so this isn't independent), and tried to counteract this by making the explicit theme of this year's EA Global in SF about "accepting the weird parts of EA". As such, I am not very interested in appeasing current EAs' need for normalcy and properness, and instead hope that this will move EA towards becoming more accepting of weird things.

I would love to give more detailed reasoning for all of the above, but time is short, so I will leave it at this. I hope this gave people at least a vague sense of my position on this.

Comment author: richardbatty 17 September 2017 06:55:47PM 6 points [-]

You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside of the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building lesswrong 2.0 for the current rationality community rather than thinking about what kinds of people you want to be contributing to it and learning from it and building it for them. So it seems important to do some user interviews with people outside of the community who you'd like to join it.

On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning is the intellectual community of people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, in-jokes, etc. I'm concerned that less wrong 2.0 will select for people who want to join the cultural community, rather than people who want to join the intellectual community. But the intellectual community seems much more important. This then gives us two types of weirdness: weirdness that comes out of the intellectual content of the community is important to keep - ideas such as existential risk fit in here. Weirdness that comes more out of the cultural community seems unnecessary - such as references to HPMOR.

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture. I'd like to see lesswrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.

Comment author: NancyLebovitz 17 September 2017 07:27:56PM 3 points [-]

My impression is that you don't understand how communities form. I could be mistaken, but I think communities form because people discover they share a desire rather than because there's a venue that suits them-- the venue is necessary, but stays empty unless the desire comes into play.

" I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned."

Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?

Comment author: richardbatty 17 September 2017 08:19:47PM 1 point [-]

"I think communities form because people discover they share a desire"

I agree with this, but would add that it's possible for people to share a desire with a community but not want to join it because there are aspects of the community that they don't like.

"Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?"

That's something I'd like to know. But I think it's important for the rationality community to attempt to serve these kinds of people both because these people are important for the goals of the rationality community and because they will probably have useful ideas to contribute. If the rationality community is largely made up of programmers, mathematicians, and philosophers, it's going to be difficult for it to solve some of the world's most important problems.

Perhaps we have different goals in mind for lesswrong 2.0. I'm thinking of it as a place to further thinking on rationality and existential risk, where the contributors are anyone who both cares about those goals and is able to make a good contribution. But you might have a more specific goal: a place to further thinking on rationality and existential risk, but targeted specifically at the current rationality community so as to make better use of the capable people within it. If you had the second goal in mind then you'd care less about appealing to audiences outside of the community.

Comment author: NancyLebovitz 17 September 2017 08:50:14PM 3 points [-]

I'm fond of LW (or at least its descendants). I'm somewhat weird myself, and more tolerant of weirdness than many.

It has taken me years and some effort to get a no doubt incomplete understanding of people who are repulsed by weirdness.

From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen.

The community you imagine might be a very good thing. It may have to be created by the people who will be in it. Maybe you could start the survey process?

I'm hoping that the LW 2.0 software will be open source. The world needs more good discussion venues.

Comment author: John_Maxwell_IV 18 September 2017 05:05:16AM *  4 points [-]

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds.

I'm not persuaded that this is substantially more true of scientists than people in the LW community.

Notably, the range of different kinds of expertise that one finds on LW is much broader than that of a typical academic department (see "Profession" section here).

They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture.

I don't think people usually become scientists unless they like the culture of academic science.

I'd like to see lesswrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.

I think "intellectual communities" are just a high-status kind of subculture. "Be more high status" is usually not useful advice.

I think it might make sense to see academic science as a culture that's optimized for receiving grant money. Insofar as it is bland and respectable, that could be why.

If you feel that receiving grant money and accumulating prestige is the most important thing, then you probably also don't endorse spending a lot of time on internet fora. Internet fora have basically never been a good way to do either of those things.

Comment author: lifelonglearner 16 September 2017 04:17:57PM 2 points [-]

Two things I'd like to see:

1) Some sort of "example-pedia" where, in addition to some sort of glossary, we're able to crowd-source examples of the concepts to build upon understanding. I think examples continue to be in short supply, and that's a large understanding gap, especially when we deal with concepts unfamiliar to most people.

2) Something similar to Arbital's hover-definitions, or a real-time searchable glossary that's easily available.

I think the above two things could be very useful features, given the large swath of topics we like to discuss, from cognitive psych to decision theory, to help people more invested in one area more easily swap to reading stuff in another area.

Comment author: DragonGod 16 September 2017 05:15:28PM *  0 points [-]

Doesn't the wiki already achieve (1) to a satisfactory level?

I support (2).

Comment author: Habryka 16 September 2017 11:59:18PM 0 points [-]

1) I think this would be great, but is also really hard. I feel like you would need to build a whole wiki-structure with conflict resolution and moderation norms and collaborative editing features to achieve that kind of thing. But who knows, there might be an elegant and simple implementation that would work that I haven't thought of.

2) Arbital-style greenlinks are in the works and should definitely exist. For now they would only do the summary and glossary thing when you link to LW posts, but we can probably come up with a way of crowdsourcing more definitions of stuff without needing to create whole posts for it. Open to design suggestions here.

Comment author: lifelonglearner 17 September 2017 12:59:10AM 0 points [-]

The easiest method for 1, I think, would just be to have a section under every item in the glossary called "Examples" and trust the community to put in good ones and delete bad ones.

For 2, I was thinking about something like a page running Algolia instant search, that would quickly find the term you want, bolded, with its accompanying definition after it, dictionary-esque.
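A dictionary-esque instant search of the kind described here could be prototyped without any external service. The glossary entries and function below are purely illustrative; a real version would query a crowd-sourced glossary (e.g. via Algolia) rather than a hard-coded dict:

```python
# Hypothetical glossary entries; a real glossary would be crowd-sourced.
GLOSSARY = {
    "steelman": "The strongest version of an opposing argument.",
    "motte and bailey": "Retreating from a bold claim to a trivially defensible one.",
    "greenlink": "A hover-link that previews a term's definition in place.",
}

def instant_search(query):
    """Return (term, definition) pairs whose term starts with the query."""
    q = query.strip().lower()
    return [(term, defn) for term, defn in GLOSSARY.items()
            if term.startswith(q)]
```

Prefix matching keeps results updating with each keystroke, which is the "instant" feel the comment is asking for.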

Comment author: DragonGod 17 September 2017 01:21:03AM 3 points [-]

1) I think this would be great, but is also really hard. I feel like you would need to build a whole wiki-structure with conflict resolution and moderation norms and collaborative editing features to achieve that kind of thing. But who knows, there might be an elegant and simple implementation that would work that I haven't thought of.

I think the wiki is an integral feature of LW, such that if the new site lacks a Wiki, I'll resist moving to the new site.

Comment author: Habryka 17 September 2017 02:48:45AM 1 point [-]

We are planning to leave the wiki up, and probably restyle it at some point, so it will not be gone. User accounts will no longer be shared though, for the foreseeable future, which I don't think will be too much of an issue.

But I don't yet have a model of how to make the wiki in general work well. The current wiki is definitely useful, but I feel that its main use has been the creation of sequences and collections of posts, which is now integrated more deeply into the site via the sequences functionality.

Comment author: DragonGod 17 September 2017 09:26:50AM 3 points [-]

Several people have suggested pmwiki; perhaps you should give it a try?

Comment author: Wei_Dai 17 September 2017 04:56:53PM 3 points [-]

The wiki is also useful for defining basic concepts used by this community, and linking to them in posts and comments when you think some of your readers might not be familiar with them. It might also be helpful for outreach, for example our wiki page for decision theory shows up in the first page of Google results for "decision theory".

Comment author: Habryka 17 September 2017 07:20:45PM 2 points [-]

Oh, that's cool! I didn't know that.

This does update me towards the wiki being important. I just pinged Malo on whether I can get access to the LessWrong wiki analytics, so that I can look a bit more into this.

Comment author: DragonGod 16 September 2017 05:08:02PM 1 point [-]

I access LW from mobile, and I've often lost an entire comment because the "close" button got clicked (it is often not visible when typing in portrait mode; my phone is such that I can't see the comment while typing in landscape, and I'm used to the portrait keyboard). This is very demotivating, and quite frustrating. I hope that this is not a problem in Lesswrong 2.0, and hope that functionality for saving drafts of comments is added.

Comment author: Habryka 16 September 2017 11:42:13PM 2 points [-]

Yeah, the design of the commenting UI is sufficiently different, and more optimized for mobile that I expect this problem to be gone. That said, we are still having some problems with our editor on mobile, and it will take a bit to sort that out.

Comment author: DragonGod 17 September 2017 01:10:21AM *  0 points [-]

Thanks. Even if it's no longer a problem, I think saving drafts of comments (if it's not too big a headache to add) would be a nice improvement.

Comment author: DragonGod 16 September 2017 05:13:39PM *  6 points [-]

On StackExchange, you lose reputation whenever you downvote a question/answer; this makes downvoting a costly signal of displeasure. I like the notion, and hope it is included in the new site. If you have to spend your hard-earned karma to cause someone to lose karma, then it may discourage karma assassination, and ensure that downvotes are only used on content people have strong negative feelings towards.

Pros

  1. Users only downvote content they feel strong displeasure towards.
  2. Karma assassination via sockpuppets becomes impossible, and targeted karma attacks through your main account because you dislike a user becomes very costly.
  3. Moderation of downvoting behaviour would be vastly reduced as users downvote less, and only on content they have strong feelings towards.

Cons

  1. There are many fewer downvotes.
  2. I don't think downvotes should be costly. On StackExchange mediocre content can get a high score if it relates to a popular topic.
    Given that this website has the goal of filtering content in a way that allows people who only want to read a subset to read the high-quality posts, downvotes of mediocre content are useful information.

I think the first con is a feature and not a bug; it is not clear to me that more downvotes are intrinsically beneficial. The second point is valid criticism, and I think we need to weigh the benefit of the downvotes against their cost.

I think you lose one reputation per downvote, and cause the person downvoted to lose 2 - 5 reputation.

I think downvoting costing 0.33 - 0.5 of the karma you deduct from the target of your downvote is a good idea; it will encourage better downvote practices and would overall be an improvement to the karma feature.

Comment author: Habryka 16 September 2017 11:30:34PM *  4 points [-]

Hmm... I feel that this disincentivizes downvoting too strongly, and just makes downvoting feel kind of shitty on an emotional level.

An alternative thing that I've been thinking about is to make it so that when you downvote something, you have to give a short explanation between 40 and 400 characters about why you think the comment was bad. Which both adds a cost to downvoting, and actually translates that cost into meaningful information for the commenter. Another alternative implementation of this could work with a set of common tags that you can choose from when downvoting a comment, maybe of the type "too aggressive", "didn't respond to original claim", "responded to strawman", etc.
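Both variants described here (a 40 - 400 character explanation, or a pick-from-a-list tag) amount to a simple validation rule. A sketch, with the tag set taken only from the examples in the comment above:

```python
DOWNVOTE_TAGS = {
    "too aggressive",
    "didn't respond to original claim",
    "responded to strawman",
}

def downvote_is_valid(explanation=None, tag=None):
    """A downvote must carry either a 40-400 char explanation or a known tag."""
    if tag is not None:
        return tag in DOWNVOTE_TAGS
    if explanation is not None:
        return 40 <= len(explanation) <= 400
    return False
```

The tag path is cheaper for the voter but still produces structured feedback for the commenter, which is the trade-off the two variants differ on.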

Comment author: DragonGod 17 September 2017 01:09:44AM *  1 point [-]

Hmm... I feel that this incentivizes downvoting too strongly

How does this incentivise downvoting? Downvoting is a costly signal of displeasure, and as downvotes cost a certain fraction of the karma you deduct, it disincentivises downvoting.

makes downvoting feel kind of shitty on an emotional level.

This is a feature not a bug; we don't want to encourage downvoting and karma assassination. The idea is that downvoting becomes costly signalling of displeasure. Mere disagreement would not cause downvoting. Downvoting should be costly signalling.

An alternative thing that I've been thinking about is to make it so that when you downvote something, you have to give a short explanation between 40 and 400 characters about why you think the comment was bad. Which both adds a cost to downvoting, and actually translates that cost into meaningful information for the commenter.

I thought of this as well, but decided that the StackExchange system of making downvotes cost karma is better for the purposes I thought of.

Another alternative implementation of this could work with a set of common tags that you can choose from when downvoting a comment, maybe of the type "too aggressive", "didn't respond to original claim", "responded to strawman", etc.

This fails to achieve "adds a cost to downvoting"; if there are custom downvoting tags, then the cost of downvoting is removed. I think making downvotes cost a fraction (<= 0.5) of the karma you deduct serves to discourage downvoting.

Comment author: Habryka 17 September 2017 02:40:51AM 0 points [-]

"How does this incentivise downvoting?"

Sorry, my bad. I wanted to write "disincentivize", but failed. I guess it's a warning against using big words.

Comment author: DragonGod 17 September 2017 09:24:07AM *  1 point [-]

Oh, okay. I still think we want to disincentivise downvoting though.

Pros

  1. Users only downvote content they feel strong displeasure towards.
  2. Karma assassination via sockpuppets becomes impossible, and targeted karma attacks through your main account because you dislike a user becomes very costly.
  3. Moderation of downvoting behaviour would be vastly reduced as users downvote less, and only on content they have strong feelings towards.

Cons

  1. There are many fewer downvotes.
  2. I don't think downvotes should be costly. On StackExchange mediocre content can get a high score if it relates to a popular topic.
    Given that this website has the goal of filtering content in a way that allows people who only want to read a subset to read the high-quality posts, downvotes of mediocre content are useful information.

I think the first con is a feature and not a bug; it is not clear to me that more downvotes are intrinsically beneficial. The second point is valid criticism, and I think we need to weigh the benefit of the downvotes against their cost.

I suggest users lose 40% of the karma they deduct (since you want to give different users different weights). For example, if you downvote someone, they lose 5 karma, but you lose 2 karma.
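The 40% proposal is easy to state precisely. The numbers below (a 5 karma penalty to the target, 40% of it charged to the downvoter) are just the ones from this comment, not a real implementation:

```python
DOWNVOTE_PENALTY = 5        # karma the downvoted user loses
DOWNVOTER_COST_RATIO = 0.4  # fraction of the penalty the downvoter pays

def apply_downvote(voter_karma, target_karma):
    """Return (voter_karma, target_karma) after a single downvote."""
    cost = round(DOWNVOTE_PENALTY * DOWNVOTER_COST_RATIO)  # 2 karma here
    return voter_karma - cost, target_karma - DOWNVOTE_PENALTY
```

Note that under this rule a voter with too little karma eventually cannot afford further downvotes, which is exactly the brake on karma assassination the comment is after.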

Comment author: NancyLebovitz 17 September 2017 07:32:02PM 2 points [-]

How about the boring simplicity of having downvote limits? Maybe something around one downvote/24 hours-- not cumulative.

If you're feeling generous, maybe add a downvote/24 hours per 1000 karma, with a maximum of 5 downvotes/24 hours.
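This rate-limit scheme reduces to a one-line formula. The sketch below is one reading of the proposal (one base downvote per 24 hours, plus one per 1000 karma, capped at 5; the cap could equally be read as applying only to the karma bonus):

```python
def daily_downvote_limit(karma):
    """Downvotes allowed per 24 hours under the proposed scheme."""
    return min(5, 1 + karma // 1000)
```

Unlike the karma-cost proposals above, this caps the damage any one account can do per day without making each individual downvote feel expensive.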

Comment author: DragonGod 17 September 2017 08:02:19PM 1 point [-]

This is a solution as well; it is not clear to me though, that it is better than the solution I proposed.

Comment author: Yosarian2 16 September 2017 09:00:26PM *  4 points [-]

My concern around the writing portion of your idea is this: from my point of view, the biggest problem with lesswrong is that the sheer quantity of new content is extremely low. In order for a LessWrong 2.0 to succeed, you absolutely have to get more people spending the time and effort to create great content. Anything you do to make it harder for people to contribute new content will make that problem worse. Especially anything that creates a barrier for new people who want to post something in discussion. People will not want to write content that nobody might see unless it happens to get promoted.

Once you get a constant stream of content on a daily basis, then maybe you can find a way to curate it to highlight the best content. But you need that stream of content and engagement first and foremost or I worry the whole thing may be stillborn.

Comment author: Habryka 16 September 2017 11:56:18PM 4 points [-]

Agree with this.

I do however think that we actually have a really large stream of high-quality content already in the broader rationality diaspora that we just need to tap into and get onto the new page. As such, the problem is a bit easier than getting a ton of new content creators, and is instead more of a problem of building something that the current content creators want to move towards.

And as soon as we have a high-quality stream of new content I think it will be easier to attract new writers who will be looking to expand their audience.

Comment author: Yosarian2 17 September 2017 01:48:05AM *  3 points [-]

Maybe; there certainly are a lot of good rationalist bloggers who have at least at some point been interested in LessWrong. I don't think bloggers will come back though unless the site first becomes more active than it currently is. (They may give it a chance after the Beta is rolled out, but if activity doesn't increase quickly they'll leave again.) Activity and an active community are necessary to keep a project like this going. Without an active community there's no point in coming back here instead of posting on your own blog.

I guess my concern here though is that right now, LessWrong has a "discussion" side which is a little active and a "main" side which is totally dead. And it sounds like this plan would basically get rid of the discussion side, and make it harder to post on the main side. Won't the most likely outcome just be to lower the amount of content and the activity level even more, maybe to zero?

Fundamentally, I think the premise of your second bottleneck is incorrect. We don't really have a problem with signal-to-noise ratio here, most of the posts that do get posted here are pretty good, and the few that aren't don't get upvoted and most people ignore them without a problem. We have a problem with low total activity, which is almost the exact opposite problem.

Comment author: DragonGod 16 September 2017 09:53:14PM *  6 points [-]

I think adding a collection of the best Overcoming Bias posts, including posts like "you are never entitled to your own opinion" to the front page would be a great idea, and it might be better than putting a link to HPMOR (some users seem to believe that linking HPMOR on the front page may come across as puerile).

Comment author: Habryka 16 September 2017 11:53:37PM 3 points [-]

I agree that I really want a Robin Hanson collection in a similar style to how we already have a Scott Alexander collection. We will have to coordinate with Robin on that. I can imagine him being on board, but I can also imagine him being hesitant to have all his content crossposted to another site. He seemed to prefer having full control over everything on his own page, and apparently didn't end up posting very much on LessWrong, even as LW ended up with a much larger community and much more activity.

Comment author: DragonGod 17 September 2017 01:13:16AM 2 points [-]

Well, maintaining links to them (if he prefers them on his site) might be an acceptable compromise then? I think Robin's posts are a core part of the "rationalist curriculum", and the site would be incomplete if we don't include them.

Comment author: Craig_Heldreth 17 September 2017 02:27:52AM 1 point [-]

What would make you personally use the new LessWrong?

Quality content. Quality content. And quality content.

Is there any specific feature that would make you want to use it?

The features which I would most like to see:

Wiki containing all or at least most of the jargon.

Rationality quotations all in one file alphabetically ordered by author of the quote.

Book reviews and topical reading lists.

Pie in the sky: the Yudkowsky sequences edited, condensed, and put into an Aristotelian/Thomistic/Scholastic order. (Not that Aristotle or Thomas Aquinas ever did this but the tradition of the scholastics was always to get this pie in the sky.) It might be interesting to see what an experienced book editor would advise doing with this material.

Everything I would want to not see has been covered by yourself or others in this thread.

Comment author: DragonGod 17 September 2017 09:32:00AM 2 points [-]

Pie in the sky: the Yudkowsky sequences edited, condensed, and put into an Aristotelian/Thomistic/Scholastic order. (Not that Aristotle or Thomas Aquinas ever did this but the tradition of the scholastics was always to get this pie in the sky.) It might be interesting to see what an experienced book editor would advise doing with this material.

Doesn't Rationality: From AI to Zombies achieve this already?

Comment author: ingres 17 September 2017 01:46:37PM 0 points [-]

Rat:A-Z is like...a slight improvement over EY's first draft of the sequences. I think when Craig says condensed he has much more substantial editing in mind.

Comment author: Benito 17 September 2017 07:06:45PM 5 points [-]

FYI R:AZ is shorter than The Sequences by a factor of 2, which I think is a substantial improvement. Not that it couldn't be shorter still ;-)

Comment author: ingres 17 September 2017 08:41:54PM 1 point [-]

Oh huh, TIL. Thanks!

Comment author: ESRogs 17 September 2017 10:09:32AM 3 points [-]

If you write a post, it first shows up nowhere else but your personal user page, which you can basically think of being a medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages

Some questions about this (okay if you don't have answers now):

  • Can anyone make a personal page?
  • Are there any requirements for the content -- does it need to be "rationality" themed, or can it be whatever the user wants (with the expectation that only LW-appropriate stuff will get promoted to the general frontpage)?
  • Can a user get kicked off for inappropriate content (whatever that means)?
Comment author: Benito 17 September 2017 07:04:22PM *  4 points [-]

Thanks for the questions.

  • From the start, all user pages will be personal pages. If you make an account, you'll have a basic blog.
  • No requirements for the content. This is for people in the community (and others) to write about whatever they're interested in. If you want a place to write those short statistical oddities you've been posting to tumblr; if you want a place to write those not-quite-essays you've been posting to facebook; if you want a place to try out writing full blog posts; if you wish, you can absolutely do that here.
  • I expect we'll have some basic norms of decency. I've not started the discussion within the Sunshine Regiment on what these will be yet, but once we've had a conversation we'll open it up to input from the community, and I'll make sure to publish clearly both the norms and info on what happens when someone breaks a norm.
Comment author: Habryka 17 September 2017 07:15:20PM 2 points [-]

Apparently Ben and I responded to this at the same time. We seem to have mostly said the same things, so we are apparently fairly in sync.

Comment author: Habryka 17 September 2017 07:13:51PM 1 point [-]

"Can anyone make a personal page? Are there any requirements for the content -- does it need to be "rationality" themed, or can it be whatever the user wants (with the expectation that only LW-appropriate stuff will get promoted to the general frontpage)? Can a user get kicked off for inappropriate content (whatever that means)?"

Current answer to all of those is:

I don't have a plan for that yet, let's figure it out as we run into that problem. For now having too much traffic or content to the site seems like a less important error mode, even if that content is bad, as long as it doesn't clog up the attention of everyone else.

I would probably suggest warning and eventually banning people who repeatedly try to bring highly controversial politics onto the site, or who repeatedly act in bad faith or taste, so I don't think we want to leave those personal pages fully unmoderated. But the moderation threshold should be a good bit higher than on the main page. No other constraints on content for now.

Comment author: JenniferRM 17 September 2017 11:22:27PM *  19 points [-]

I'm super impressed by all the work and the good intentions. Thank you for this! Please take my subsequent text in the spirit of trying to help bring about good long term outcomes.

Fundamentally, I believe that a major component of LW's decline isn't in the primary article and isn't being addressed. Basically, a lot of the people drifted away over time who were (1) lazy, (2) insightful, (3) unusual, and (4) willing to argue with each other in ways that probably felt to them like fun rather than work.

These people were a locus of much value, and their absence is extremely painful from the perspective of having interesting arguments happening here on a regular basis. Their loss seems to have been in parallel with a general decrease in public acceptance of agonism in the english speaking political world, and a widespread cultural retreat from substantive longform internet debates as a specific thing that is relevant to LW 2.0.

My impression is that part of people drifting away was because ideologically committed people swarmed into the space and tried to pull it in various directions that had little to do with what I see as the unifying theme of almost all of Eliezer's writing.

The fundamental issue seems to be existential risks to the human species from exceptionally high quality thinking with no predictably benevolent goals that was augmented by recursively improving computers (i.e. the singularity as originally defined by Vernor Vinge in his 1993 article). This original vision covers (and has always covered) Artificial Intelligence and Intelligence Amplification.

Now, I have no illusions that an unincorporated community of people can retain stability of culture or goals over periods of time longer than about 3 years.

Also, even most incorporated communities drift quite a bit or fall apart within mere decades. Sometimes the drift is worthwhile. Initially the thing now called MIRI was a non-profit called "The Singularity Institute For Artificial Intelligence". Then they started worrying that AI would turn out bad by default, and dropped the "...For Artificial Intelligence" part. Then a late arriving brand-taker-over ("Singularity University") bought their name for a large undisclosed amount of money and the real research started happening under the new name "Machine Intelligence Research Institute".

Drift is the default! As Hanson writes: Coordination Is Hard.

So basically my hope for "grit with respect to species-level survival in the face of the singularity" rests in gritty individual humans whose commitment and skills arise from a process we don't understand, can't necessarily replicate, and often can't even reliably teach newbies to identify.

Then I hope for these individuals to be able to find each other and have meaningful 1:1 conversations and coordinate at a smaller and more tractable scale to accomplish good things without too much interference from larger scale poorly coordinated social structures.

If these literal one-on-one conversations happen in a public forum, then that public forum is a place where "important conversations happen", and the conversation might or might not be enshrined... but enshrining is often not the point.

The real point is that the two gritty people had a substantive give and take conversation and will do things differently with their highly strategic lives afterwards.

Oftentimes a good conversation between deeply but differently knowledgeable people looks like an exchange of jokes, punctuated every so often by a sharing of citations (basically links to non-crap content) when a mutual gap in knowledge is identified. Dennett's theory of humor is relevant here.

This can look, to the ignorant, almost like trolling. It can look like joking about megadeath or worse. And this appearance can become more vivid if third and fourth parties intervene in the conversation, and are brusquely or jokingly directed away.

The false inference of bad faith communication becomes especially pernicious if important knowledge is being transmitted outside of the publicly visible forums (perhaps because some of the shared or unshared knowledge verges on being an infohazard).

The practical upshot of much of this is that I think a lot of the very best content on LessWrong in the past happened in the comment section, and was in the form of conversations between individuals, often one of whom regularly posted comments with a net negative score.

I offer you Tim Tyler as an example of a very old commenter who (1) reliably got net negative votes on some of his comments while (2) writing from a reliably coherent and evidence-based (but weird and maybe socially insensitive) perspective. He hasn't been around since 2014, as far as I'm aware.

I would expect Tim to have reliably ended up with a negative score on the FIRST eigendemocracy vector, while probably being unusually high (maybe the highest user) on a second or third such vector. He seems to me like the kind of person you might actually be trying to drive away, while at the same time being something of a canary for the tolerance of people genuinely focused on something other than winning at a silly social media game.
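To make the "first vs. second eigendemocracy vector" distinction concrete, here is a minimal sketch of the underlying idea: take a matrix of who-upvotes-whom and look at its leading singular vectors over authors. Everything here is illustrative, not a description of LW's actual karma system; the vote matrix is made up, and SVD is used as a stand-in for whatever spectral method a real eigendemocracy scheme would use.

```python
import numpy as np

# Hypothetical vote matrix: rows are voters, columns are authors;
# entry [i, j] is the net votes voter i has given author j's comments.
# Voters 0-2 mostly approve of each other; everyone downvotes author 3,
# who plays the role of the coherent-but-unpopular commenter.
votes = np.array([
    [0, 3, 2, -1],
    [4, 0, 3, -2],
    [3, 2, 0, -1],
    [-1, -2, -1, 0],
], dtype=float)

# The rows of Vt are the right singular vectors over authors. The first
# captures the dominant consensus axis of approval ("traditional" karma);
# later ones capture orthogonal axes along which a minority diverges.
U, s, Vt = np.linalg.svd(votes)

first_axis = Vt[0]   # dominant consensus score per author
second_axis = Vt[1]  # a second, orthogonal axis of variation

# Singular-vector signs are arbitrary; flip so consensus reads as positive.
if first_axis.sum() < 0:
    first_axis = -first_axis

# Author 3 scores lowest on the consensus axis, but may still load
# strongly on the second axis -- the "canary" pattern described above.
```

On this toy matrix, author 3 ends up with the lowest first-axis score even though their voting behavior is internally consistent, which is exactly the situation where filtering on the first vector alone would discard them.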

Upvotes don't matter except to the degree that they conduce to surviving and thriving. Getting a lot of upvotes and enshrining a bunch of ideas into the canon of our community and then going extinct as a species is LOSING.

Basically, if I had the ability, then for the purposes of learning new things I would just filter out all the people who are high on the first eigendemocracy vector.

Yes, I want those "traditionally good" people to exist and I respect their work... but I don't expect novel ideas to arise among them at nearly as high a rate, such that those ideas would even be available for propagation and eventual retention in a canon.

Also, the traditionally good people's content and conversations are probably going to be objectively improved if people high on the second, third, and fourth such vectors also have a place, and that place allows them to object in a fairly high-profile way when someone high on the first eigendemocracy vector proposes a stupid idea.

One of the stupidest ideas, which cuts pretty close to the heart of such issues, is the possible proposal that people and content whose first eigendemocracy vector component is low should be purged, banned, deleted, censored, and otherwise made totally invisible and hard to find by any means.

I fear this would be the opposite of finding yourself a worthy opponent, and another step toward actively damaging the community in the name of moderation and troll-fighting; it seems like it might be part of the mission here, which worries me.