Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

LW 2.0 Strategic Overview

Post author: Habryka 15 September 2017 03:00AM

Update: We're in open beta! You can now sign up for a new account, or log in with your LW 1.0 account (we did not copy over passwords, so hit "forgot password" to receive a password-reset email).

Hey Everyone! 

This is the post for discussing the vision that I and the rest of the LessWrong 2.0 team have for the new version of LessWrong, and for generally bringing all of you up to speed with the plans for the site. This post has been overdue for a while, but I was busy coding on LessWrong 2.0, and since I am not that great a writer, posts like this take me quite a long time, so it ended up being delayed a few times. I apologize for that.

With Vaniver’s support, I’ve been the primary person working on LessWrong 2.0 for the last 4 months, spending most of my time coding while also talking to various authors in the community, doing dozens of user-interviews and generally trying to figure out how to make LessWrong 2.0 a success. Along the way I’ve had support from many people, including Vaniver himself who is providing part-time support from MIRI, Eric Rogstad who helped me get off the ground with the architecture and infrastructure for the website, Harmanas Chopra who helped build our Karma system and did a lot of user-interviews with me, Raemon who is doing part-time web-development work for the project, and Ben Pace who helped me write this post and is basically co-running the project with me (and will continue to do so for the foreseeable future).

We are running on charitable donations, with $80k in funding from CEA in the form of an EA grant and $10k in donations from Eric Rogstad, which will go to salaries and various maintenance costs. We are planning to continue running this whole project on donations for the foreseeable future, and legally this is a project of CFAR, which helps us a bunch with accounting and allows people to get tax benefits from giving us money. 

Now that the logistics are out of the way, let’s get to the meat of this post. What is our plan for LessWrong 2.0? What were our key assumptions in designing the site? What does this mean for the current LessWrong site? And what should we as a community discuss more to make sure the new site is a success?

Here’s the rough structure of this post: 

  • My perspective on why LessWrong 2.0 is a project worth pursuing
  • A summary of the existing discussion around LessWrong 2.0 
  • The models that I’ve been using to make decisions for the design of the new site, and some of the resulting design decisions
  • A set of open questions to discuss in the comments where I expect community input/discussion to be particularly fruitful 

Why bother with LessWrong 2.0?  

I feel that, independently of how many things were and are wrong with the site and its culture, over the course of its history it has been one of the few places in the world that I know of where a spark of real discussion has happened, and where some real intellectual progress was made on actually important problems. So let me begin with a summary of the things that I think the old LessWrong got right, which are essential to preserve in any new version of the site:

On LessWrong…

 

  • I can contribute to intellectual progress, even without formal credentials 
  • I can sometimes have discussions in which the participants focus on trying to convey their true reasons for believing something, as opposed to rhetorically using all the arguments that support their position independent of whether those have any bearing on their belief
  • I can talk about my mental experiences in a broad way, such that my personal observations, scientific evidence and reproducible experiments are all taken into account and given proper weight. There is no narrow methodology I need to conform to in order to have my claims taken seriously.
  • I can have conversations about almost all aspects of reality, independently of what literary genre they are associated with or scientific discipline they fall into, as long as they seem relevant to the larger problems the community cares about
  • I am surrounded by people who are knowledgeable in a wide range of fields and disciplines, who take the virtue of scholarship seriously, and who are interested and curious about learning things that are outside of their current area of expertise
  • We have a set of non-political shared goals for which many of us are willing to make significant personal sacrifices
  • I can post long-form content that takes up as much space as it needs to, and can expect a reasonably high level of patience from my readers in trying to understand my beliefs and arguments
  • Content that I am posting on the site gets archived, is searchable and often gets referenced in other people's writing, and if my content is good enough, can even become common knowledge in the community at large
  • The average competence and intelligence on the site is high, which allows discussion to generally happen on a high level and allows people to make complicated arguments and get taken seriously
  • There is a body of writing that is generally assumed to have been read by most people participating in discussions, which establishes philosophical, social and epistemic principles that serve as a foundation for future progress (currently that body of writing largely consists of the Sequences, but also includes some of Scott’s writing, some of Luke’s writing and some individual posts by other authors) 

 

When making changes to LessWrong, I think it is very important to preserve all of the above features. I don’t think all of them are universally present on LessWrong, but all of them are there at least some of the time, and no other place that I know of comes even remotely close to having all of them as often as LessWrong has. Those features are what motivated me to make LessWrong 2.0 happen, and set the frame for thinking about the models and perspectives I will outline in the rest of the post. 

I also think Anna, in her post about the importance of a single conversational locus, says another, somewhat broader thing, that is very important to me, so I’ve copied it in here: 

1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.

3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.

4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to).

5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be a such place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

6. We have lately ceased to have a "single conversation" in this way.  Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such.  There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence.  Without such a locus, it is hard for conversation to build in the correct way.  (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

The Existing Discussion Around LessWrong 2.0

Now that I’ve given a bit of context on why I think LessWrong 2.0 is an important project, it seems sensible to look at what has been said so far, so we don’t have to repeat the same discussions over and over again. There has already been a lot of discussion about the decline of LessWrong, the need for a new platform, and the design of LessWrong 2.0. I won’t be able to summarize it all here, but I can try my best to summarize the most important points and give a bit of my own perspective on them.

Here is a comment by Alexandros, on Anna’s post I quoted above:

Please consider a few gremlins that are weighing down LW currently:

1. Eliezer's ghost -- He set the culture of the place, his posts are central material, has punctuated its existence with his explosions (and refusal to apologise), and then, upped and left the community, without actually acknowledging that his experiment (well kept gardens etc) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no-team, and at least no-team is an invitation for a team to step up.

[...]

...I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future... is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining ownership and plan clearly. A post by EY himself recognising that his vision for lw 1.0 failed and passing the batton to a generally-accepted BDFL would be nice, but i'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first (with perhaps ability to host content if people wish to, like hn) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

I think Alexandros hits a lot of good points here, and luckily these are actually some of the problems I am most confident we have solved. The biggest bottleneck – the thing that I think caused most other problems with LessWrong – is simply that there was nobody with the motivation, the mandate and the resources to fight against the inevitable decline into entropy. I feel that the correct response to the question of “why did LessWrong decline?” is to ask “why should it have succeeded?”. 

In the absence of anyone with a mandate trying to fix all the problems that naturally arise, we should expect any online platform to decline. Most of the problems that will be covered in the rest of this post could have been fixed many years ago, but simply weren’t, because nobody with the mandate put significant resources into fixing them. I think the cause of this was a diffusion of responsibility, and a lot of vague promises of problems getting solved by vague projects in the future. I myself put off working on LessWrong for a few months because I had some vague sense that Arbital would solve the problems that I was hoping to solve, even though Arbital never really promised to solve them. Then Arbital’s plan ended up not working out, and I had wasted months of precious time. 

Since this comment was written, Vaniver has been somewhat unanimously declared benevolent dictator for life of LessWrong. He and I have gotten various stakeholders on board, received funding, have a vision, and have free time – and so we have the mandate, the resources and the motivation to not make the same mistakes. With our new codebase, link posts are now something I can build in an afternoon, rather than something that requires three weeks of getting permissions from various stakeholders, performing complicated open-source and confidentiality rituals, and hiring a new contractor who has to first understand the mysterious Reddit fork from 2008 that LessWrong is based on. This means at least the problem of diffusion of responsibility is solved. 


Scott Alexander also made a recent comment on Reddit on why he thinks LessWrong declined, and why he is somewhat skeptical of attempts to revive the website: 

1. Eliezer had a lot of weird and varying interests, but one of his talents was making them all come together so you felt like at the root they were all part of this same deep philosophy. This didn't work for other people, and so we ended up with some people being amateur decision theory mathematicians, and other people being wannabe self-help gurus, and still other people coming up with their own theories of ethics or metaphysics or something. And when Eliezer did any of those things, somehow it would be interesting to everyone and we would realize the deep connections between decision theory and metaphysics and self-help. And when other people did it, it was just "why am I reading this random bulletin board full of stuff I'm not interested in?"

2. Another of Eliezer's talents was carefully skirting the line between "so mainstream as to be boring" and "so wacky as to be an obvious crackpot". Most people couldn't skirt that line, and so ended up either boring, or obvious crackpots. This produced a lot of backlash, like "we need to be less boring!" or "we need fewer crackpots!", and even though both of these were true, it pretty much meant that whatever you posted, someone would be complaining that you were bad.

3. All the fields Eliezer wrote in are crackpot-bait and do ring a bunch of crackpot alarms. I'm not just talking about AI - I'm talking about self-help, about the problems with the academic establishment, et cetera. I think Eliezer really did have interesting things to say about them - but 90% of people who try to wade into those fields will just end up being actual crackpots, in the boring sense. And 90% of the people who aren't will be really bad at not seeming like crackpots. So there was enough kind of woo type stuff that it became sort of embarassing to be seen there, especially given the thing where half or a quarter of the people there or whatever just want to discuss weird branches of math or whatever.

4. Communities have an unfortunate tendency to become parodies of themselves, and LW ended up with a lot of people (realistically, probably 14 years old) who tended to post things like "Let's use Bayes to hack our utility functions to get superfuzzies in a group house!". Sometimes the stuff they were posting about made sense on its own, but it was still kind of awkward and the sort of stuff people felt embarassed being seen next to.

5. All of these problems were exacerbated by the community being an awkward combination of Google engineers with physics PhDs and three startups on one hand, and confused 140 IQ autistic 14 year olds who didn't fit in at school and decided that this was Their Tribe Now on the other. The lowest common denominator that appeals to both those groups is pretty low.

6. There was a norm against politics, but it wasn't a very well-spelled-out norm, and nobody enforced it very well. So we would get the occasional leftist who had just discovered social justice and wanted to explain to us how patriarchy was the real unfriendly AI, the occasional rightist who had just discovered HBD and wanted to go on a Galileo-style crusade against the deceptive establishment, and everyone else just wanting to discuss self-help or decision-theory or whatever without the entire community becoming a toxic outcast pariah hellhole. Also, this one proto-alt-right guy named Eugene Nier found ways to exploit the karma system to mess with anyone who didn't like the alt-right (ie 98% of the community) and the moderation system wasn't good enough to let anyone do anything about it.

7. There was an ill-defined difference between Discussion (low-effort random posts) and Main (high-effort important posts you wanted to show off). But because all these other problems made it confusing and controversial to post anything at all, nobody was confident enough to post in Main, and so everything ended up in a low-effort-random-post bin that wasn't really designed to matter. And sometimes the only people who did post in Main were people who were too clueless about community norms to care, and then their posts became the ones that got highlighted to the entire community.

8. Because of all of these things, Less Wrong got a reputation within the rationalist community as a bad place to post, and all of the cool people got their own blogs, or went to Tumblr, or went to Facebook, or did a whole bunch of things that relied on illegible local knowledge. Meanwhile, LW itself was still a big glowing beacon for clueless newbies. So we ended up with an accidental norm that only clueless newbies posted on LW, which just reinforced the "stay off LW" vibe.

I worry that all the existing "resurrect LW" projects, including some really high-effort ones, have been attempts to break coincidental vicious cycles - ie deal with 8 and the second half of 7. I think they're ignoring points 1 through 6, which is going to doom them.

At least judging from where my efforts went, I would agree that I have spent a pretty significant amount of resources on fixing the problems that Scott describes in points 6 and 7, but I also spent about equal time thinking about how to fix 1-5. The broader perspective I have on those latter points is, I think, best illustrated by an analogy: 

When I read Scott’s comments about how there was just a lot of embarrassing and weird writing on LessWrong, I remember my experiences as a Computer Science undergraduate. When the median undergrad makes claims about the direction of research in their field, or some other big claim about their field that isn't explicitly taught in class, or when you ask an undergraduate physics student what they think about how to do physics research, or what ideas they have for improving society, you will often get quite naive-sounding answers (I have heard everything from “I am going to build a webapp to permanently solve political corruption” to “here’s my idea of how we can transmit large amounts of energy wirelessly by using low-frequency tesla-coils”). I don’t think we should expect anything different on LessWrong. I actually think we should expect it to be worse here, since we are actively encouraging people to have opinions, as opposed to the more standard practice of academia, which seems to consist of treating undergraduates as slightly more intelligent dogs that need to be conditioned with the right mixture of calculus homework problems and mandatory class attendance, so that they might be given the right to have any opinion at all if they spend 6 more years getting their PhD. 

So while I do think that Eliezer’s writing encouraged topics that were slightly more likely to attract crackpots, I think a large chunk of the weird writing is just a natural consequence of being an intellectual community that has a somewhat constant influx of new members. 

And having undergraduates go through the phase where they have bad ideas, and then have it explained to them why their ideas are bad, is important. I actually think it’s key to learning any topic more complicated than high-school mathematics. It takes a long time until someone can productively contribute to the intellectual progress of an intellectual community (in academia it’s at least 4 years, though usually more like 8), and during all of that period they will say very naive and silly-sounding things (though less and less so as time progresses). I think LessWrong can do significantly better than 4 years, but we should still expect that it will take new members time to acclimate and get used to how things work. (Based on user-interviews with a lot of top commenters, it usually took something like 3-6 months until someone felt comfortable commenting frequently, and about 6-8 months until someone felt comfortable posting frequently. This strikes me as a fairly reasonable expectation for the future.) 

And I do think that the rationality community has many graduate students and tenured professors who are not Eliezer, who do not sound like crackpots, who can speak reasonably about the same topics Eliezer talked about, and who I feel are acting with a very similar focus to what Eliezer tried to achieve: Luke Muehlhauser, Carl Shulman, Anna Salamon, Sarah Constantin, Ben Hoffman, Scott himself and many more, most of whose writing would fit very well on LessWrong (and often still ends up there). 

But all of this doesn’t mean that what Scott describes isn’t a problem. It is still a bad experience for everyone to constantly have to read through bad first-year undergrad essays, but I think the solution can’t involve those essays not getting written at all. Instead, it has to involve some way of not forcing everyone to see those essays, while still allowing them to get promoted if someone shows up who does write something insightful from day one. I am currently planning to tackle this mostly with improvements to the karma system, as well as changes to the layout of the site, where users primarily post to their own profiles and can get content promoted to the frontpage by moderators and high-karma members. A feed consisting solely of content of the quality of the average Scott, Anna, Ben or Luke post would be an amazing read, and that is exactly the kind of feed I am hoping to create with LessWrong, while still allowing users to engage with the rest of the content on the site (more on that later).

I would very roughly summarize what Scott says in the first 5 points as two major failures: first, a failure to separate the signal from the noise; and second, a failure to enforce moderation norms when people did turn out to be crackpots, or were unable to productively engage with the material on the site. Both are natural consequences of the abandonment of promoting things to Main, the fact that discussion is by default ordered by recency rather than by some kind of scoring system, and the fact that the moderation tools were completely insufficient (more on the details of that in the next section).


My models of LessWrong 2.0

I think there are three major bottlenecks that LessWrong is facing (after the zeroth bottleneck, which is just that no single group had the mandate, resources and motivation to fix any of the problems): 

  1. We need to be able to build on each other’s intellectual contributions, archive important content and avoid primarily being news-driven
  2. We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing
  3. We need to actively moderate in a way that is both fun for the moderators, and helps people avoid future moderation policy violations

I. 

The first bottleneck for our community, and the biggest I think, is the ability to build common knowledge. On Facebook, I can read an excellent and insightful discussion, yet one week later I will have forgotten it. Even if I remember it, I don’t link to the Facebook post (because linking to Facebook posts/comments is hard), and it doesn’t have a title, so I don’t casually refer to it in discussion with friends. On Facebook, ideas don’t get archived and built upon; they get discussed and forgotten. To put this another way: the reason we cannot build on the best ideas this community has had over the last five years is that we don’t know what they are. There are only fragments of memories of Facebook discussions, which maybe some other people remember. We have the Sequences, but there is no way to build on them together as a community, and thus there is stagnation.

Contrast this with science. Modern science is plagued by many severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. The physics community has this system where the new ideas get put into journals, and then eventually if they’re new, important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them. I think the rationality community has some textbooks, written by Eliezer (and we also compiled a collection of Scott’s best posts that I hope will become another textbook of the community), but there is no expectation that if you write a good enough post/paper that your content will be included in the next generation of those textbooks, and the existing books we have rarely get updated. This makes the current state of the rationality community analogous to a hypothetical state of physics, had physics no journals, no textbook publishers, and only one textbook that is about a decade old. 

This seems to me to be what Anna is talking about: the purpose of a single locus of conversation is the ability to have common knowledge and build on it. The goal is to have every interaction with the new LessWrong feel like it is either helping you grow as a rationalist or having you contribute to the lasting intellectual progress of the community. If you write something good enough, it should enter the canon of the community. If you make a strong enough case against some existing piece of canon, you should be able to replace or alter that canon. I want writing for the new LessWrong to feel timeless. 

To achieve this, we’ve built the following things: 

  • We created a section for core canon on the site that is prominently featured on the frontpage and right now includes Rationality: A-Z, The Codex (a collection of Scott’s best writing, compiled by Scott and us), and HPMOR. Over time I expect these to change, and there is a good chance HPMOR will move to a different section of the site (I am considering adding an “art and fiction” section) and will be replaced by a new collection representing new core ideas in the community.
  • Sequences are now a core feature of the website. Any user can create sequences out of their own and other users’ posts, and those sequences can themselves be voted and commented on. The goal is to help users compile the best writing on the site, and make it so that good timeless writing gets read by users for a long time, as opposed to disappearing into the void. Separating creative and curatorial effort allows the sort of professional specialization that you see in serious scientific fields.
  • Of those sequences, the most upvoted and most important ones will be chosen to be prominently featured on other sections of the site, allowing users easy access to read the best content on the site and get up to speed with the current state of knowledge of the community.
  • For all posts and sequences, the site keeps track of how much of them you’ve read (including importing view-tracking from the old LessWrong, so you will get to see how much of the original Sequences you’ve actually read). And if you’ve read all of a sequence, you get a small badge that you can choose to display right next to your username, which helps others see how much of the content of the site you are familiar with.
  • The design of the core content of the site (e.g. the Sequences, the Codex, etc.) tries to communicate a certain permanence of contributions. The aesthetic feels intentionally book-like, which I hope gives people a sense that their contributions will be archived, accessible and built-upon.
    One important issue with this is that there also needs to be a space for sketches on LessWrong. To quote Paul Graham: “What made oil paint so exciting, when it first became popular in the fifteenth century, was that you could actually make the finished work from the prototype. You could make a preliminary drawing if you wanted to, but you weren't held to it; you could work out all the details, and even make major changes, as you finished the painting.”
  • We do not want to discourage sketch-like contributions, and want to build functionality that helps people build a finished work from a prototype (this is one of the core competencies of Google Docs, for example).

And there are some more features the team is hoping to build in this direction, such as: 

  • Easier archiving of discussions by allowing discussions to be turned into top-level posts (similar to what Ben Pace did with a recent Facebook discussion between Eliezer, Wei Dai, Stuart Armstrong, and some others, which he turned into a post on LessWrong 2.0)
  • The ability to continue reading the content you’ve started reading with a single click from the frontpage. Here's an example logged-in frontpage:

[screenshot of an example logged-in frontpage]

II.

The second bottleneck is improving the signal-to-noise ratio. It needs to be possible for someone to subscribe to only the best posts on LessWrong, and only the most important content needs to be turned into common knowledge. 

I think this is a lot of what Scott was pointing at in his summary of the decline of LessWrong. We need a way for people to learn from their mistakes, while also not flooding the inboxes of everyone else, and while giving people active feedback on how to improve their writing. 

The site structure: 

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong: 

The writing experience: 

If you write a post, it first shows up nowhere but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages (or only after it hits a certain karma threshold, if the users who subscribed to you set a minimum karma threshold). If you have enough karma, you can decide to promote your content to the main frontpage feed (where everyone will see it by default), or a moderator can decide to promote it (if you allowed promoting on that specific post). The frontpage itself is sorted by a scoring system based on the HN algorithm, which uses a combination of total karma and how much time has passed since the creation of the post. 
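For concreteness, an HN-style score could look something like the sketch below. The gravity exponent (1.8) and the age offset (+2 hours) are assumptions drawn from common descriptions of the Hacker News ranking formula, not values taken from the actual LessWrong 2.0 codebase:

```python
from datetime import datetime, timezone

def frontpage_score(karma: int, posted_at: datetime, now: datetime,
                    gravity: float = 1.8) -> float:
    """HN-style score: karma decays polynomially with the post's age."""
    age_hours = (now - posted_at).total_seconds() / 3600.0
    return karma / (age_hours + 2) ** gravity

# Two posts with equal karma: the newer one ranks higher.
now = datetime(2017, 9, 15, 12, 0, tzinfo=timezone.utc)
fresh = frontpage_score(10, datetime(2017, 9, 15, 10, 0, tzinfo=timezone.utc), now)
stale = frontpage_score(10, datetime(2017, 9, 14, 12, 0, tzinfo=timezone.utc), now)
assert fresh > stale
```

The point of this shape of formula is that high-karma posts sink gradually rather than being pinned, so the frontpage keeps turning over.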

If you write a good comment on a post a moderator or a high-karma user can promote that comment to the frontpage as well, where we will also feature the best comments on recent discussions. 

Meta

Meta will just be a section of the site to discuss changes to moderation policies, issues and bugs with the site, and site features, as well as general site-policy issues: basically the meta section that all StackExchange sites have. Karma here will not add to your total karma and will not give you more influence over the site. 

Featured posts

In addition to the main feed, there is a promoted-posts section that you can subscribe to via email and RSS, with on average three posts a week, which for now will just be chosen by moderators and editors on the site as the posts that seem most important to turn into common knowledge for the community. 

Meetups (implementation unclear)

There will also be a separate section of the site for meetups and event announcements that will feature a map of meetups, and generally serve as a place to coordinate the in-person communities. The specific implementation of this is not yet fully figured out. 

Shortform (implementation unclear)

Many authors (including Eliezer) have requested a section of the site for more short-form thoughts, more similar to the length of an average FB post. It seems reasonable to have a section of the site for that, though I am not yet fully sure how it should be implemented. 

Why? 

The goal of this structure is to allow users to post to LessWrong without their content being directly exposed to the whole community. Their content can first be shown to the people who follow them, or to the people who actively seek out content from the broader community by scrolling through all new posts. Then, if a high-karma user among them finds their content worth posting to the frontpage, it will get promoted. The key to this is a larger userbase with the ability to promote content (i.e. many more users than currently have the ability to promote content to Main on the current LessWrong), and the continued filtering of the frontpage based on the karma level of the posts. 

The goal of all of this is to let users see good content at various levels of engagement with the site, while giving some personalization options so that people can follow the authors they are particularly interested in, and while also ensuring that this does not sabotage the attempt at building common knowledge, by having the best posts from the whole ecosystem featured and promoted on the frontpage. 

The karma system:

Another thing I've been working on to fix the signal-to-noise ratio is improving the karma system. It's important that the people having the most significant insights are able to shape a field more. If you're someone who regularly produces real insights, you're better able to notice and surface other good ideas. To achieve this we've built a new karma system in which your upvotes and downvotes carry more weight if you already have a lot of karma. So far the weighting is a very simple heuristic: your upvotes and downvotes count for log base 5 of your total karma. Ben and I will publish another top-level post discussing just the karma system in the next few weeks, but feel free to ask any questions now, and we will include those in that post.
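For concreteness, a sketch of that weighting heuristic. The flooring and the minimum weight of 1 are my assumptions; the post only specifies the log-base-5 relationship:

```python
import math

def vote_weight(voter_karma: int) -> int:
    """Weight of a single up/downvote: log base 5 of the voter's
    total karma, floored, with an assumed minimum weight of 1."""
    return max(1, int(math.log(max(voter_karma, 1), 5)))
```

Under this heuristic a brand-new user's vote counts for 1, while a user with a few thousand karma counts for roughly 4 or 5.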

(I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation. How trusted you are as a user (your karma) is based on how much trusted users upvote you, and the circularity of this definition is solved using linear algebra.)
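A minimal sketch of what such a PageRank-style karma allocation could look like. This illustrates the general idea only; the vote-matrix representation, damping factor, and normalization are all assumptions, not the system being built:

```python
import numpy as np

def eigen_karma(votes: np.ndarray, damping: float = 0.85,
                iters: int = 100) -> np.ndarray:
    """PageRank-style karma: user i's score is the karma-weighted
    trust received from other users. votes[i, j] = 1 means user j
    upvoted user i."""
    n = votes.shape[0]
    col_sums = votes.sum(axis=0).astype(float)
    col_sums[col_sums == 0] = 1.0          # users who never vote
    m = votes / col_sums                   # each voter hands out one unit of trust
    karma = np.full(n, 1.0 / n)
    for _ in range(iters):                 # power iteration solves the circular definition
        karma = (1 - damping) / n + damping * (m @ karma)
    return karma / karma.sum()
```

The circularity ("trusted users are those upvoted by trusted users") is resolved exactly as in PageRank: the scores converge to the dominant eigenvector of the damped trust matrix.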

I am also interested in having some form of two-tiered voting, similarly to how Facebook has a primary vote interaction (the like) and a secondary interaction that you can access via a tap or a hover (angry, sad, heart, etc.). But the implementation of that is also currently undetermined. 

III.

The third and last bottleneck is an actually working moderation system that is fun for moderators to use, while also giving people whose content was moderated a sense of why, and of how they can improve. 

The most common basic complaint on LessWrong currently pertains to trolls and sockpuppet accounts, which the reddit fork's mod tools are vastly inadequate for dealing with (Scott's sixth point refers to this). Raymond Arnold and I are currently building more nuanced mod tools, which include the ability for moderators to set the past/future votes of a user to zero, to see who upvoted a post, and to know the IP address that an account comes from (these will be ready by the open beta). 

Besides that, we are currently cultivating a moderation group we are calling the "Sunshine Regiment." Members of the Sunshine Regiment will have the ability to take various smaller moderation actions around the site (such as temporarily suspending comment threads, making general moderating comments in a distinct font, and promoting content), and so will be able to shape the culture and content of the website to a larger degree.

The goal is moderation that goes far beyond dealing with trolls, and actively makes the epistemic norms a ubiquitous part of the website. Right now Ben Pace is thinking about moderation norms that encourage archiving and summarizing good discussion, as well as other patterns of conversation that will help the community make intellectual progress. He’ll be posting to the open beta to discuss what norms the site and moderators should have in the coming weeks. We're both in agreement that moderation can and should be improved, and that moderators need better tools, and would appreciate good ideas about what else to give them.


How you can help and issues to discuss:

The open beta of the site is starting in a week, and so you can see all of this for yourself. For the duration of the open beta, we’ll continue the discussion on the beta site. At the conclusion of the open beta, we plan to have a vote open to those who had a thousand karma or more on 9/13 to determine whether we should move forward with the new site design, which would move to the lesswrong.com url from its temporary beta location, or leave LessWrong as it is now. (As this would represent the failure of the plan to revive LW, this would likely lead to the site being archived rather than staying open in an unmaintained state.) For now, this is an opportunity for the current LessWrong community to chime in here and object to anything in this plan.

During the open beta (and only during that time) the site will also have an Intercom button in the bottom right corner that allows you to chat directly with us. If you run into any problems, or notice any bugs, feel free to ping us directly on there and Ben and I will try to help you out as soon as possible.

Here are some issues where discussion would be particularly fruitful: 

  • What are your thoughts about the karma system? Does an eigendemocracy based system seem reasonable to you? How would you implement the details? Ben and I will post our current thoughts on this in a separate post in the next two weeks, but we would be interested in people’s unprimed ideas.
  • What are your experiences with the site so far? Is anything glaringly missing, or are there any bugs you think I should definitely fix? 
  • Do you have any complaints or thoughts about how work on LessWrong 2.0 has been proceeding so far? Are there any worries or issues you have with the people working on it? 
  • What would make you personally use the new LessWrong? Is there any specific feature that would make you want to use it? For reference, here is our current feature roadmap for LW 2.0.
  • And most importantly, do you think that the LessWrong 2.0 project is doomed to failure for some reason? Is there anything important I missed, or something that I misunderstood about the existing critiques?
The closed beta can be found at www.lesserwrong.com.

Ben, Vaniver, and I will be in the comments!

Comments (294)

Comment author: BrassLion 11 October 2017 09:17:47PM 1 point [-]

I will say that lesserwrong is already useful to me, and I'm poking around reading a few things. I haven't been on LessWrong (this site) in a long time before just now, and only got here because I was wondering where this "LesserWrong" site came from. So, at the very least, your efforts are reaching people like me who often read and sometimes change their behavior based on posts, but rarely post themselves. Thanks for all the work you did - the UX end of the new site is much, much better.

Comment author: AFinerGrain 03 October 2017 12:45:06AM 1 point [-]

I've always been half-way interested in LessWrong. SlateStar, Robin Hanson, and Bryan Caplan have been favorite reading for a very long time. But every once in a while I'd have a look at LessWrong, read something, and forget about it for months at a time.

After the rework I find this place much more appealing. I created a profile and I'm even commenting. I hope one day I can contribute. But honestly, I feel 200% better about just browsing and reading.

Great job.

Comment author: ChristianKl 20 September 2017 01:04:06PM 1 point [-]

I think one big problem with using the Reddit codebase was that, while there was a lot of additional upstream code development, we couldn't simply copy that code over, since adapting it for LW required editing the source directly.

Given that you have now published the code under an MIT license, I ask myself whether it would be good to have a separate open-source project for the basic engine behind the website that can be used by different communities.

The effective altruism forum also used a Reddit fork and might benefit from using the basic engine behind the website as well. If there's a good openly licensed engine I would expect it to be used by additional projects and that as a result more people would contribute to the code.

Have you thought about such a setup? If so, why do you believe that having one GitHub project for LessWrong 2.0 is the right decision?

Comment author: efenj 19 September 2017 01:21:56AM 4 points [-]

Thank you, very much for making this effort! I love the new look of the site — it reminds me of http://practicaltypography.com/ which is (IMO) the nicest looking site on the internet. I also like the new font.

Some feedback, especially regarding the importing of old posts.

  • Firstly, I'm impressed by the fact that the old links (with s/lesswrong.com/lesserwrong.com/) seem to consistently redirect to the correct new locations of the posts and comments. The old anchor tag links (like http://lesswrong.com/lw/qx/timeless_identity/#kl2 ) do not work, but with the new structuring of the comments on the page that's probably unavoidable.

  • Some comments seem to have just disappeared (e.g. http://lesswrong.com/lw/qx/timeless_identity/dhmt ). I'm not sure if these are deliberate or not.

  • Both the redirection and the new version, in general, somehow feel slow/heavy in a way that the old versions did not (I'd chalk that up to my system being to blame, but why would it disproportionately affect the new rather than the old versions?).

  • Images seem to be missing from the new versions (e.g. from http://lesswrong.com/lw/qx/timeless_identity/ ; for instance https://www.lesserwrong.com/static/imported/2008/06/02/manybranches4.png does not exist)

  • Citations (blockquotes) are not standing out very well in the new versions, to the extent that I have trouble easily determining where they end and the surrounding text restarts. (A possible means of improving this could perhaps be to increase the padding of blockquotes.) For an example, see http://lesswrong.com/lw/qx/timeless_identity .

  • Straight quotation marks ("), rather than (“ ”) look out of place with the new font (I have no idea how to easily remedy this.) For examples, yet again see http://lesswrong.com/lw/qx/timeless_identity .

Comment author: Gram_Stone 18 September 2017 02:03:52PM 4 points [-]

Will there be LaTeX support?

Comment author: DragonGod 20 September 2017 08:01:12AM 0 points [-]

Please add this.

Comment author: JenniferRM 17 September 2017 11:22:27PM *  19 points [-]

I'm super impressed by all the work and the good intentions. Thank you for this! Please take my subsequent text in the spirit of trying to help bring about good long term outcomes.

Fundamentally, I believe that a major component of LW's decline isn't in the primary article and isn't being addressed. Basically, over time the site lost a lot of people who were (1) lazy, (2) insightful, (3) unusual, and (4) willing to argue with each other in ways that probably felt to them like fun rather than work.

These people were a locus of much value, and their absence is extremely painful from the perspective of having interesting arguments happening here on a regular basis. Their loss seems to have happened in parallel with a general decrease in public acceptance of agonism in the English-speaking political world, and a widespread cultural retreat from substantive longform internet debates, both of which are specifically relevant to LW 2.0.

My impression is that part of people drifting away was because ideologically committed people swarmed into the space and tried to pull it in various directions that had little to do with what I see as the unifying theme of almost all of Eliezer's writing.

The fundamental issue seems to be existential risks to the human species from exceptionally high quality thinking with no predictably benevolent goals that was augmented by recursively improving computers (i.e. the singularity as originally defined by Vernor Vinge in his 1993 article). This original vision covers (and has always covered) Artificial Intelligence and Intelligence Amplification.

Now, I have no illusions that an unincorporated community of people can retain stability of culture or goals over periods of time longer than about 3 years.

Also, even most incorporated communities drift quite a bit or fall apart within mere decades. Sometimes the drift is worthwhile. Initially the thing now called MIRI was a non-profit called "The Singularity Institute for Artificial Intelligence". Then they started worrying that AI would turn out bad by default, and dropped the "...for Artificial Intelligence" part. Then a late-arriving brand-taker-over ("Singularity University") bought their name for a large undisclosed amount of money, and the real research started happening under the new name "Machine Intelligence Research Institute".

Drift is the default! As Hanson writes: Coordination Is Hard.

So basically my hope for "grit with respect to species-level survival in the face of the singularity" rests in gritty individual humans whose commitment and skills arise from a process we don't understand, can't necessarily replicate, and often can't even reliably teach newbies to identify.

Then I hope for these individuals to be able to find each other and have meaningful 1:1 conversations and coordinate at a smaller and more tractable scale to accomplish good things without too much interference from larger scale poorly coordinated social structures.

If these literal 1-on-1 conversations happen in a public forum, then that public forum is a place that "important conversations happen" and the conversation might be enshrined or not... but this enshrining is often not the point.

The real point is that the two gritty people had a substantive give and take conversation and will do things differently with their highly strategic lives afterwards.

Oftentimes a good conversation between deeply but differently knowledgeable people looks like an exchange of jokes, punctuated every so often by a sharing of citations (basically links to non-crap content) when a mutual gap in knowledge is identified. Dennett's theory of humor is relevant here.

This can look, to the ignorant, almost like trolling. It can look like joking about megadeath or worse. And this appearance can become more vivid if third and fourth parties intervene in the conversation, and are brusquely or jokingly directed away.

The false inference of bad faith communication becomes especially pernicious if important knowledge is being transmitted outside of the publicly visible forums (perhaps because some of the shared or unshared knowledge verges on being an infohazard).

The practical upshot of much of this is that I think that a lot of the very best content on Lesswrong in the past happened in the comment section, and was in the form of conversations between individuals, often one of whom regularly posted comments with a net negative score.

I offer you Tim Tyler as an example of a very old commenter who (1) reliably got net negative votes on some of his comments while (2) writing from a reliably coherent and evidence based (but weird and maybe socially insensitive) perspective. He hasn't been around since 2014 that I'm aware of.

I would expect Tim to have reliably ended up with a negative score on his FIRST eigendemocracy vector, who would also probably be unusually high (maybe the highest user) on a second or third such vector. He seems to me like the kind of person you might actually be trying to drive away, while at the same time being something of a canary for the tolerance of people genuinely focused on something other than winning at a silly social media game.

Upvotes don't matter except to the degree that they conduce to surviving and thriving. Getting a lot of upvotes and enshrining a bunch of ideas into the canon of our community and then going extinct as a species is LOSING.

Basically, if I had the ability, I would, for the purposes of learning new things, just filter out all the people who are high on the first eigendemocracy vector.

Yes, I want those "traditionally good" people to exist and I respect their work... but I don't expect novel ideas to arise among them at nearly as high a rate, to even be available for propagation and eventual retention in a canon.

Also, the traditionally good people's content and conversations are probably going to be objectively improved if people high in the second and third and fourth such vectors also have a place, and that place allows them the ability to object in a fairly high profile way when someone high in the first eigendemocracy vector component proposes a stupid idea.

One of the stupidest ideas, that cuts pretty close to the heart of such issues, is the possible proposal that people and content whose first eigendemocracy vector are low should be purged, banned, deleted, censored, and otherwise made totally invisible and hard to find by any means.

I fear this would be the opposite of finding yourself a worthy opponent and another step in the direction of active damage to the community in the name of moderation and troll fighting, and it seems like it might be part of the mission, which makes me worried.

Comment author: ESRogs 10 October 2017 03:08:46PM 0 points [-]

I would expect Tim to have reliably ended up with a negative score on his FIRST eigendemocracy vector, who would also probably be unusually high (maybe the highest user) on a second or third such vector.

Is there a natural interpretation of what the first vector means vs what the second or third mean? My lin alg is rusty.

Comment author: Lukas 12 November 2017 01:37:09PM 1 point [-]

I wondered the same thing. The explanation I've come up with is the following:

See https://en.wikipedia.org/wiki/Linear_dynamical_system for the relevant math.

Assuming the interaction matrix is diagonalizable, the system state can be represented as a linear combination of the eigenvectors. The eigenvector with the largest positive eigenvalue grows the fastest under the system dynamics. Therefore, the respective component of the system state will come to dominate, growing much larger than the others. (The growth of the components is exponential.) Ultimately, the normalized system state will be approximately equal to the fastest-growing eigenvector, unless there are other eigenvectors growing equally strongly.

If we assume the eigenvalues are non-degenerate and thus sortable by size, one can identify the strongest-growing eigenvector, the second-strongest-growing eigenvector, etc. I think this is what JenniferRM means by 'first' and 'second' eigenvector.
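A tiny numerical illustration of the above, assuming a symmetric (hence diagonalizable) interaction matrix; the matrix itself is made up:

```python
import numpy as np

# Made-up symmetric interaction matrix; its eigenvalues are 3, 1, and 1.
a = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
eigvals, eigvecs = np.linalg.eigh(a)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]      # "first" = largest eigenvalue
first = eigvecs[:, order[0]]           # the dominant direction
# Repeatedly applying `a` pulls (almost) any starting state onto `first`,
# which is the exponential-growth dominance described above:
state = np.array([1.0, 0.0, 1.0])
for _ in range(50):
    state = a @ state
    state /= np.linalg.norm(state)
```

After 50 iterations the normalized state is aligned (up to sign) with the first eigenvector.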

Comment author: ESRogs 17 September 2017 10:09:32AM 3 points [-]

If you write a post, it first shows up nowhere but your personal user page, which you can basically think of as a medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages

Some questions about this (okay if you don't have answers now):

  • Can anyone make a personal page?
  • Are there any requirements for the content -- does it need to be "rationality" themed, or can it be whatever the user wants (with the expectation that only LW-appropriate stuff will get promoted to the general frontpage)?
  • Can a user get kicked off for inappropriate content (whatever that means)?
Comment author: Habryka 17 September 2017 07:13:51PM 1 point [-]

"Can anyone make a personal page? Are there any requirements for the content -- does it need to be "rationality" themed, or can it be whatever the user wants (with the expectation that only LW-appropriate stuff will get promoted to the general frontpage)? Can a user get kicked off for inappropriate content (whatever that means)?"

Current answer to all of those is:

I don't have a plan for that yet; let's figure it out as we run into the problem. For now, having too much traffic or content on the site seems like a less important failure mode, even if that content is bad, as long as it doesn't clog up everyone else's attention.

I would probably suggest warning and eventually banning people who repeatedly try to bring highly controversial politics onto the site, or who repeatedly act in bad faith or taste, so I don't think we want to leave those personal pages fully unmoderated. But the moderation threshold should be a good bit higher than on the main page. No other constraints on content for now.

Comment author: ChristianKl 20 September 2017 01:09:04PM 2 points [-]

When deciding whether to publish content, it seems important to me whether or not that content is welcome. Unclarity about the policy can hold people back from contributing.

Comment author: Benito 17 September 2017 07:04:22PM *  4 points [-]

Thanks for the questions.

  • From the start, all user pages will be personal pages. If you make an account, you'll have a basic blog.
  • No requirements for the content. This is for people in the community (and others) to write about whatever they're interested in. If you want a place to write those short statistical oddities you've been posting to tumblr; if you want a place to write those not-quite-essays you've been posting to facebook; if you want a place to try out writing full blog posts; if you wish, you can absolutely do that here.
  • I expect we'll have some basic norms of decency. I've not started the discussion within the Sunshine Regiment on what these will be yet, but once we've had a conversation we'll open it up to input from the community, and I'll make sure to publish clearly both the norms and info on what happens when someone breaks a norm.
Comment author: Habryka 17 September 2017 07:15:20PM 2 points [-]

Apparently Ben and I responded to this at the same time. We seem to have mostly said the same things, so we are apparently fairly in sync.

Comment author: Craig_Heldreth 17 September 2017 02:27:52AM 1 point [-]

What would make you personally use the new LessWrong?

Quality content. Quality content. And quality content.

Is there any specific feature that would make you want to use it?

The features which I would most like to see:

Wiki containing all or at least most of the jargon.

Rationality quotations all in one file alphabetically ordered by author of the quote.

Book reviews and topical reading lists.

Pie in the sky: the Yudkowsky sequences edited, condensed, and put into an Aristotelian/Thomistic/Scholastic order. (Not that Aristotle or Thomas Aquinas ever did this, but the tradition of the scholastics was always to aim for this pie in the sky.) It might be interesting to see what an experienced book editor would advise doing with this material.

Everything I would want to not see has been covered by yourself or others in this thread.

Comment author: DragonGod 17 September 2017 09:32:00AM 2 points [-]

Pie in the sky: the Yudkowsky sequences edited, condensed, and put into an Aristotelian/Thomistic/Scholastic order. (Not that Aristotle or Thomas Aquinas ever did this, but the tradition of the scholastics was always to aim for this pie in the sky.) It might be interesting to see what an experienced book editor would advise doing with this material.

Doesn't Rationality: From AI to Zombies achieve this already?

Comment author: ingres 17 September 2017 01:46:37PM 0 points [-]

Rat:A-Z is like...a slight improvement over EY's first draft of the sequences. I think when Craig says condensed he has much more substantial editing in mind.

Comment author: Benito 17 September 2017 07:06:45PM 5 points [-]

FYI R:AZ is shorter than The Sequences by a factor of 2, which I think is a substantial improvement. Not that it couldn't be shorter still ;-)

Comment author: gjm 19 September 2017 11:14:04AM 1 point [-]

How much of that is selection (omitting whole articles) and how much is condensation (making individual articles shorter)?

Comment author: Benito 19 September 2017 08:29:00PM 1 point [-]

I don't know for sure, my guess is 80/20. Rob wrote some great introductions that give more context, but mostly the remaining posts are written the same (I think).

Comment author: ingres 17 September 2017 08:41:54PM 1 point [-]

Oh huh, TIL. Thanks!

Comment author: DragonGod 16 September 2017 09:53:14PM *  6 points [-]

I think adding a collection of the best Overcoming Bias posts, including posts like "you are never entitled to your own opinion" to the front page would be a great idea, and it might be better than putting a link to HPMOR (some users seem to believe that linking HPMOR on the front page may come across as puerile).

Comment author: Habryka 16 September 2017 11:53:37PM 3 points [-]

I agree that I really want a Robin Hanson collection in a similar style to how we already have a Scott Alexander collection. We will have to coordinate with Robin on that. I can imagine him being on board, but I can also imagine him being hesitant to have all his content crossposted to another site. He seemed to prefer having full control over everything on his own page, and apparently didn't end up posting very much on LessWrong, even as LW ended up with a much larger community and much more activity.

Comment author: DragonGod 17 September 2017 01:13:16AM 2 points [-]

Well, maintaining links to them (if he prefers them on his site) might be an acceptable compromise then? I think Robin's posts are a core part of the "rationalist curriculum", and the site would be incomplete if we don't include them.

Comment author: Yosarian2 16 September 2017 09:00:26PM *  4 points [-]

My concern around the writing portion of your idea is this: from my point of view, the biggest problem with lesswrong is that the sheer quantity of new content is extremely low. In order for a LessWrong 2.0 to succeed, you absolutely have to get more people spending the time and effort to create great content. Anything you do to make it harder for people to contribute new content will make that problem worse. Especially anything that creates a barrier for new people who want to post something in discussion. People will not want to write content that nobody might see unless it happens to get promoted.

Once you get a constant stream of content on a daily basis, then maybe you can find a way to curate it to highlight the best content. But you need that stream of content and engagement first and foremost or I worry the whole thing may be stillborn.

Comment author: Habryka 16 September 2017 11:56:18PM 4 points [-]

Agree with this.

I do however think that we already have a really large stream of high-quality content in the broader rationality diaspora that we just need to tap into and get onto the new page. As such, the problem is a bit easier than getting a ton of new content creators; it is instead more a problem of building something that the current content creators want to move towards.

And as soon as we have a high-quality stream of new content I think it will be easier to attract new writers who will be looking to expand their audience.

Comment author: Yosarian2 17 September 2017 01:48:05AM *  3 points [-]

Maybe; there certainly are a lot of good rationalist bloggers who have at least at some point been interested in LessWrong. I don't think bloggers will come back, though, unless the site first becomes more active than it currently is. (They may give it a chance after the beta is rolled out, but if activity doesn't increase quickly they'll leave again.) Activity and an active community are necessary to keep a project like this going. Without an active community here there's no point in coming back instead of posting on your own blog.

I guess my concern here though is that right now, LessWrong has a "discussion" side which is a little active and a "main" side which is totally dead. And it sounds like this plan would basically get rid of the discussion side, and make it harder to post on the main side. Won't the most likely outcome just be to lower the amount of content and the activity level even more, maybe to zero?

Fundamentally, I think the premise of your second bottleneck is incorrect. We don't really have a problem with signal-to-noise ratio here, most of the posts that do get posted here are pretty good, and the few that aren't don't get upvoted and most people ignore them without a problem. We have a problem with low total activity, which is almost the exact opposite problem.

Comment author: Viliam 19 September 2017 11:17:14PM 0 points [-]

the sheer quantity of new content is extremely low

That depends on how much time you actually want to spend reading LW. I mean, the optimal quantity will be different for a person who reads LW two hours a day, or a person who reads LW two hours a week. Now the question is which one of these should we optimize LW for? The former seems more loyal, but the latter is probably more instrumentally rational if we agree that people should be doing things besides reading web. (Also, these days LW competes for time with SSC and others.)

Comment author: Yosarian2 19 September 2017 11:28:55PM 1 point [-]

Ideally, you would want to generate enough content for the person who wants to read LW two hours a day, an then promote or highlight the best 5%-10% of the content so someone who has only two hours a week can see it.

Everyone is much better off that way. The person with only two hours a week is getting much better content than if there were much less content to begin with.

Comment author: Viliam 20 September 2017 10:38:32PM 1 point [-]

If LW2 remembers who read what, I guess "a list of articles you haven't read yet, ordered by highest karma, and secondarily by most recent" would be a nice feature that would scale automatically.
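That feature is essentially a filter plus a two-key sort; a minimal sketch, where the tuple representation of posts is made up for illustration:

```python
from datetime import datetime

def unread_feed(posts, read_ids):
    """Unread posts, highest karma first, ties broken by recency.
    `posts` is a list of (post_id, karma, posted_at) tuples."""
    unread = [p for p in posts if p[0] not in read_ids]
    # reverse=True on the (karma, posted_at) key gives karma
    # descending, then most recent first among equal-karma posts.
    return sorted(unread, key=lambda p: (p[1], p[2]), reverse=True)
```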

Comment author: DragonGod 16 September 2017 05:13:39PM *  6 points [-]

On StackExchange, you lose reputation whenever you downvote a question/answer; this makes downvoting a costly signal of displeasure. I like the notion, and hope it is included in the new site. If you have to spend your hard-earned karma to cause someone to lose karma, it may discourage karma assassination, and ensure that downvotes are only used on content people have strong negative feelings towards.

Pros

  1. Users only downvote content they feel strong displeasure towards.
  2. Karma assassination via sockpuppets becomes impossible, and targeted karma attacks through your main account (because you dislike a user) become very costly.
  3. Moderation of downvoting behaviour would be vastly reduced as users downvote less, and only on content they have strong feelings towards.

Cons

  1. There are far fewer downvotes.
  2. Downvotes arguably shouldn't be costly. On StackExchange, mediocre content can get a high score if it relates to a popular topic.
    Given that this website has the goal of filtering content so that people who only want to read a subset can find the high-quality posts, downvotes of mediocre content are useful information.

I think the first con is a feature and not a bug; it is not clear to me that more downvotes are intrinsically beneficial. The second point is valid criticism, and I think we need to weigh the benefit of the downvotes against their cost.

I think you lose one reputation point per downvote, and cause the person downvoted to lose 2-5 reputation.

I think downvoting costing 0.33-0.5 of the karma you deduct from the target is a good idea; it will encourage better downvoting practices and would overall be an improvement to the karma feature.
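The mechanic proposed above could be sketched roughly as follows. This is purely illustrative: the 0.4 cost ratio, the function name, and the karma values are assumptions picked for the example, not anything the site actually implements.

```python
# Illustrative sketch of a "costly downvote" karma rule.
# DOWNVOTE_COST_RATIO and all numbers are assumptions, not site policy.

DOWNVOTE_COST_RATIO = 0.4  # voter pays this fraction of the karma deducted

def apply_downvote(voter_karma: int, target_karma: int, weight: int = 5):
    """Return updated (voter_karma, target_karma) after one downvote.

    `weight` is how much karma the downvote removes from the target;
    the voter pays a proportional cost, rounded to the nearest point.
    """
    cost = round(weight * DOWNVOTE_COST_RATIO)
    return voter_karma - cost, target_karma - weight

print(apply_downvote(100, 50))  # (98, 45): a 5-point downvote costs the voter 2
```

Under these numbers a downvote that deducts 5 karma costs the voter 2, matching the 40% ratio discussed in this thread.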

Comment author: ChristianKl 20 September 2017 01:20:29PM 2 points [-]

I don't think downvotes should be costly. On StackExchange, mediocre content can get a high score if it relates to a popular topic.

Given that this website has the goal of filtering content so that people who only want to read a subset can find the high-quality posts, downvotes of mediocre content are useful information.

Comment author: DragonGod 20 September 2017 07:32:43PM 0 points [-]

I'll add the point you raise about downvotes to the "cons" of my argument.

Comment author: Viliam 19 September 2017 10:58:14PM *  1 point [-]

So... let's imagine that one day the website will attract e.g. hundreds of crackpots... each of them posting obviously crazy stuff, dozens of comments each... but most people will hesitate to downvote them, because they would remember that doing so reduces their own karma.

Okay, this will probably not happen. But I think that downvoting is an important thing and should not be disincentivized per se. Bad stuff needs to get downvoted. Actually, other than Eugine, people usually don't downvote enough. (And for Eugine, this is not a problem at all; he will get the karma back by upvoting himself with his other sockpuppets.)

I think it is already too easy to get a lot of karma on LW just by posting a lot of mediocre quality comments, each getting 1 karma point on average. Sometimes I suspect that maybe half of my own karma is for the quality of things I wrote, and the remaining half is for spending too much time commenting here even when I have nothing especially insightful to say.

Comment author: DragonGod 20 September 2017 07:53:21AM *  0 points [-]

Okay, this will probably not happen.

Thank God you agree, and thus I think its value as a thought experiment is nil.

But I think that downvoting is an important thing and should not be disincentivized per se.

Disincentivising downvoting discourages frivolous use of downvotes, and encourages responsible downvoting usage.

If you just disagree with someone, you're more likely to reply than downvote them if you care about your karma for example.

Actually, other than Eugine, people usually don't downvote enough. (And for Eugine, this is not a problem at all; he will get the karma back by upvoting himself with his other sockpuppets.)

On StackExchange, upvotes and downvotes from accounts with less than 15 rep are recorded but don't count (presumably until the account gains more than 15 rep). LW may decide to set its bar lower (10 rep?) or higher (>= 20 rep?), but I think the core insight is very good and applying it would be a significant improvement to LW.
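A minimal sketch of that StackExchange-style threshold (the 15-point cutoff is taken from the comment above; the function name and data shapes are invented for illustration):

```python
# Record every vote, but only count votes from accounts at or above a
# reputation threshold toward the displayed score. Threshold is configurable.

MIN_REP_TO_COUNT = 15

def displayed_score(votes, reputations):
    """votes: list of (user_id, +1 or -1); reputations: dict user_id -> rep.

    All votes are stored, but only those from users meeting the
    reputation threshold contribute to the displayed score."""
    return sum(v for uid, v in votes
               if reputations.get(uid, 0) >= MIN_REP_TO_COUNT)

votes = [("a", 1), ("b", -1), ("c", 1)]
reps = {"a": 120, "b": 3, "c": 15}
print(displayed_score(votes, reps))  # 2: user b's downvote is recorded but not counted
```

Because the low-rep votes are still stored, they could start counting retroactively once the account crosses the threshold, which seems to be how the StackExchange behaviour described above works.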

Comment author: Habryka 16 September 2017 11:30:34PM *  4 points [-]

Hmm... I feel that this disincentivizes downvoting too strongly, and just makes downvoting feel kind of shitty on an emotional level.

An alternative thing that I've been thinking about is to make it so that when you downvote something, you have to give a short explanation between 40 and 400 characters about why you think the comment was bad. Which both adds a cost to downvoting, and actually translates that cost into meaningful information for the commenter. Another alternative implementation of this could work with a set of common tags that you can choose from when downvoting a comment, maybe of the type "too aggressive", "didn't respond to original claim", "responded to strawman", etc.
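The two alternatives Habryka describes (a required 40-400 character explanation, or a fixed set of tags) could be gated with a check like this. The tag list and function name are invented for illustration; only the character bounds and example tags come from the comment.

```python
# Accept a downvote only if it carries either a recognized tag or a
# free-text explanation within the required length bounds.
# ALLOWED_TAGS is illustrative, taken from the examples in the comment.

ALLOWED_TAGS = {"too aggressive", "didn't respond to original claim",
                "responded to strawman"}

def validate_downvote(reason: str = "", tag: str = "") -> bool:
    """Return True if the downvote may be recorded."""
    if tag:
        return tag in ALLOWED_TAGS
    return 40 <= len(reason) <= 400

print(validate_downvote(tag="responded to strawman"))  # True
print(validate_downvote(reason="too short"))           # False
```

The tag path makes downvoting cheap but still informative; the free-text path adds friction while giving the commenter actionable feedback, which is the trade-off being debated below.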

Comment author: DragonGod 17 September 2017 01:09:44AM *  1 point [-]

Hmm... I feel that this incentivizes downvoting too strongly

How does this incentivise downvoting? Downvoting is a costly signal of displeasure, and as downvotes cost a certain fraction of the karma you deduct, it disincentivises downvoting.

makes downvoting feel kind of shitty on an emotional level.

This is a feature, not a bug; we don't want to encourage frivolous downvoting and karma assassination. The idea is that downvoting becomes a costly signal of displeasure, so mere disagreement would not cause downvoting.

An alternative thing that I've been thinking about is to make it so that when you downvote something, you have to give a short explanation between 40 and 400 characters about why you think the comment was bad. Which both adds a cost to downvoting, and actually translates that cost into meaningful information for the commenter.

I thought of this as well, but decided that the StackExchange system of making downvotes cost karma is better for the purposes I thought of.

Another alternative implementation of this could work with a set of common tags that you can choose from when downvoting a comment, maybe of the type "too aggressive", "didn't respond to original claim", "responded to strawman", etc.

This fails to achieve "adds a cost to downvoting"; if there are custom downvoting tags, then the cost of downvoting is removed. I think making downvotes cost a fraction (<= 0.5) of the karma you deduct serves to discourage downvoting.

Comment author: Habryka 17 September 2017 02:40:51AM 0 points [-]

"How does this incentivise downvoting?"

Sorry, my bad. I wanted to write "disincentivize", but failed. I guess it's a warning against using big words.

Comment author: DragonGod 17 September 2017 09:24:07AM *  1 point [-]

Oh, okay. I still think we want to disincentivise downvoting though.

Pros

  1. Users only downvote content they feel strong displeasure towards.
  2. Karma assassination via sockpuppets becomes impossible, and targeted karma attacks through your main account (because you dislike a user) become very costly.
  3. Moderation of downvoting behaviour would be vastly reduced as users downvote less, and only on content they have strong feelings towards.

Cons

  1. There are far fewer downvotes.
  2. Downvotes arguably shouldn't be costly. On StackExchange, mediocre content can get a high score if it relates to a popular topic.
    Given that this website has the goal of filtering content so that people who only want to read a subset can find the high-quality posts, downvotes of mediocre content are useful information.

I think the first con is a feature and not a bug; it is not clear to me that more downvotes are intrinsically beneficial. The second point is valid criticism, and I think we need to weigh the benefit of the downvotes against their cost.

I suggest users lose 40% of the karma they deduct (since you want to give different users different weights). For example, if you downvote someone, they lose 5 karma, but you lose 2 karma.

Comment author: NancyLebovitz 17 September 2017 07:32:02PM 2 points [-]

How about the boring simplicity of having downvote limits? Maybe something around one downvote/24 hours-- not cumulative.

If you're feeling generous, maybe add a downvote/24 hours per 1000 karma, with a maximum of 5 downvotes/24 hours.
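The allowance described above could be computed roughly like this (the function name is invented; the base rate, per-karma bonus, and cap follow the numbers in the comment):

```python
# Illustrative daily downvote allowance: 1 base downvote per 24 hours,
# plus 1 per 1000 karma, capped at 5. Numbers follow the comment above.

def daily_downvote_limit(karma: int, base: int = 1,
                         per_karma: int = 1000, cap: int = 5) -> int:
    """How many downvotes a user may cast in a 24-hour window."""
    bonus = karma // per_karma
    return min(base + bonus, cap)

print(daily_downvote_limit(500))   # 1
print(daily_downvote_limit(2500))  # 3
print(daily_downvote_limit(9000))  # 5 (capped)
```

Since the allowance is non-cumulative, unused downvotes would simply expire at the end of each window rather than banking up.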

Comment author: J_Thomas_Moros 18 September 2017 05:41:56PM 1 point [-]

I'm not opposed to downvote limits, but I think they need to not be too low. There are situations where I am more likely to downvote many things just because I am more heavily moderating. For example, on comments on my own post I care more and am more likely to both upvote and downvote whereas other times I might just not care that much.

Comment author: DragonGod 17 September 2017 08:02:19PM 1 point [-]

This is a solution as well; it is not clear to me though, that it is better than the solution I proposed.

Comment author: DragonGod 16 September 2017 05:08:02PM 1 point [-]

I've often faced frustration (I access LW from mobile) when the "close" button gets clicked accidentally; it is often not visible when typing in portrait mode (my phone can't show the comment while typing in landscape, and I'm used to the portrait keyboard), and clicking it loses the entire comment. This is very demotivating and quite frustrating. I hope this is not a problem in Lesswrong 2.0, and that functionality for saving drafts of comments is added.

Comment author: Habryka 16 September 2017 11:42:13PM 2 points [-]

Yeah, the design of the commenting UI is sufficiently different, and more optimized for mobile, that I expect this problem to be gone. That said, we are still having some problems with our editor on mobile, and it will take a bit to sort that out.

Comment author: lifelonglearner 16 September 2017 04:17:57PM 2 points [-]

Two things I'd like to see:

1) Some sort of "example-pedia" where, in addition to some sort of glossary, we're able to crowd-source examples of the concepts to build upon understanding. I think examples continue to be in short supply, and that's a large understanding gap, especially when we deal with concepts unfamiliar to most people.

2) Something similar to Arbital's hover-definitions, or a real-time searchable glossary that's easily available.

I think the above two things could be very useful features, given the large swath of topics we like to discuss, from cognitive psych to decision theory, to help people more invested in one area more easily swap to reading stuff in another area.

Comment author: richardbatty 16 September 2017 12:07:41PM *  15 points [-]

Have you done user interviews and testing with people who it would be valuable to have contribute, but who are not currently in the rationalist community? I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned.

You should just test this empirically, but here are some vague ideas for how you could increase the credibility of the site to these people:

  • My main concern is that lesswrong 2.0 will come across as (or will actually be) a bizarre subculture, rather than a quality intellectual community. The rationality community is offputting to some people who on the face of it should be interested (such as myself). A few ways you could improve the situation:
    • Reduce the use of phrases and ideas that are part of rationalist culture but are inessential for the project, such as references to HPMOR. I don't think calling the moderation group "sunshine regiment" is a good idea for this reason.
    • Encourage the use of standard jargon from academia where it exists, rather than LW jargon. Only coin new jargon words when necessary.
    • Encourage writers to do literature reviews to connect to existing work in relevant fields.
  • It could also help to:
    • Encourage quality empiricism. It seems like rationalists have a tendency to reason things out without much evidence. While we don't want to force a particular methodology, it would be good to nudge people in an empirical direction.
    • Encourage content that's directly relevant to people doing important work, rather than mainly being abstract stuff.
Comment author: scarcegreengrass 19 September 2017 04:05:08PM 1 point [-]

This is a real dynamic that is worth attention. I particularly agree with removing HPMoR from the top of the front page.

Counterpoint: The serious/academic niche can also be filled by external sites, like https://agentfoundations.org/ and http://effective-altruism.com/.

Comment author: Nisan 19 September 2017 04:28:16AM 1 point [-]

Regarding a couple of your concrete suggestions: I like the idea of using existing academic jargon where it exists. That way, reading LW would teach me search terms I could use elsewhere or to communicate with non-LW users. (Sometimes, though, it's better to come up with a new term; I like "trigger-action plans" way better than "implementation intentions".)

It would be nice if users did literature reviews occasionally, but I don't think they'll have time to do that often at all.

Comment author: Habryka 16 September 2017 11:50:40PM *  12 points [-]

I feel that this comment deserves a whole post in response, but I probably won't get around to that for a while, so here is a short summary:

  • I generally think people have confused models about which forms of weirdness are actually costly. The much more common failure mode for online communities is being boring and uninteresting. The vast majority of the most popular online forums are really weird and have a really strong, distinct culture. The same is true of religions. There are forms of weirdness that prevent you from growing, but I feel that implementing the suggestions in this comment in a straightforward way would mostly result in the forum becoming boring and actually stunting its meaningful growth.

  • LessWrong is more than just weird in a general sense. A lot of the things that make LessWrong weird are actually the result of people having thought about how to have discourse, and then actually implementing those norms. That doesn't mean that they got it right, but if you want to build a successful intellectual community you have to experiment with norms around discourse, and avoiding weirdness puts a halt to that.

  • I actually think that one of the biggest problems with Effective Altruism is the degree to which large parts of it are weirdness-averse, which I see as one of the major reasons why EA hasn't really produced any particularly interesting insights or updates in the past few years. CEA at least seems to agree with me (probably partially because I used to work there and shaped the culture a bit, so this isn't independent), and tried to counteract this by making the explicit theme of this year's EA Global in SF "accepting the weird parts of EA". As such, I am not very interested in appeasing current EAs' need for normalcy and properness, and instead hope that this will move EA towards becoming more accepting of weird things.

I would love to give more detailed reasoning for all of the above, but time is short, so I will leave it at this. I hope this gave people at least a vague sense of my position on this.

Comment author: richardbatty 17 September 2017 06:55:47PM 6 points [-]

You're mainly arguing against my point about weirdness, which I think was less important than my point about user testing with people outside of the community. Perhaps I could have argued more clearly: the thing I'm most concerned about is that you're building lesswrong 2.0 for the current rationality community rather than thinking about what kinds of people you want to be contributing to it and learning from it and building it for them. So it seems important to do some user interviews with people outside of the community who you'd like to join it.

On the weirdness point: maybe it's useful to distinguish between two meanings of 'rationality community'. One meaning is the intellectual community of people who further the art of rationality. Another meaning is more of a cultural community: a set of people who know each other as friends, have similar lifestyles and hobbies, like the same kinds of fiction, share in-jokes, etc. I'm concerned that lesswrong 2.0 will select for people who want to join the cultural community, rather than people who want to join the intellectual community. But the intellectual community seems much more important. This gives us two types of weirdness: weirdness that comes out of the intellectual content of the community is important to keep (ideas such as existential risk fit in here). Weirdness that comes more out of the cultural community seems unnecessary, such as references to HPMOR.

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds. They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture. I'd like to see lesswrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.

Comment author: John_Maxwell_IV 18 September 2017 05:05:16AM *  4 points [-]

We can make an analogy with science here: scientists come from a wide range of cultural, political, and religious backgrounds.

I'm not persuaded that this is substantially more true of scientists than people in the LW community.

Notably, the range of different kinds of expertise that one finds on LW is much broader than that of a typical academic department (see "Profession" section here).

They come together to do science, and are selected on their ability to do science, not their desire to fit into a subculture.

I don't think people usually become scientists unless they like the culture of academic science.

I'd like to see lesswrong 2.0 to be more like this, i.e. an intellectual community rather than a subculture.

I think "intellectual communities" are just a high-status kind of subculture. "Be more high status" is usually not useful advice.

I think it might make sense to see academic science as a culture that's optimized for receiving grant money. Insofar as it is bland and respectable, that could be why.

If you feel that receiving grant money and accumulating prestige is the most important thing, then you probably also don't endorse spending a lot of time on internet fora. Internet fora have basically never been a good way to do either of those things.

Comment author: richardbatty 18 September 2017 09:19:57AM 3 points [-]

The core of my argument is: try to select as much as possible on what you care about (ability and desire to contribute and learn from lesswrong 2.0) and as little as possible on stuff that's not so important (e.g. do they get references to hpmor). And do testing to work out how best to achieve this.

By intellectual community I wasn't meaning 'high status subculture', I was trying to get across the idea of a community that selects on people's ability to make intellectual contributions, rather than fit in to a culture. Science is somewhat like this, although as you say there is a culture of academic science which makes it more subculture-like. stackoverflow might be a better example.

I'm not hoping that lesswrong 2.0 will accumulate money and prestige, I'm hoping that it will make intellectual progress needed for solving the world's most important problems. But I think this aim would be better served if it attracted a wide range of people who are both capable and aligned with its aims.

Comment author: NancyLebovitz 17 September 2017 07:27:56PM 3 points [-]

My impression is that you don't understand how communities form. I could be mistaken, but I think communities form because people discover they share a desire rather than because there's a venue that suits them-- the venue is necessary, but stays empty unless the desire comes into play.

" I'm thinking people who are important for existential risk and/or rationality such as: psychologists, senior political advisers, national security people, and synthetic biologists. I'd also include people in the effective altruism community, especially as some effective altruists have a low opinion of the rationalist community despite our goals being aligned."

Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?

Comment author: richardbatty 17 September 2017 08:19:47PM 1 point [-]

"I think communities form because people discover they share a desire"

I agree with this, but would add that it's possible for people to share a desire with a community but not want to join it because there are aspects of the community that they don't like.

"Is there something they want to do which would be better served by having a rationality community that suits them better than the communities they've got already?"

That's something I'd like to know. But I think it's important for the rationality community to attempt to serve these kinds of people both because these people are important for the goals of the rationality community and because they will probably have useful ideas to contribute. If the rationality community is largely made up of programmers, mathematicians, and philosophers, it's going to be difficult for it to solve some of the world's most important problems.

Perhaps we have different goals in mind for lesswrong 2.0. I'm thinking of it as a place to further thinking on rationality and existential risk, where the contributors are anyone who both cares about those goals and is able to make a good contribution. But you might have a more specific goal: a place to further thinking on rationality and existential risk, but targeted specifically at the current rationality community so as to make better use of the capable people within it. If you had the second goal in mind then you'd care less about appealing to audiences outside of the community.

Comment author: NancyLebovitz 17 September 2017 08:50:14PM 3 points [-]

I'm fond of LW (or at least its descendants). I'm somewhat weird myself, and more tolerant of weirdness than many.

It has taken me years and some effort to get a no doubt incomplete understanding of people who are repulsed by weirdness.

From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen.

The community you imagine might be a very good thing. It may have to be created by the people who will be in it. Maybe you could start the survey process?

I'm hoping that the LW 2.0 software will be open source. The world needs more good discussion venues.

Comment author: richardbatty 18 September 2017 09:05:51AM 2 points [-]

"From my point of view, you are proposing to destroy something I like which has been somewhat useful in the hopes of creating a community which might not happen."

I think a good argument against my position is that projects need to focus quite narrowly, and it makes sense to focus on the existing community given that it's also already produced good stuff.

Hopefully that's the justification that the project leaders have in mind, rather than them focusing on the current rationality community because they think that there aren't many people outside of it who could make valuable contributions.

Comment author: NancyLebovitz 16 September 2017 02:37:09PM 7 points [-]

It seems to me that you want to squeeze a lot of the fun out of the site.

I'm not sure how far it would be consistent with having a single focus for rationality online, but perhaps there should be a section or a nearby site for more dignified discussion.

I think the people you want to attract are likely to be busy, and not necessarily interested in interviews and testing for a rather hypothetical project, but I could be wrong.

Comment author: moridinamael 15 September 2017 09:48:48PM 12 points [-]

I've heard that in some cases, humans regard money to be an incentive.

Integrating Patreon, Paypal or some existing micropayments system could allow users to not only upvote but financially reward high-value community members.

If Less Wrong had a little "support this user on Patreon" icon next to every poster's username, I would certainly have thrown some dollars at more than a handful of Less Wrong posters. Put more explicitly - maybe Yvain and Eliezer would be encouraged to post certain content on LW2.0 rather than SSC/Facebook if they reliably got a little cash from the community at large every time they did it.

Speaking of the uses of money, I'm fond of communities that are free to read but require a small registration fee in order to post. Such fees are a practically insurmountable barrier to trolls. Eugine Nier could not have done what he did if registering an account cost $10, or even $1.

Comment author: John_Maxwell_IV 16 September 2017 07:37:18AM *  7 points [-]

Does anyone know the literature on intrinsic motivation well enough to comment on whether paying users to post is liable to undermine other sources of motivation?

The registration fee idea is interesting, but exacerbates the chicken and egg problem inherent in online communities. I also have a hunch that registration fees tend to make people excessively concerned with preserving their account's reputation (so they can avoid getting banned and losing something they paid money for), in a way that's cumulatively harmful to discourse, but I can't prove this.

Comment author: lifelonglearner 16 September 2017 03:14:17PM *  7 points [-]

Yep!

See here and here

As one might expect, money is often a deterrent for actual habituation.

EDIT: Additional clarification:

The first link shows that monetary payment is only effective as a short-term motivator.

The second link is a massive study involving almost 2,000 people which tried to pay people to go to the gym. It found that after the payment period ended, gym attendance fell back to roughly pre-payment levels.

Comment author: Elo 16 September 2017 07:50:13AM 1 point [-]

Yes it will probably cause people to devalue the site. If you pay a dollar it will tend to "feel like" the entire endeavour is worth a dollar.

Comment author: John_Maxwell_IV 17 September 2017 02:06:48AM 1 point [-]

I was talking about paying people to contribute. Not having people pay for membership.

Comment author: NancyLebovitz 16 September 2017 02:29:35PM 1 point [-]

Metafilter has continued to be a pretty good site even though it requires a small fee to join. There's also a requirement to post a few comments (you can comment for free but need to be a member to do top level posts) and wait a week after sending in money. And it's actively moderated.

http://www.metafilter.com/about.mefi

Comment author: moridinamael 16 September 2017 02:27:18PM 1 point [-]

So charge $50 =)

Comment author: Alicorn 15 September 2017 08:23:41PM 11 points [-]

I feel more optimistic about this project after reading this! I like the idea of curation being a separate action and user-created sequence collections that can be voted on. I'm... surprised to learn that we had view tracking that can figure out how much Sequence I have read? I didn't know about that at all. The thing that pushed me from "I hope this works out for them" to "I will bother with this myself" is the Medium-style individual blog page; that strikes a balance between desiderata in a good place for me, and I occasionally idly wish for a place for thoughts of the kind I would tweet and the size I would tumbl but wrongly themed for my tumblr.

I don't like the font. Serifs on a screen are bad. I can probably fix this client side or get used to it but it stood out to me a surprising amount. But I'm excited overall.

Comment author: quanticle 28 September 2017 09:31:38PM 1 point [-]

In the age of the Internet and in the company of nonconformists, it does get a little tiring reading the 451st public email from someone saying that the Common Project isn't worth their resources until the website has a sans-serif font.

Eliezer Yudkowsky

Comment author: gjm 29 September 2017 01:54:07PM 0 points [-]

It may be worth saying explicitly that this is from 2009 and therefore can't be talking about responses to "LW 2.0".

Comment author: SaidAchmiz 16 September 2017 04:59:23PM *  7 points [-]

I don't like the font. … I can probably fix this client side or get used to it but it stood out to me a surprising amount.

My other comment aside, this is (apart from the general claim) a reasonable user concern. I would recommend (to the LW 2.0 folks) the following simple solution:

  • Have several pre-designed themes (one with a serif font, one with a well-chosen sans font, and then "dark theme" versions of both, at least)
  • Let users select between those themes via their Profile screen

This should satisfy most people, and would still preserve the site's aesthetics.

Comment author: Habryka 16 September 2017 11:20:43PM 1 point [-]

I am slightly hesitant to force authors to think about what their posts will look like in different fonts and different styles. While I don't expect this to be a problem most of the time, there are posts I write where the font choice would matter for how the content comes across.

Medium allows the writer to choose between a sans-serif and a serif font, which I like a bit more, but I expect it would not really satisfy Alicorn's preferences.

Maintaining multiple themes also adds a lot of design constraints and complexity to updating various parts of the page. The width of a button might change with different fonts, and depending on the implementation, you might end up needing to add special cases for each theme choice, which I would really prefer to avoid.

Overall, my hypothesis is that Alicorn might not dislike serif fonts in general, but might be unhappy about our specific choice of serif font, which is indeed very serify. I would be curious whether she also has a similar reaction to the default Medium font, for example as displayed in this post: https://medium.com/@pshrmn/a-simple-react-router-v4-tutorial-7f23ff27adf

Comment author: SaidAchmiz 18 September 2017 08:19:39AM *  2 points [-]

Update: I have a recommendation for you!

Take a look at this page: https://wiki.obormot.net/Reference/MerriweatherFontsDemo

The Merriweather and Merriweather Sans fonts (available for free via Google Fonts) are, as you can see, designed to be identical in width, line spacing, etc. They are quite interchangeable, in body text, UI, etc. Both are quite readable, and aesthetically pleasing.

(As a bonus, active on that page is a tiny bit of JavaScript trickery that sets different body text font weights depending on whether the client is running on a Mac, Windows, or Linux platform, to ensure that everyone sees basically the same thing, and enjoys equally good text readability, despite differences in text rendering engines. Take a look at the page source to see how it's done!)

UPDATE 2: A couple of mockups (linking to individual images because Imgur's zoom sucks otherwise). Be sure to zoom in on each image (i.e. view at full magnification):

LW 2.0 with Merriweather:

LW 2.0 with Merriweather Sans:

Comment author: Viliam 19 September 2017 10:39:33PM *  0 points [-]

When I compare the two examples, the second one feels "clear", while the first one feels "smudgy". I have to focus more to read the first one.

EDIT: Windows 10, Firefox 55.0.3, monitor 1920x1080 px

Comment author: SaidAchmiz 19 September 2017 11:00:54PM *  2 points [-]

1. OS (and version), browser (and version), device/display, etc.?

(General note: folks, please, please include this information whenever you say anything to a web designer/developer/etc. about how a website looks or works for you!!)

2. Great! If one of them feels clear, then this goes to show exactly what I was saying: user choice is good.

Comment author: Alicorn 17 September 2017 01:18:39AM 2 points [-]

The Medium font is much less bad but still not great.

Comment author: SaidAchmiz 16 September 2017 11:43:06PM *  6 points [-]

As with many such things, there are standard, canonical solutions to your concerns.

In this case, the answer is "select pairs/sets of fonts that are specifically designed to have the same width in both the serif and the sans variants". There are many such "font superfamilies". If you'd like, I can draw up a list of recommendations. (It would be helpful if you could let me know your constraints w.r.t. licensing and budget.)

Theme variants do not have to be comprehensive redesigns. It is eminently possible to design a set of themes that will not lead to the content being perceived very differently depending on the active theme.

P.S.:

Overall, my hypothesis is that Alicorn might not dislike serif-fonts in general, but might be unhappy about our specific choice of serif fonts, which is indeed very serify.

I suspect the distinction you're looking for, here, is between transitional serifs (of which Charter, the Medium font, is one, although it's also got slab-serif elements) and the quite different old-style serifs (of which ET Book, the current LW 2.0 font, is one). (There are also other differences, orthogonal to that distinction—such as ET Book's considerably smaller x-height—which also affect readability.)

Alicorn, if you're reading this, I wonder what your reaction is to the font used on this website:

https://www.readthesequences.com

P.P.S.: It is also possible that the off-black text color is negatively impacting readability! (Especially since it can interact in a somewhat unfortunate manner with certain text rendering engines.)

Alicorn, what OS and browser are you viewing the LW 2.0 site on?

Comment author: Alicorn 17 September 2017 01:19:57AM 3 points [-]

I do not like the readthesequences font. It feels like I'm back in grad school and also reading is suddenly harder.

I'm on a Mac 'fox.

Comment author: SaidAchmiz 17 September 2017 02:37:50AM *  3 points [-]

Ok, thanks!

FYI, your assessment is in the extreme minority; most people who have seen that site have responded very positively to the font choice (and the typography in general). This suggests that your preferences are unusual, in this sphere.

I say this, not to suggest that your preference / reaction is somehow "wrong" (that would be silly!), but a) to point out the danger in generalizing from one's own example (typical mind blah blah), and b) to underscore the importance of user choice and customization options!

rest of this response is not specifically for Alicorn but is re: this whole comment thread

This is still a gold standard of UX design: sane defaults plus good[1] customizability.

[1] "Good" here means:

  • comprehensive
  • intuitive
  • non-overwhelming (i.e. layered)

Note, these are ideals, not basic requirements; every step we take toward the ideal is a good step. So by no means should you (the designer/developer) ever feel like "comprehensive customizability is an unreachable goal; there's no reason to bother, since Doing It Right™ is too much effort"! So in this case, just offering a couple of themes, which are basic variations on each other (different-but-matching font choices, a different color scheme), is already a great thing and will greatly improve the user experience.
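For concreteness, a minimal sketch of what "a couple of themes, which are basic variations on each other" might look like mechanically (theme names, font choices, and the storage key are all my assumptions):

```javascript
// Hypothetical sketch: two themes that are basic variations on each other,
// expressed as CSS custom properties and switched with a few lines of JS.
const THEMES = {
  serif:     { '--body-font': 'Merriweather, serif',             '--body-color': '#000' },
  sansSerif: { '--body-font': '"Merriweather Sans", sans-serif', '--body-color': '#000' },
};

// Sane defaults: unknown or missing theme names fall back to the serif theme.
function themeProperties(name) {
  return THEMES[name] || THEMES.serif;
}

// In a browser, apply the user's saved choice on load:
if (typeof document !== 'undefined') {
  const props = themeProperties(localStorage.getItem('theme'));
  for (const [key, value] of Object.entries(props)) {
    document.documentElement.style.setProperty(key, value);
  }
}
```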

Comment author: DragonGod 16 September 2017 03:45:53PM 1 point [-]

Agreed, generally, it seems that sans serif are for screens, and serif is for print.

Comment author: SaidAchmiz 16 September 2017 04:53:35PM 4 points [-]

This is old "received wisdom", and hasn't been the case for quite a while.

Folks, this is what people mean when they talk about LessWrong ignoring the knowledge of experts. Here's a piece of "knowledge" about typography and web design, that is repeated unreflectively, without any consideration of whether there exists some relevant body of domain knowledge (and people with that domain knowledge).

What do the domain experts have to say? Let's look:

But this domain knowledge has not, apparently, reached LessWrong; here, "Serifs on a screen are bad" and "sans serif are for screens, and serif is for print" is still true.

And now we have two people agreeing with each other about it. So, what? Does that make it more true? What if 20 people upvoted those comments, and five more other LessWrongers posted in agreement? Would that make it more true? What amount of karma and local agreement does it take to get to the truth?

Comment author: gjm 19 September 2017 11:53:04AM 6 points [-]

Here's what I think is the conventional wisdom about serif/sans-serif; I don't think it is in any way contradicted by the material you've linked to.

Text that is small when measured in display pixels is generally harder to read fluently when set in a typeface with serifs.

Only interested in readers with lovely high-DPI screens? Go ahead, use serifs everywhere; it'll probably be fine. Writing a headline, or a splash screen with like 20 words on it? Use serifs if they create the effect you want; the text won't be small enough, nor will there be enough of it in a block, for there to be a problem.

But if you are choosing a typeface for substantial chunks of text that might be read on a not-so-great screen, you will likely get better results with a sans-serif typeface.

So, what about those domain experts? Jakob Nielsen is only addressing how things look on "decent computer screens with pixel densities of 220 PPI or more". Design Shack article 1 says that a blanket prohibition on serifed typefaces on screens is silly, which it is. But look at the two screenshots offered as counterexamples to "Only use serifs in print". One has a total of seven words in it. The other has a headline in a typeface with serifs ... followed by a paragraph of sans-serif text. Design Shack article 2 says that sans-serif typefaces are better "for low-resolution displays", though it's not perfectly clear what they count as low-resolution. The Quora question has a bunch of answers saying different things, mostly not from "domain experts" in any strong sense.

I like seriffed typefaces. In a book, sans-serif is generally hideous and offputting to me. On my phone or my laptop, both of which have nice high-resolution displays, Lesser Wrong content with serifs looks just fine. (Better than if it were set sans-serif? Dunno.) On the desktop machine I'm using right now, though, it's ugly and it feels more effortful to read than the corresponding thing on, say, Less Wrong. For me, that is.

now we have two people agreeing [...] Does that make it more true?

Yes. More precisely: the proposition we should actually care about here is not some broad generality about serif versus sans-serif typefaces, but something like "Users of Lesser Wrong will, on the whole, find it a bit easier on the eyes if content is generally set in sans-serif typefaces". Consider the limiting case where every LW user looks at the site and says "ugh, don't like that font, the serifs make it harder for me to read". Even if all those users are shockingly ignorant of typography, this is a case where if no one likes it, then it is ipso facto bad.

Of course we don't have (anything like) the entire LW community saying in chorus how much they dislike those serifs. But yes, when what matters is the experience of a particular group of people, each individual person who finds a thing bad does contribute to its badness, and each individual person who says it's bad does provide evidence for its badness.

What amount of karma and local agreement does it take to get to the truth?

Karma is relevant here only as a proxy for participation. A crude answer to this question is: enough to constitute a majority of users, weighted by frequency of use.

In case I haven't made it clear enough yet, I am not arguing that LW people are always right, or that high-karma LW people are always right. I am arguing that when the thing at issue is the experience of LW people, the experiences of LW people should not be dismissed. And I am arguing that on the more general question (are typefaces with serifs a bad idea on the web?) the simple answer "no; that's an outdated bit of bogus conventional wisdom" is in fact just as wrong as the simple answer "yes; everyone knows that".

Comment author: SaidAchmiz 19 September 2017 05:32:13PM *  1 point [-]

And I am arguing that on the more general question (are typefaces with serifs a bad idea on the web?) the simple answer "no; that's an outdated bit of bogus conventional wisdom" is in fact just as wrong as the simple answer "yes; everyone knows that".

Disagree. (Keep reading for details.)

But if you are choosing a typeface for substantial chunks of text that might be read on a not-so-great screen, you will likely get better results with a sans-serif typeface.

This is still incorrect, because serif readability is superior to that of sans-serif, and see below for the matter of "not-so-great screens".

Screen DPI

Given the pixel resolution per character you need to make serifs work, they are inferior on the screen… if you have a 72ppi (or less) display.

Now, such displays exist; here's one. They are quite rare, though, and designed for entertainment, not work. The idea that any appreciable percentage of LW users have such hardware seems implausible.

On a ~96ppi display (such as this nearly decade-old cheap flat-panel I'm using right now, or indeed any display made in the past 15+ years), the apparent (angular, a.k.a. "CSS reference pixel") font size that you need to bring out the superiority of serif typefaces is no larger than the minimum size called for by other accessibility guidelines.
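The arithmetic behind these ppi comparisons is simple (a sketch; the CSS spec defines 1 CSS px as 1/96 inch):

```javascript
// How many device pixels each CSS px gets on a display of a given pixel
// density. CSS defines 1px as 1/96 inch, so the ratio is just ppi / 96.
function devicePixelsPerCssPx(ppi) {
  return ppi / 96;
}

// A ~96ppi desktop panel gives each CSS px a single device pixel, which is
// why fine serifs struggle there; a 220ppi "Retina"-class display gives each
// CSS px roughly 2.3 device pixels to work with.
```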

“The LW 2.0 font is less readable”

On the desktop machine I'm using right now, though, it's ugly and it feels more effortful to read than the corresponding thing on, say, Less Wrong. For me, that is.

1. What OS is this on? If the answer is "Linux" or "Windows", then part of the answer is "text rendering works very differently on those operating systems, and you have to a) test your site on those systems, b) make sure to make typographic choices that compensate, c) take specific actions to ensure that the user experience is adjusted for each client platform". I of course can't speak to (a), but (b) and (c) are not in evidence here.

2. The body text font size on LW 2.0 is too small (especially for that font), period. Again I refer you to https://www.readthesequences.com/Biases-An-Introduction; the body text is at 21px there. I consider that to be a minimum (adjusted for the particular font); whereas LW 2.0 (with a similar type of font) is at 16px. Yes, it looks tiny and hard to read. (But have you tried zooming in? What happens then?)

3. Other issues, like color (#444, in this case) affecting text rendering. I speak of this in my other comments.

“Consensus matters locally”

Consider the limiting case where every LW user looks at the site and says "ugh, don't like that font, the serifs make it harder for me to read". Even if all those users are shockingly ignorant of typography, this is a case where if no one likes it, then it is ipso facto bad.

If every LW user looks at the site and says that, then we still can't conclude anything about serifs from that, because if all of those users have not the smallest ounce of typography or design expertise, then they don't know what the heck they like or dislike, serif-wise.

Let me be clear: I'm not saying that people can't tell whether they like or dislike a particular thing. I am saying that without domain knowledge, people can't generalize their preferences. Ok, so some text on their screen is hard for them to read. What's making it so? The fact that a font has serifs? Or maybe just that it's a particular kind of serif font? Or the font weight? Or the weight grade? Or the shape of the letterforms (how open the curves are, for instance, or the weight variability, perhaps due to which "optical size" is being used)? Or the color? Or the subpixel rendering settings? Or the kerning? Or the line spacing? Or the line length? Or the text-rendering CSS property setting? If you (the hypothetical-user you) don't know what most or all of those things are, then sure your preferences are real, but your opinion (generalized from those preferences) is worth jack squat.

In other words: "if no one likes it, then it is ipso facto bad"—yes, but what, exactly, is "it"? You're equivocating between two meanings, in that sentence! So, this is true:

“If no one likes <a particular specific thing>, then <that specific particular thing> is bad.”

Yes. Granted. But you seem to want to say something like:

“If no one likes <a particular specific thing>, then <things in a class that include that specific particular thing> are bad.”

But any particular thing belongs to many different classes, which intersect at the point defined by that thing! Obviously not all those classes are ipso facto bad, so which one(s) are we talking about?? We have no idea!

I am arguing that when the thing at issue is the experience of LW people, the experiences of LW people should not be dismissed.

Dismissed? No. Taken at anything even remotely resembling face value? Also no.

Come on, folks. This is just a rehash of the "people don't have direct access to their mental experience" debate. You know all of this already. Why suddenly forget it when it comes up in a new domain?

Comment author: gjm 19 September 2017 10:22:23PM *  0 points [-]

serif readability is superior to that of sans-serif

Do you have actual solid evidence for that? I'm guessing that if you did you'd have given it already in your earlier comments, and you haven't; but who knows? (One of the answers to that Quora question mentions a study that found a small advantage for serifs. It also remarks that the difference was not statistically significant, and calls into question the choice of typefaces used, and says it's not a very solid study. So I hope you have something better than that.)

On a ~96ppi display [...] the apparent [...] font size that you need to bring out the superiority of serif typefaces is no larger than the minimum size called for by other accessibility guidelines.

Again, I would be interested in more information about what evidence you have about the font size required "to bring out the superiority of serif typefaces". For the avoidance of doubt, that isn't a coded way of saying "I bet you're wrong"; I would just like to know what's known about this and how solidly. I do not have the impression that these issues are as settled as you are making them sound; but I may just be unaware of the relevant work.

What OS is this on?

One instance is Firefox on Windows; the other is Firefox on FreeBSD (which I expect is largely indistinguishable in this context from Firefox on Linux). I concur with your guess that the people responsible for LesserWrong have not done thorough testing of their site on a wide variety of platforms, though I would be surprised if no one involved uses either Windows or Linux.

Yes, it looks tiny and hard to read.

LesserWrong has what looks to me like a weird multiplicity of different text sizes. Some of the text is clearly too small (personally I like small text, but I am aware that my taste is not universally shared). However -- and I must stress again that here I am merely describing my own experience of the site -- if I go to, say, this post on the Unix box at my desk right now then (1) the size of the type at my typical viewing distance is about the same as that of a decently typeset paperback book at its typical viewing distance, and (2) I find the text ugly and harder to read than it should be because various features of the typeface (not only the serifs) are poorly represented -- for me, on that monitor, after rendering by my particular machine -- at the available resolution. (The text is very similar in size and appearance to that on readthesequences.com; LW2.0 appears to be using -- for me, etc., etc. -- ETBembo Roman LF at 19.2px actual size, whereas RTS is using GaramondPrmrPro at 21px actual size. ETBembo has a bigger x-height relative to its nominal size and most lowercase letters are almost exactly the same size in each.)
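Measurements like these can be reproduced from the browser console; a hypothetical helper (kept pure so it can run anywhere):

```javascript
// Extract the typography facts discussed above from a computed style object.
function typeInfo(cs) {
  return { family: cs.fontFamily, size: cs.fontSize, color: cs.color };
}

// In a browser console, something like:
//   typeInfo(getComputedStyle(document.querySelector('p')))
// might return { family: 'ETBembo', size: '19.2px', color: 'rgb(68, 68, 68)' }
// (rgb(68, 68, 68) is the off-black #444 mentioned elsewhere in the thread).
```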

Other issues, like color

Yup, agreed. But I would say the same about readthesequences.com even though its body text is black.

If every LW user looks at the site and says that, then we still can't conclude anything about serifs from that,

I agree. (Though it would, despite their hypothetical ignorance, be evidence. Someone who says "this text is hard to read because of the serifs" may be wrong, but I claim they are more likely to say it in the face of text that's hard to read because of its serifs than of text that's hard to read for some other reason.)

Perhaps I left too much implicit in my earlier comment, so let me try to remedy that. I firmly agree that the mere fact that some LW users believe some proposition about serifs in webpage text is perfectly compatible with the falsehood of that proposition. Even if it's quite a lot of LW users. Even if they have a lot of karma.

But the thing that actually matters here is not the general proposition about serifs, but a more specific question about the type used on LesserWrong. I wasn't equivocating between this and the general claim about serifs, nor was I unaware of the difference; I was deliberately attempting to redirect discussion to the more relevant point.

(Not that the general question isn't interesting; it is.)

[EDITED to add:] Of course much of what I wrote before was about the general proposition. Whether I agree with you about that depends on exactly what version of the general proposition we're discussing -- I take it you would agree with me that many are possible, and some might be true while others are false. In particular, I am somewhat willing to defend the claim that there are otherwise reasonable choices of text size for which typical seriffed typefaces make for a worse reading experience than typical sans-serif typefaces for people using 100ish-ppi displays, and that while this can be mitigated somewhat by very careful choice of serif typefaces and careful working around the quirks of the different text rendering users on different platforms will experience, selecting sans-serif typefaces instead may well be the better option. I am also willing to be convinced to stop defending that claim, if there is really good evidence against it.

Comment author: SaidAchmiz 19 September 2017 11:28:26PM 0 points [-]

Do you have actual solid evidence for that?

Not close at hand. You may reasonably consider my claim to be undefended for now. When I have the time, I'll try to put together a bit of a lit survey on this topic.

LesserWrong has what looks to me like a weird multiplicity of different text sizes. Some of the text is clearly too small (personally I like small text, but I am aware that my taste is not universally shared). However -- and I must stress again that here I am merely describing my own experience of the site -- if I go to, say, this post on the Unix box at my desk right now then (1) the size of the type at my typical viewing distance is about the same as that of a decently typeset paperback book at its typical viewing distance, and (2) I find the text ugly and harder to read than it should be because various features of the typeface (not only the serifs) are poorly represented -- for me, on that monitor, after rendering by my particular machine -- at the available resolution. (The text is very similar in size and appearance to that on readthesequences.com; LW2.0 appears to be using -- for me, etc., etc. -- ETBembo Roman LF at 19.2px actual size, whereas RTS is using GaramondPrmrPro at 21px actual size. ETBembo has a bigger x-height relative to its nominal size and most lowercase letters are almost exactly the same size in each.)

Right you are. The 16px size is what I saw on the front page.

Even on my machines, ET Book (source) does not seem to render as well as Garamond Premier Pro (in a browser).

Though it would, despite their hypothetical ignorance, be evidence. Someone who says "this text is hard to read because of the serifs" may be wrong, but I claim they are more likely to say it in the face of text that's hard to read because of its serifs than of text that's hard to read for some other reason.

I think this is literally true but relevantly false; specifically, I think this is false once you condition on the cause of the text's unreadability not being some gross and obvious circumstance (like, it's neon purple on a fuchsia background, or it's set at 2px size, etc.)

I think that someone who is ignorant of typography is no more likely to blame serifs in the case of the serifs being to blame than in the case of the text rendering or line length being to blame.

But the thing that actually matters here is not the general proposition about serifs, but a more specific question about the type used on LesserWrong. I wasn't equivocating between this and the general claim about serifs, nor was I unaware of the difference; I was deliberately attempting to redirect discussion to the more relevant point.

Noted. I was responding to the general claim.

As to the specific question, the matter of serifs is moot, because (as with all specific design decisions) it should be comprehensively user-tested and environment-tested, and as much user choice should be offered as possible.

Of course much of what I wrote before was about the general proposition. Whether I agree with you about that depends on exactly what version of the general proposition we're discussing -- I take it you would agree with me that many are possible, and some might be true while others are false.

Indeed.

In particular, I am somewhat willing to defend the claim that there are otherwise reasonable choices of text size for which typical seriffed typefaces make for a worse reading experience than typical sans-serif typefaces for people using 100ish-ppi displays … I am also willing to be convinced to stop defending that claim, if there is really good evidence against it.

Nope, the claim is reasonable. Websites where information density is more important than long-form readability, or where text comes in small chunks and a user is expected not to read straight through but to extract those chunks, may be like this. For that use case, a smaller point size of "body" text may be called for, and a well-chosen sans font may be a better fit.

LessWrong is not such a website, though a hypothetical LessWrong community wiki may be (or it may not be; it depends on what sort of content it mostly contains).

(Aside: I somewhat object to speaking of "typical" serif typefaces, because that's hard to resolve nowadays. I suspect that you know that, and I know that, but in a public discussion it pays to be careful with language like this.)

However:

very careful choice of […] typefaces and careful working around the quirks of the different text rendering users on different platforms will experience

… is always advisable, regardless of typographic or other design choices.

Comment author: Alicorn 21 September 2017 10:42:53PM 0 points [-]

Now that I'm looking at it more closely: Quoted text in comments does not seem sufficiently set off. It's slightly bigger and indented but it would be easy on casual inspection to mistake it for part of the same comment.

Comment author: Kaj_Sotala 21 September 2017 03:42:09PM 0 points [-]

I think the font feels okay (though not great) when it's "normal" writing, but text in italics gets hard to read.

Comment author: Manfred 15 September 2017 07:47:53PM 7 points [-]

I also agree that HPMOR might need to go somewhere other than the front page. From a strategic perspective, I somehow want to get the benefits of HPMOR existing (publicity, new people finding the community) without the drawbacks (it being too convenient to judge our ideas by association).

Comment author: Habryka 16 September 2017 11:13:55PM 4 points [-]

I am somewhat conflicted about this. HPMOR has been really successful at recruiting people to this community (HPMOR is the path by which I ended up here), and according to last year's survey about 25% of people who took the survey found out about LessWrong via HPMOR. I am hesitant to hide our best recruitment tool behind trivial inconveniences.

One solution to this that I've been thinking about is to have a separate section of the page filled with rationalist art and fiction, which would prominently feature HPMOR, Unsong and some of the other best rationalist fiction out there. I can imagine that section of the page itself getting a lot of traffic, since fiction is a lot easier to get into than the usually more dry reading on LW and SSC, and if we set up a good funnel between that part of the site and the main discussion we might get a lot of benefits, without needing to feature HPMOR prominently on the frontpage.

Comment author: gjm 19 September 2017 11:55:32AM 5 points [-]

I am hesitant to hide our best recruitment tool behind trivial inconveniences.

HPMOR is an effective tool for getting people to find out about Less Wrong. But someone who is at the front page of the site has already found Less Wrong.

a separate section of the page filled with rationalist art and fiction

A separate section of the site, I suggest. It doesn't need to be on the front page.

Comment author: DragonGod 17 September 2017 09:35:43AM 1 point [-]

One solution to this that I've been thinking about is to have a separate section of the page filled with rationalist art and fiction, which would prominently feature HPMOR, Unsong and some of the other best rationalist fiction out there. I can imagine that section of the page itself getting a lot of traffic, since fiction is a lot easier to get into than the usually more dry reading on LW and SSC, and if we set up a good funnel between that part of the site and the main discussion we might get a lot of benefits, without needing to feature HPMOR prominently on the frontpage.

I think this is a great solution.

Comment author: SaidAchmiz 16 September 2017 11:32:52PM 3 points [-]

Are you sure that the set of people that are being recruited to the community via HPMOR, and the set of people whom we most want to recruit into the community, have a lot of overlap? Or are these, perhaps, largely disjoint sets? What about the set of people whom we most want to recruit, and the set of people who are repelled by HPMOR? Might there not be quite a bit of overlap there?

Numbers aren't everything!

I agree with the idea of having a separate rationalist fiction page. (Perhaps we might even make it so separate that it's actually a whole other site! A page / site section of "links to rationality-themed fiction" wouldn't be out of place, however.)

Comment author: Habryka 17 September 2017 02:35:44AM 4 points [-]

"Are you sure that the set of people that are being recruited to the community via HPMOR, and the set of people whom we most want to recruit into the community, have a lot of overlap?"

I agree that this is a concern worth thinking about, though in this case I feel like I have pretty solid evidence that there is indeed a large amount of overlap. A lot of the best people that I've seen show up over the last few years seem to have been attracted by HPMOR (I would say more than 25%). It would be great to have better-formatted data on this; for a long time I've wanted someone to create a spreadsheet of a large set of people in the rationalist community and codify their origin stories. But until we have something like that, based on the data I have from various surveys, plus personal experience and being in a key position to observe where people are coming from (working with CFAR and CEA for the last few years), I am pretty sure that there is significant overlap.

Comment author: arundelo 15 September 2017 06:51:45PM *  5 points [-]

Has the team explicitly decided to call it "LessWrong" (no space) instead of "Less Wrong" (with a space)?

The spaced version has more precedent behind it. It's used by Eliezer and by most of the static content on lesswrong.com, including the <title> element.

Comment author: Habryka 16 September 2017 11:35:15PM 5 points [-]

Being aware that this is probably the most bikesheddy thing in this whole discussion, I've actually thought about this a bit.

From skimming a lot of early Eliezer posts, I've seen all three uses "LessWrong", "Lesswrong" and "Less Wrong" and so there isn't a super clear precedent here, though I do agree that "Less Wrong" was used a bit more often.

I personally really like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words. It makes it sound too much like it wants to refer to the original meaning of the words, instead of being a pointer towards the brand/organization/online-community, and while one might think that is actually useful, it usually just results in a short state of confusion when I read a sentence that has "Less Wrong" in it, because I just didn't parse it as the correct reference.

I am currently going with "LessWrong" and "LESSWRONG", which is what I am planning to use in the site navigation, logos and other areas of the page. If enough people object I would probably change my mind.

Comment author: Viliam 19 September 2017 11:04:59PM 5 points [-]

I think "Less Wrong" was an appropriate name at the beginning, when the community around the website was very small. Now that we have grown, both in user count and in content size, we could simply start calling ourselves "Wrong". One word, no problems with capitalization or spacing.

Comment author: Kaj_Sotala 21 September 2017 03:39:38PM *  1 point [-]

Calling ourselves "Wrong" or "Wrongers" would also fix the problem of "rationalist" sounding like we'd claim to be totally rational!

Comment author: gjm 21 September 2017 04:25:22PM 1 point [-]

On the other hand, I think this might come across as too "cute" and be felt insincere.

Comment author: gjm 19 September 2017 11:57:47AM 4 points [-]

I personally really [dis]like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words.

"LessWrong" also has two weirdly capitalized words, but it's one notch weirder because they've been stuck together.

I agree that this is a super-bikesheddy topic and will try to avoid getting into an argument about this, but I would like to register a strong preference for "Less Wrong" as the default version of the name.

Comment author: arundelo 17 September 2017 08:06:54PM *  6 points [-]

I just used Wei Dai's lesswrong_user script to download Eliezer's posts and comments (excluding, last I knew, those that don't show up on his "OVERVIEW" page e.g. for karma reasons). This went back to late December 2009 before the network connection got dropped.

I counted his uses of "LessWrong" versus "Less Wrong". (Of course I didn't count things such as the domain name "lesswrong.com", the English phrase "less wrong", or derived words like "LessWrongers".)

"LessWrong": 1 2 3* 4*

"Less Wrong": 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20* 21 22* 23 24 25 26

Entries with asterisks appear in both lists. Of his four uses of "LessWrong", three are modifying another word (e.g., "LessWrong hivemind").

(For what it's worth, "LessWrongers": 1 2; "Less Wrongians": 1.)

Comment author: ESRogs 17 September 2017 09:20:23AM 1 point [-]

I personally really like "Less Wrong", because it has two weirdly capitalized words, and I don't like brand names that are two words.

Did you mean to write, 'dislike' "Less Wrong"'?

Comment author: Habryka 17 September 2017 07:05:40PM 3 points [-]

Wow... yes. This is the second time in this comment thread that I forgot to add a "dis" in front of a word.

Comment author: Elo 17 September 2017 01:58:52AM 1 point [-]

Irrelevant as to which. Just pick one and stick to it.

Comment author: NancyLebovitz 15 September 2017 05:51:07PM 7 points [-]

I'm hoping there will be something like the feature at ssc to choose the time when the site considers comments to be new. It's frustrating to not be able to recover the pink borders on new comments on posts at LW.

Comment author: Benito 15 September 2017 09:25:17PM 10 points [-]

I agree - and we've built this feature! It's currently live on the beta site.

Comment author: Kaj_Sotala 15 September 2017 05:25:13PM 2 points [-]

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong:

I notice that this picture doesn't seem to include link posts. Will those still exist?

Comment author: Raemon 15 September 2017 06:27:16PM 8 points [-]

We have link post functionality, but I think we're trying to shift away from it, and instead more directly solve the problem of people-posting-to-other-blogs (both by making it a better experience to post things here in your personal section, and by making it possible to post things to your blog that are auto-imported into LW).

Comment author: NancyLebovitz 15 September 2017 04:19:49PM *  10 points [-]

Thank you for developing this.

I'm reminded of an annoying feature of LW 1.0. The search function was pretty awful. The results weren't even in reverse chronological order.

I'm not sure how important better search is, but considering your very reasonable emphasis on continuity of discussion, it might matter a lot.

Requiring tags while offering a list of standard tags might also help.

Comment author: Benito 15 September 2017 09:38:30PM *  6 points [-]

We all thought search was very important, and so tried to make it very efficient and effective. Try out the search bar on the new site.

Added: I realise that comment links are currently broken - oops! We'll fix that before open beta.

Comment author: NancyLebovitz 16 September 2017 12:24:28AM 1 point [-]

I've tried it and it's very fast. I haven't come up with good ideas for testing it yet.

Comment author: ingres 15 September 2017 09:33:51PM *  1 point [-]

Better search is paramount in my opinion. Part of how academic institutions maintain a shared discussion is through a norm of checking for previous work in a space before embarking on new adventures. Combined with strong indexing this norm means that things which could be like so many forgotten Facebook discussions get many chances to be seen and read by members of the academic community.

http://www.overcomingbias.com/2007/07/blogging-doubts.html

Comment author: Habryka 16 September 2017 11:10:16PM 1 point [-]

Yeah, we do now have much better word-based search, but also still feel that we want a way to archive content on the site into more hierarchical or tag-based structures. I am very open to suggestions of existing websites that do this well, or maybe even physical library systems that work here.

I've been reading some information architecture textbooks (http://shop.oreilly.com/product/0636920034674.do) on this, but still haven't found a great solution or design pattern that doesn't feel incredibly cumbersome and adds a whole other dimension to the page that users need to navigate.

Comment author: SaidAchmiz 17 September 2017 12:19:42AM 3 points [-]

… [we] still feel that we want a way to archive content on the site into more hierarchical or tag-based structures. I am very open to suggestions of existing websites that do this well…

This is a slightly odd comment, if only because "hierarchical or tag-based structures" describes almost all extant websites that aggregate / archive / collect content in any way! You would, I think, be somewhat hard-pressed to find a site that does not use either a hierarchical or a tag-based structure (or, indeed, both!).

But here are some concrete examples of sites that both do this well, and where it plays a critical role:

  • Wikipedia. MediaWiki Categories incorporate both tag-based and hierarchical elements (subcategories).
  • Other Wikis. TVTropes, which uses a modified version of the PmWiki engine, is organized primarily by placing all pages into one or more indexes, along many (often orthogonal) categories. The standard version of PmWiki offers several forms of hierarchical (groups, SiteMapper) and tag-based (Categories, pagelists in general) structures and navigation schemes.
  • Blogs, such as Wordpress. Tags are a useful way to find all posts on a subject.
  • Tumblr. I have much beef with Tumblr, but tags are a sensible feature.
  • Pinboard. Tags, including the ability to list intersections of tag-based bookmark sets, are key to Pinboard's functionality.
  • Forums, such as ENWorld. The organization is hierarchical (forum groups contain forums contain subforums contain threads contain posts) and tag-based (threads are prefixed with a topic tag). You can search by hierarchical location or by tag(s) or by text or by any combination of those.
Comment author: Habryka 17 September 2017 02:51:10AM 1 point [-]

Thanks for the recommendations!

"This is a slightly odd comment, if only because "hierarchical or tag-based structures" describes almost all extant websites that aggregate / archive / collect content in any way!"

Well, the emphasis here was on the "more". I.e. there are more feed-based architectures, and there are more taxonomy/tagging-based architectures. There is a spectrum: reddit very much leans towards the feed end, which is where LessWrong has historically been, and wikis very much lean towards the taxonomy end. I feel we want to be somewhere in between, but I don't know where yet.

Comment author: morganism 18 September 2017 11:19:53PM 1 point [-]

How about a circular hierarchy, with different color highlights for posts, comments, articles, wiki, tags, and links.

http://yed.yworks.com/support/manual/layout_circular.html

You could have upvotes contribute to weighting, and just show a tag-cloud-like connection diagram.

Comment author: SaidAchmiz 17 September 2017 04:08:01AM *  3 points [-]

Certainly there is variation, but I actually don't think that viewing that variation as a unidimensional spectrum is correct. Consider:

I have a blog. It functions just like a regular (wordpress) blog—it's sequential, it even has the usual RSS feed, etc. But it runs on pmwiki. So every page is a wikipage (and thus pages are organized into groups; they have tags and are viewable by group, by tag, by custom pagelist, etc.)

So what is that? Feed-based, or tag-based, or hierarchical, or... what? I think these things are much more orthogonal than you give them credit for. Tag-based structure can overlay hierarchical structure without affecting it; custom pagelist/index structure, ditto; and you can serve anything you like as a feed by simply applying an ordering (by timestamp is the obvious and common one, but there are many other possibilities), and you can have multiple feeds, custom feeds, dynamic feeds, etc.; you can subset (filter) in various ways…

(Graph-theoretic interpretations of this are probably obvious, but if anyone wants me to comment on that aspect of it, I will)

P.S.: I think reddit is a terrible model, quite honestly. The evolution of reddit, into what it is today, makes it fairly obvious (to me, anyway) that it's not to be emulated.

Edit: To be clear, the scenario above isn't hypothetical—that is how my actual blog works.

Edit2: Consider also https://readthesequences.com. (It, too, runs on pmwiki.) There's a linear structure (it's a book; the linear navigation UI takes you through the content in order), but it would obviously be trivial to apply tags to pages, and the book/sequence structure is hierarchical already.

Comment author: 9eB1 17 September 2017 03:13:06AM 0 points [-]

That is very interesting. An exception might be "Google search pages." Not only is there no hierarchical structure, there is also no explicit tag structure and the main user engagement model is search-only. Internet Archive is similar but with their own stored content.

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

Comment author: SaidAchmiz 17 September 2017 04:11:51AM 1 point [-]

With respect to TV Tropes, I'd note that while it is nominally organized according to those indexes, the typical usage pattern is as a sort of pure garden path in my experience.

I have encountered a truly shocking degree of variation in how people use TVTropes, to the extent that I've witnessed several people talking to each other about this who were each in utter disbelief (to the point of anger) that the other person's usage pattern is a real thing.

Generalizations about TVTropes usage patterns are extremely fraught.

Comment author: 9eB1 17 September 2017 02:47:31PM 0 points [-]

Sure.

Since then I've thought of a couple more sites that are neither hierarchical nor tag-based: Facebook and eHow-style sites.

There is another pattern that is neither hierarchical, tag-based nor search-based, which is the "invitation-only" pattern of a site like pastebin. You can only find content by referral.

Comment author: SaidAchmiz 17 September 2017 05:56:55PM 1 point [-]

It is therefore not a coincidence that Facebook is utterly terrible as a content repository. (I am unfamiliar with eHow.)

Comment author: ozymandias 15 September 2017 03:55:59PM *  13 points [-]

Thank you for making this website! It looks really good and like someplace I might want to crosspost to.

If I may make two suggestions:

(1) It doesn't seem clear whether Less Wrong 2.0 will also have a "no politics" norm, but if it doesn't I would really appreciate a "no culture war" tag which alerts the moderators to nuke discussion of race, gender, free speech on college campuses, the latest outrageous thing [insert politician here] did, etc. I think that culture war stuff is salacious enough that people love discussing it in spite of its obvious unimportance, and it would be good to have a way to dissuade that. Personally, I've tended to avoid online rationalist spaces where I can't block people who annoy me, because culture war stuff keeps coming up and when interacting with certain people I get defensive and upset and not in a good frame for discussion at all.

(2) Some inconspicuous way of putting in assorted metadata (content warnings, epistemic statuses, that sort of thing) so that interested people can look at them but they are not taking up the first 500 words of the post.

Comment author: Viliam 19 September 2017 10:26:39PM *  1 point [-]

A big problem with culture wars is that they usually derail debates on other topics. At least my reaction to seeing them is often like: "if you want to debate a different topic, make your own damned thread!"

For example, I would be okay with having a debate about <insert topic>, as long as it happens in a thread called "<the topic>". If someone is not interested, they can ignore the thread. People can upvote or downvote the thread to signal how they feel about the importance of debating the topic on LW.

But when such debates start in a different topic... well, sometimes it seems like there should be no problem with having some extra comments in a thread (the comment space is unlimited, you can just collapse the whole subthread), but the fact is that it still disrupts attention of people who would otherwise debate about the original topic.

There are also other aspects, like people becoming less polite, becoming obsessed with making their faction win, etc.

And there's the fact that having political debates on a website sometimes attracts people who come only for the political debates. I don't usually have a problem with LW regulars discussing X, but I have a problem with fans of X coming to LW to support their faction.

Not sure what to conclude, though. Banning political debates completely feels like going too far. I would prefer having the political debates separately from other topics. But separate political debates is probably what would most attract the fans of X. (One quick idea is to make it so that positive karma gained in explicitly political threads is not counted towards the user total, but the negative one is. Probably a bad idea anyway, just based on prior probabilities. Or perhaps to prevent users younger than 3 months from participating, i.e. both commenting and voting in the political threads.)

Comment author: Vaniver 15 September 2017 09:22:40PM 8 points [-]

I expect the norm to be "no culture war" and "no politics" but there to be some flexibility. I don't want to end up with a LW where, say, this SSC post would be banned, and banning discussions of the rationality community that might get uncomfortable seems bad, and so on, but also I don't want to end up with a LW that puts other epistemic standards in front of rationality ones. (One policy we joked about was "no politics, unless you're Scott," and something like allowing people to put it on their personal page but basically never promoting it accomplishes roughly the same thing.)

Comment author: ozymandias 16 September 2017 01:25:38AM 7 points [-]

Sorry, this might not be clear from the comment, but as a prospective writer I was primarily thinking about the comments on my posts. Even if I avoid culture war stuff in my posts, the comment section might go off on a tangent. (This is particularly a concern for me because of course my social-justice writing is the most well-known, so people might be primed to bring it up.) On my own blog, I tend to ban people who make me feel scared and defensive; if I don't have this capability and people insist on talking about culture-war stuff in the comments of my posts anyway, being on LW 2.0 will probably be unpleasant and aversive enough that I won't want to do it. Of course, I'm just one person and it doesn't make sense to set policy based on luring me in specific; however, I suspect this preference is common enough across political ideologies that having a way to accommodate it would attract more writers.

Comment author: Vaniver 16 September 2017 01:50:37AM 3 points [-]

Got it; I expect the comments to have basically the same rules as the posts, and for you to be able to respond in some low-effort fashion to people derailing posts with culture war (by, say, just flagging a post and then the Sunshine Regiment doing something about it).

Comment author: philh 15 September 2017 05:25:16PM 7 points [-]

I would really appreciate a "no culture war" tag which alerts the moderators to nuke discussion of race, gender, free speech on college campuses, the latest outrageous thing [insert politician here] did, etc.

To clarify: you want people to be able to apply this tag to their own posts, and in posts with it applied, culture war discussion is forbidden?

I approve of this.

I also wonder if it would be worth exploring a more general approach, where submitters have some limited mod powers on their own posts.

Comment author: Jiro 19 September 2017 06:06:47PM 1 point [-]

What do you do to people who

1) include culture war material in their own posts, and use this to prevent anyone from criticizing them, or

2) include things in their own posts that are not culture war, but to which a cultural war reference is genuinely relevant (sometimes to the point where they are saying something that can't be properly refuted without one)?

Comment author: philh 20 September 2017 11:24:43AM 0 points [-]

Play it by ear, but my instinctive reaction is to downvote (1). Options for (2) include "downvote", "ignore", and "try to tactfully suggest that you think they've banned discussion that would be useful, and between you try to work out a solution to this problem". Maybe they'd allow someone to create a CW-allowed discussion thread for that post and then summarise the contents of that thread, so they don't actually have to read it.

It partly depends whether their posts are attracting attention or not.

Comment author: ozymandias 15 September 2017 06:42:00PM 5 points [-]

Yes, that was my intent.

I believe the plan is to eventually allow some trusted submitters to e.g. ban people from commenting on their posts, but I would hope the "no culture war" tag could be applied even by people whom the mod team doesn't trust with broader moderation powers.

Comment author: Bakkot 15 September 2017 05:22:13PM 15 points [-]

I would strongly support just banning culture war stuff from LW 2.0. Those conversations can be fun, but they require disproportionately large amounts of work to keep the light / heat ratio decent (or indeed > 0), and they tend to dominate any larger conversation they enter. Besides, there's enough places for discussion of those topics already.

(For context: I moderate /r/SlateStarCodex, which gets several thousand posts in its weekly culture war thread every single week. Those discussions are a lot less bad than culture war discussions on the greater internet, I think, and we do a pretty good job keeping discussion to that thread only, but maintaining both of these requires a lot of active moderation, and the thread absolutely affects the tone of the rest of the subreddit even so.)

Comment author: ozymandias 15 September 2017 06:55:16PM *  5 points [-]

I'm not sure if I agree with banning it entirely. There are culture-war-y discussions that seem relevant to LW 2.0: for instance, people might want to talk about sexism in the rationality community, free speech norms, particular flawed studies that touch on some culture-war issue, dating advice, whether EAs should endorse politically controversial causes, nuclear war as existential risk, etc.

OTOH a policy that people should post this sort of content on their own private blogs seems sensible. There are definite merits in favor of banning culture war things. In addition to what you mention, it's hard to create a consensus about what a "good" culture war discussion is. To pick a fairly neutral example, my blog Thing of Things bans neoreactionaries on sight while Slate Star Codex bans the word in the hopes of limiting the amount they take over discussion; the average neoreactionary, of course, would strongly object to this discriminatory policy.

Comment author: Bakkot 15 September 2017 09:21:37PM 3 points [-]

I think - I hope - we could discuss most of those without getting into the more culture war-y parts, if there were sufficiently strong norms against culture war discussions in general.

Maybe just opt-in rather than opt-out would be sufficient, though. That is, you could explicitly choose to allow CW discussions on your post, but they'd be prohibited by default.

Comment author: Jiro 19 September 2017 06:04:08PM 0 points [-]

Please, no.

The SSC subreddit cultural war thread is basically run under the principle of "make the cultural war thread low quality so people will go away". All that gets you is a cultural war thread that is low quality.

Comment author: Regex 15 September 2017 04:22:08PM 4 points [-]

How culture war stuff is dealt with on the various discord servers is having a place to dump it all. This is often hidden to begin with and opt-in only, so people only become aware of it when they start trying to discuss it.

Comment author: Habryka 16 September 2017 10:55:28PM 4 points [-]

I've also been thinking quite a bit about certain tags on posts requiring a minimum karma for commenters. The minimum karma wouldn't have to be too high (e.g. 10-20 karma might be enough), but it would keep out people who only sign up to discuss highly political topics.

Comment author: DragonGod 15 September 2017 03:45:31PM 2 points [-]

This sounds very promising. The UI looks like a site from 2017 as well (as opposed to the previous 2008 feel). The design is very aesthetically pleasing.

I'm very excited about the personal blog feature (posting our articles to our page is basically like a blog).

How long would the open beta last?

Comment author: Manfred 15 September 2017 07:49:32PM 1 point [-]

The only thing I don't like about the "2017 feel" is that it sometimes feels like you're just adrift in the text, with no landmarks. Sometimes you just want guides to the eye, and landmarks to keep track of how far you've read!

Comment author: IlyaShpitser 15 September 2017 02:28:41PM *  4 points [-]

(a) Thanks for making the effort!

(b)

"I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation."

This won't work, for the same reason PageRank did not work: you can game it by collusion. Communities are excellent at collusion. I think the important thing to do is making toxic people (defined in a socially constructed way as people you don't want around) go away. Ranking posts from best to worst in folks who remain I don't think is that helpful. People will know quality without numbers.
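(For concreteness, the eigendemocracy proposal quoted above boils down to power iteration on the who-upvotes-whom graph, so each user's weight comes from the weight of those who upvote them. A toy sketch, not any actual implementation:)

```python
def eigenkarma(upvotes, damping=0.85, iters=50):
    """upvotes[u] = list of users that u upvoted.

    Returns a PageRank-style weight per user: start uniform, then
    repeatedly let each voter pass a damped share of their own weight
    to the users they upvoted.
    """
    users = list(upvotes)
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in users}
        for voter, targets in upvotes.items():
            if targets:
                share = damping * rank[voter] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # voter who upvoted nobody: spread their weight evenly
                for u in users:
                    new[u] += damping * rank[voter] / n
        rank = new
    return rank
```

The collusion worry is visible in the structure: a clique whose members only upvote each other keeps the weight it receives circulating internally.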

Comment author: ESRogs 17 September 2017 09:35:57AM 7 points [-]

Ranking posts from best to worst in folks who remain I don't think is that helpful. People will know quality without numbers.

Ranking helps me know what to read.

The SlateStarCodex comments are unusable for me because nothing is sorted by quality, so what's at the top is just whoever had the fastest fingers and least filter.

Maybe this isn't a problem for fast readers (I am a slow reader), but I find automatic sorting mechanisms to be super useful.

Comment author: Kaj_Sotala 17 September 2017 10:55:12AM 7 points [-]

This. SSC comments I basically only read if there are very few of them, because of the lack of karma; on LW even large discussions are actually readable, thanks to karma sorting.

Comment author: IlyaShpitser 17 September 2017 03:13:13PM 1 point [-]

That's an illusion of readability though; it's only sorting in a fairly arbitrary way.

Comment author: Dustin 17 September 2017 05:50:15PM *  5 points [-]

Over the years I've gone through periods of time where I can devote the effort/time to thoroughly reading LW and periods of time where I can basically just skim it.

Because of this I'm in a good position to judge the reliability of karma in surfacing content for its readability.

My judgement is that karma strongly correlates with readability.

Comment author: ESRogs 17 September 2017 05:33:12PM 8 points [-]

As long as it's not anti-correlated with quality, it helps.

It doesn't matter if the top comment isn't actually the very best comment. So long as the system does better than random, I as a reader benefit.

Comment author: Habryka 17 September 2017 12:02:18AM *  10 points [-]

"This won't work, for the same reason PageRank did not work"

I am very confused by this. Google's search vastly outperformed its competitors with PageRank and is still using a heavily tweaked version of PageRank to this day, delivering by far the best search on the market. It seems to me that PageRank should widely be considered to be the most successful reputation algorithm that has ever been invented, having demonstrated extraordinary real-world success. In what way does it make sense to say "PageRank did not work"?

Comment author: ZorbaTHut 17 September 2017 12:19:54PM 4 points [-]

FWIW, I worked at Google about a decade ago, and even then, PageRank was basically no longer used. I can't imagine it's gotten more influence since.

It did work, but I got the strong sense that it no longer worked.

Comment author: IlyaShpitser 17 September 2017 01:08:59AM *  4 points [-]

Google is using a much more complicated algorithm that is constantly tweaked, and is a trade secret -- precisely because as soon as it became profitable to do so, the ecosystem proceeded to game the hell out of PageRank.

Google hasn't been using PageRank-as-in-the-paper for ages. The real secret sauce behind Google is not eigenvalues, it's the fact that it's effectively anti-inductive, because the algorithm isn't open and there is an army of humans looking for attempts to game it, and modifying it as soon as such an attempt is found.

Comment author: Wei_Dai 17 September 2017 02:04:03AM 8 points [-]

Given that, it seems equally valid to say "this will work, for the same reason that PageRank worked", i.e., we can also tweak the reputation algorithm as people try to attack it. We don't have as much resources as Google, but then we also don't face as many attackers (with as strong incentives) as Google does.

I personally do prefer a forum with karma numbers, to help me find quality posts/comments/posters that I would likely miss or have to devote a lot of time and effort to sift through.

Comment author: IlyaShpitser 17 September 2017 03:05:01PM 2 points [-]

It's not PageRank that worked, it's anti-induction that worked. PageRank did not work, as soon as it faced resistance.

Comment author: John_Maxwell_IV 18 September 2017 07:54:42AM 0 points [-]

You really are a "glass half empty" kind of guy, aren't you?

Comment author: IlyaShpitser 18 September 2017 01:51:58PM *  3 points [-]

I am not really trying to be negative for the sake of being negative here, I am trying to correctly attribute success to the right thing. People get "halo effect" in their head because "eigenvalues" sound nice and clean.

Reputation systems, though, aren't the type of problem that linear algebra will solve for you. And this isn't too surprising. People are involved with reputation systems, and people are far too complex for linear algebra to model properly.

Comment author: Lumifer 19 September 2017 07:36:36PM 3 points [-]

people are far too complex for linear algebra to model properly

True, but not particularly relevant. Reputation systems like karma will not solve the problem of who to trust or who to pay attention to -- but they are not intended to. Their task is to be merely helpful to humans navigating the social landscape. They do not replace networking, name recognition, other reputation measures, etc.

Comment author: Vaniver 15 September 2017 09:15:54PM 5 points [-]

Oli and I disagree somewhat on voting systems. I think you get a huge benefit from doing voting at all, a small benefit from doing simple weighted voting (including not allowing people below ~10 karma to vote), and then there's not much left to gain from complicated vote-weighting schemes (like eigenkarma and so on). Part of this is because more complicated systems don't necessarily have more complicated gaming mechanics.

There are empirical questions involved; we haven't looked at, for example, the graph of what karma converges to if you use my simplistic vote weighting scheme vs. an eigenkarma scheme, but my expectation is a very high correlation. (I'd be very surprised if it were less than .8, and pretty surprised if it were less than .95.)

I expect the counterfactual questions--"how would Manfred have voted if we were using eigenkarma instead of simple aggregation?"--to not make a huge difference in practice, altho they may make a difference for problem users.
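The simple weighted scheme being described could be sketched like this (the threshold and weights here are illustrative assumptions, not actual site values):

```python
VOTE_THRESHOLD = 10  # illustrative: users below this karma can't vote

def vote_weight(voter_karma):
    if voter_karma < VOTE_THRESHOLD:
        return 0                              # too new to vote
    return 2 if voter_karma >= 1000 else 1    # simple two-tier weighting

def score(votes):
    """votes: list of (voter_karma, direction) with direction = +1 or -1."""
    return sum(direction * vote_weight(karma) for karma, direction in votes)
```

For example, `score([(5, 1), (50, 1), (2000, -1)])` gives -1: the 5-karma upvote is ignored, the 50-karma upvote counts +1, and the 2000-karma downvote counts -2.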

Comment author: IlyaShpitser 15 September 2017 09:18:40PM 1 point [-]

What's the benefit? Also, what's the harm? (to you)

Comment author: Vaniver 15 September 2017 11:17:13PM 8 points [-]

Main benefits to karma are feedback for writers (both informative and hedonic) and sorting for attention conservation. Main costs are supporting the underlying tech, transparency / explaining the system, and dealing with efforts to game it.

(For example, if we just clicked a radio button and we had eigenkarma, I would be much more optimistic about it. As is, there are other features I would much rather have.)

Comment author: SaidAchmiz 15 September 2017 07:31:16PM 4 points [-]

Strongly seconded. I think there should be no karma system.

I commented on LW 2.0 itself about another reason why a karma system is bad.

Comment author: IlyaShpitser 15 September 2017 08:00:32PM 1 point [-]

Yeah I agree that people need to weigh experts highly. LW pays lipservice to this, but only that -- basically as soon as people have a strong opinion experts get discarded. Started with EY.

Comment author: Vaniver 16 September 2017 01:42:41AM 2 points [-]

My impression of how to do this is to give experts an "as an expert, I..." vote. So you could see that a post has 5 upvotes and a beaker downvote, and say "hmm, the scientist thinks this is bad and other people think it's good."

Multiple flavors lets you separate out different parts of the comment in a way that's meaningfully distinct from the Slashdot-style "everyone can pick a descriptor;" you don't want everyone to be able to say "that's funny," just the comedians.

This works somewhat better than simple vote weighting because it lets people say whether they're doing this as just another reader or 'in their professional capacity;' I want Ilya's votes on stats comments to be very highly weighted and I want his votes on, say, rationality quotes to be weighted roughly like anyone else's.

Of course, this sketch has many problems of its own. As written, I lumped many different forms of expertise into "scientist," and you're trusting the user to vote in the right contexts.
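A sketch of how such typed votes might be stored and tallied (names and structure hypothetical): each vote carries a flavor, expert flavors are restricted by a hand-set grant, and tallies are kept per flavor rather than summed together, so a post can show "5 upvotes and a beaker downvote".

```python
from collections import Counter

# Hand-set grants: which expert flavors each user may vote with.
expert_grants = {"ilya": {"statistics"}}

def tally(votes):
    """votes: list of (user, flavor, direction) with direction = +1/-1.

    Returns per-flavor totals; expert votes from users without the
    matching grant are simply dropped.
    """
    totals = Counter()
    for user, flavor, direction in votes:
        if flavor != "reader" and flavor not in expert_grants.get(user, ()):
            continue  # not entitled to cast this expert flavor
        totals[flavor] += direction
    return dict(totals)
```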

Comment author: SaidAchmiz 16 September 2017 04:01:04AM 3 points [-]

If you have a more-legible quality signal (in the James C. Scott sense of "legibility"), and a less-legible quality signal, you will inevitably end up using the more-legible quality signal more, and the less-legible one will be ignored—even if the less-legible one is tremendously more accurate and valuable.

Your suggestion is not implausible on its face, but the devil is in the details. No doubt you know this, as you say "this sketch has many problems of its own". But these details and problems conspire to make such a formalized version of the "expert's vote" either substantially decoupled from what it's supposed to represent, or not nearly as legible as the simple "people's vote". In the former case, what's the point? In the latter case, the result is that the "people's vote" will remain much more influential on visibility, ranking, inclusion in canon, contribution to a member's influence in various ways, and everything else you might care to use such formalized rating numbers for.

The question of reputation, and of whose opinion to trust and value, is a deep and fundamental one. I don't say it's impossible to algorithmize, but if possible, it is surely quite difficult. And simple karma (based on unweighted votes) is, I think, a step in the wrong direction.

Comment author: ingres 16 September 2017 04:17:28AM 1 point [-]

As far as an algorithm for reputation goes, academia seems to have something that sort of scales in the form of citations and co-authors:

http://www.overcomingbias.com/2017/08/the-problem-with-prestige.html

It's certainly a difficult problem however.

Comment author: IlyaShpitser 17 September 2017 02:05:36AM *  0 points [-]

Vaniver, I sympathize with the desire to automate figuring out who experts are via point systems, but consider that even in academia (with a built-in citation pagerank), people still rely on names. That's evidence about pagerank systems not being great on their own. People game the hell out of citations.


Probably should weigh my opinion of rationality stuff quite low, I am neither a practitioner nor a historian of rationality. I have gotten gradually more pessimistic about the whole project.

Comment author: Vaniver 19 September 2017 06:34:40PM 0 points [-]

Vaniver, I sympathize with the desire to automate figuring out who experts are via point systems

To be clear, in this scheme whether or not someone had access to the expert votes would be set by hand.

Comment author: Manfred 15 September 2017 05:27:50PM 8 points [-]

I think votes have served several useful purposes.

Downvotes have been a very good way of enforcing the low-politics norm.

When there's lots of something, you often want to sort by votes, or some ranking that mixes votes and age. Right now there aren't many comments per thread, but if there were 100 top-level comments, I'd want votes. Similarly, as a new reader, it was very helpful to me to look for old posts that people had rated highly.

Comment author: cousin_it 15 September 2017 10:30:09AM *  9 points [-]

What will happen with existing LW posts and comments? I feel strongly that they should all stay accessible at their old URLs (though perhaps with new design).

Comment author: Habryka 15 September 2017 11:52:19AM 23 points [-]

All old links will continue working. I've put quite a bit of effort into that, and this was one of the basic design requirements we built the site around.

Comment author: Vaniver 15 September 2017 09:11:38PM 11 points [-]

"Basic design requirements" seems like it's underselling it a bit; this was Rule 0 that would instantly torpedo any plan where it wasn't possible.

It's also worth pointing out that we've already done one DB import (lesserwrong.com has all the old posts/comments/etc. as of May of this year) and will do another DB import of everything that's happened on LW since then, so that LW moving forward will have everything from the main site and the beta branch.

Comment author: Jiro 19 September 2017 06:21:53PM 0 points [-]

I just tried lesserwrong.com. Neither IE nor Firefox would do anything when I clicked "login". I had to use Chrome. Even using Chrome, I tried to sign in and had no feedback when I used a bad user and password, making it unclear whether the values were even submitted to the server.

Comment author: ChristianKl 20 September 2017 04:22:57PM 0 points [-]

That sounds like it just isn't a development priority to give feedback when there's a bad user/password.

Comment author: Elo 19 September 2017 07:16:01PM 0 points [-]

I believe that is on purpose. Login is not open yet.

Comment author: Jiro 19 September 2017 08:34:04PM 0 points [-]

People are clearly posting things there that postdate the DB import, so they must be logging in. Also, that doesn't explain it working better on Chrome than on other browsers.

Comment author: SaidAchmiz 19 September 2017 08:59:28PM 1 point [-]

A private beta has been ongoing, clearly.

Comment author: Jiro 20 September 2017 04:02:41PM 0 points [-]

That can't explain it, unless the private beta is accessed by going somewhere other than lesserwrong.com. The site isn't going to know that someone is a participant in the private beta until they've logged in. And the problems I described happen prior to logging in.

Comment author: Raemon 20 September 2017 07:00:56PM 0 points [-]

Not 100% sure I understand your description, but currently the expected behavior when you attempt to log in (if you're not already part of the beta) is that nothing happens when you click "submit" (although there will be an error message in the browser console).

This is simply because we haven't gotten to it yet, but it's something we should make sure to fix before the open-beta launch later today, so people have a clear sense of whether login is working.

Comment author: Jiro 20 September 2017 07:35:05PM 0 points [-]

And the expected behavior when using IE or Firefox is that you can't even get to the login screen? I find that unlikely.

Comment author: SaidAchmiz 20 September 2017 05:58:38PM 0 points [-]

Good point!

In that case, I'm not sure what the problem is (though I, too, see a problem similar to yours, now that I've tried it in a different browser (Firefox 55.0.3, Mac OS 10.9) than my usual one (Chrome)). I suspect, as another commenter said, that login just isn't fully developed yet.

Comment author: Kaj_Sotala 15 September 2017 10:18:18AM *  6 points [-]

Thank you for doing this!

Not a comment on the overview, but on LW2.0 itself: are you intentionally de-emphasizing comment authorship by making the author names show up in a smaller font than the text of the comment? Reading the comments under the roadmap page, it feels slightly annoying that the author names are small enough that my brain ignores them instead of registering them automatically, and then I have to consciously re-focus my attention to see who wrote a comment, each time that I read a new comment.

Comment author: Habryka 16 September 2017 11:23:17PM 4 points [-]

That was indeed intentional, but after playing around with it a bit, I actually think it had a negative effect on the skimmability of comment threads, and I am planning to try out a few different solutions soon. In general I feel that I want to increase the spacing between different comments and make it easier to identify the author of a comment.

Comment author: Elo 17 September 2017 01:56:33AM 1 point [-]

I think I would prefer information density. I am annoyed by the low comment density of the classic MyBB-type forum and prefer the denser "Facebook" style, but going that dense will shorten comments. So a balance close to the current density would be my suggestion.

Comment author: casebash 15 September 2017 09:54:04AM *  7 points [-]

Firstly, well done on all your hard work! I'm very excited to see how this will work out.

Secondly, I know that this might be best after the vote, but don't forget to take advantage of community support.

I'm sure that if you set up a Kickstarter or similar, that people would donate to it, now that you've proven your ability to deliver.

I also believe that, given how many programmers we have here, many people will want to contribute to the codebase. My understanding was that this wasn't really happening before: a) because the old codebase was messy and extremely difficult to get up and running, and b) because it wasn't clear who to talk to if you wanted to know whether your changes were likely to be approved.

It looks like a) has been solved, if you also improve b), then I expect a bunch of people will want to contribute.

Comment author: ingres 15 September 2017 10:40:04PM 1 point [-]

I'm going to write a top level post at some point (hopefully soon) but in the meantime I'd like to suggest the content in the original post and comments be combined into a wiki. There's a lot of information here about LW 2.0 which I wasn't previously aware of and significantly boosted my confidence in the project.

Comment author: Habryka 16 September 2017 11:26:35PM 1 point [-]

A wiki feels too high of a barrier to entry to me, though maybe there are some cool new wiki softwares that are better than what I remember.

For now I feel like having an about page on LessWrong that has links to all the posts, and tries to summarize the state of discussion and information is the better choice, until we reach the stage where LW gets a lot more open-source engagement and is being owned more by a large community again.

Comment author: ingres 17 September 2017 12:32:21AM 3 points [-]

Seconding SaidAchmiz on pmwiki, it's what we use for our research project on effective online organizing and it works wonders. It's also how I plan to host and edit the 2017 survey results.

As far as the high barrier to entry goes, I'll repeat here my previous offer to set up a high quality instance of pmwiki and populate it with a reasonable set of initial content - for free. I believe this is sufficiently important that if the issue is you just don't have the capacity to get things started I'm fully willing to help on that front.

Comment author: SaidAchmiz 16 September 2017 11:51:22PM 3 points [-]

http://www.pmwiki.org/ is a cool new wiki softwares that is better than most things

Comment author: John_Maxwell_IV 15 September 2017 09:00:11AM *  12 points [-]

Sounds great!

Is there anything important I missed

This analysis found that LW's most important issue is lack of content. I think there are two models that are most important here.

There's the incentives model: making it so good writers have a positive hedonic expectation for creating content. There's a sense in which an intellectual community online is much more fragile than an intellectual community in academia: academic communities can offer funding, PhDs, etc. whereas internet discussion is largely motivated by pleasure that's intrinsic to the activity. As a concrete example, the way Facebook lets you see the name of each person who liked your post is good, because then you can take pleasure in each specific person who liked it, instead of just knowing that X strangers somewhere on the internet liked it. Contrast with academia, which plods on despite frequently being hellish.

And then there's the chicken-and-egg model. Writers go where the readers are and readers go where the writers are. Interestingly, sometimes just 1 great writer can solve this problem and bootstrap a community: both Eliezer and Yvain managed to create communities around their writing single-handedly.

The models are intertwined, because having readers is a powerful incentive for writers.

My sense is that LW currently performs poorly according to both models, and although there's a lot of great stuff here, it's not clear to me that any of the proposed actions are going to attack either of these issues head on.

Comment author: Habryka 16 September 2017 11:07:09PM 5 points [-]

Thanks! :)

I agree with the content issue, and ultimately having good content on the page is one of the primary goals that guided all the modeling in the post. Good content is downstream of having a functioning platform and an active community that attracts interesting people and has some pointers on how to solve interesting problems.

I like your two models. Let me think about both of them...

The hedonic incentive model is one that I tend to use quite often, especially when it comes to the design of the page, but I didn't go into it much in this post because doing so would inevitably involve a much larger amount of detail. I've mentioned "making sure things are fun" a few times, but going into the details of how I am planning to achieve this would require talking about the design of buttons, animations and notification systems, about each of which I could write a whole separate 8000-word post filled with my own thoughts. That said, it is also a ton of fun for me, and if anyone ever wants to discuss the details of any design decision on the page, I am super happy to do that.

I do feel that there is still a higher level of abstraction in the hedonic incentives model, which in game design would be referred to as "the core game loop" or "the core reward loop". What is the basic sequence of actions that a user executes when they come to your page that reliably results in positive hedonic reward? (On Facebook there are a few of those, but the most dominant one is "go to frontpage, see you have new notifications, click notifications, see that X people have liked your content".) I don't think I currently have a super clear answer to this. I do feel like I have an answer on a System 1 level, but it isn't something I have spent enough time thinking about or clarified very much, and this comment made me realize that it's a thing I want to pay more attention to.

We hope to bootstrap the chicken-and-egg model by allowing people to practically just move their existing blogs to the LessWrong platform, either via RSS imports or by directly using their user-profile as a blog. My current sense is that in the larger rationality diaspora we have a really large amount of content, and so far almost everyone I've talked to seemed very open to having their content mirrored on LessWrong, which makes me optimistic about solving that aspect.
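For illustration, an RSS-based crosspost import could be as simple as parsing a feed's `item` entries into post records. This is just a sketch under assumed behavior, not the actual LW 2.0 import code; the sample feed and the field names (`title`, `url`, `body`) are made up:

```python
import xml.etree.ElementTree as ET

# A made-up feed standing in for an author's real blog.
SAMPLE_FEED = """
<rss version="2.0">
  <channel>
    <title>Example Rationality Blog</title>
    <item>
      <title>First Crosspost</title>
      <link>https://example.com/first-crosspost</link>
      <description>Body of the post.</description>
    </item>
  </channel>
</rss>
"""

def parse_feed(feed_xml):
    """Extract post records from an RSS 2.0 feed string."""
    root = ET.fromstring(feed_xml)
    return [
        {
            "title": item.findtext("title"),
            "url": item.findtext("link"),
            "body": item.findtext("description"),
        }
        for item in root.iter("item")
    ]

posts = parse_feed(SAMPLE_FEED)
print(posts[0]["title"])  # First Crosspost
```

A real importer would of course fetch the feed over HTTP, deduplicate against already-imported posts, and preserve a canonical link back to the original blog.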

Comment author: Viliam 19 September 2017 10:32:22PM 0 points [-]

The lack of content is related to the other issues. For example, it quite reduces my willingness to write a post for LW when I remember that Eugine can single-handedly dominate the whole discussion with his sockpuppets, if he decides to. And I imagine that if Yvain posted his political articles here, he wouldn't like the resulting debate.

Comment author: WhySpace 15 September 2017 08:29:07AM *  6 points [-]

I'm not really sure how shortform stuff could be implemented either, but I have a suggestion on how it can be used: jokes!

Seriously. If you look at Scott's writing, for example, one of the things which makes it so gripping is the liberal use of amusing phrasing, and mildly comedic exaggerations. Not the sort of thing that makes you actually laugh, but just the sort of thing that is mildly amusing. And, I believe he specifically recommended it in his blog post on writing advice. He didn't phrase his reasoning quite like this, but I think of it as little bits of positive reinforcement to keep your system 1 happy while your system 2 does the analytic thinking stuff to digest the piece.

Now, obviously this could go overboard, since memetics dictates that short, likeable things will get upvoted faster than long, thoughtful things, outcompeting them. But, I don't think we as a community are currently at risk of that, especially with the moderation techniques described in the OP.

And, I don't mean random normal "guy walks into a bar" jokes. I mean the sort of thing that you see in the comments on old LW posts, or on Weird Sun Twitter. Jokes about Trolley Problems and Dust Specks and Newcomb-like problems and negative Utilitarians. "Should Pascal accept a mugging at all, if there's even a tiny chance of another mugger with a better offer?" Or maybe "In the future, when we're all mind-uploads, instead of arguing about the simulation argument we'll worry about being mortals in base-level reality. Yes, we'd have lots of memories of altering the simulation, but puny biological brains are error-prone, and hallucinate things all the time."

I think a lot of the reason social media is so addictive is the random dopamine injections. People could go to more targeted websites for more of the same humor, but those get old quickly. The random mix of serious info intertwined with joke memes provides novelty and works well together. The ideal for a more intellectual community should probably be more like 90-99% serious stuff, with enough fun stuff mixed in to avoid akrasia kicking in and pulling us toward a more concentrated source.

The implementation implications would be to present short-form stuff between long-form stuff, to break things up and give readers a quick break.

Comment author: richard_reitz 15 September 2017 04:47:53AM *  12 points [-]

if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people navigate how much of the content of the page you are familiar with.

Idea: give sequence-writers the option to include quizzes because this (1) demonstrates a badgeholder actually understands what the badge indicates they understand (or, at least, are more likely to) and (2) leverages the testing effect.

I await the open beta eagerly.

Comment author: ciphergoth 15 September 2017 09:24:35PM 8 points [-]

Also I have already read them all more than once and don't plan to do so again just to get the badge :)

Comment author: Raemon 15 September 2017 06:47:51AM 2 points [-]

leverages the which?

In any case, I like the idea, although it may be in the backlog for awhile.

Comment author: richard_reitz 15 September 2017 12:33:16PM 2 points [-]

Testing effect.

(At this point, I should really know better than to trust myself to write anything at 1 in the morning.)

Comment author: Raemon 15 September 2017 06:48:42AM 9 points [-]

In any case, I like the idea, although it may be in the backlog for awhile.

Although, it occurs to me that the benefit of an open source codebase that actually is reasonable to learn is that anyone that wants something like this to happen can just make it happen.

Comment author: gbear605 15 September 2017 04:23:12AM 4 points [-]

I'd love to see achieved the goal of an active rationalist-hub and I think this might be a method that can lead to it.

Ironically, looking at the post you made on lesserwrong that combines various Facebook posts, I see Eliezer unknowingly demonstrating the exact issue: "because of that thing I wrote on FB somewhere". On one of his old LW posts, he would have linked to it. Instead, the explanation is missing for those who aren't up to date on his entire FB feed.

Thanks for the work that you've put into this.

Comment author: philh 15 September 2017 12:01:24PM 8 points [-]

(As it happens, that particular post ("why you absolutely need 4 layers of conversation in order to have real progress") was un-blackholed by Alyssa Vance: https://rationalconspiracy.com/2017/01/03/four-layers-of-intellectual-conversation/)

Comment author: Benito 15 September 2017 04:56:07AM 16 points [-]

We've actually talked a bit with Eliezer about importing his past and future facebook and tumblr essays to LW 2.0, and I think this is a plausible thing we'll do after launch. I think it will be good to have his essays be more linkable and searchable (and the people I've said this to tend to excitedly agree with me on this point).

(I'm Ben Pace, the other guy working full time on LW 2.0)

Comment author: ingres 15 September 2017 09:53:44PM *  11 points [-]

Please do this. This alone would be enough to get me to use and link LW 2.0, at least to read stuff on it.

UPDATE (Fri Sep 15 14:56:28 PDT 2017): I'll put my money where my mouth is. If the LW 2.0 team uploads at least 15 pieces of content authored by EY of a length at least one paragraph each from Facebook, I'll donate 20 dollars to the project.

Preferably in a way where I can individually link them, but just dumping them on a public web page would also be acceptable in strict terms of this pledge.

Comment author: ciphergoth 15 September 2017 03:53:27AM *  18 points [-]

Thank you all so much for doing this!

Eigenkarma should be rooted in the trust of a few accounts that are named in the LW configuration. If this seems unfair, then I strongly encourage you not to pursue fairness as a goal at all - I'm all in favour of a useful diversity of opinion, but I think Sybil attacks make fairness inherently synonymous with trivial vulnerability.
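Rooting eigenkarma in a few configured seed accounts amounts to something like personalized PageRank, where the teleport mass flows only to the seeds. The following is a minimal sketch of the idea, not a proposal for the actual implementation; the account names, damping factor, and vote graph are all hypothetical:

```python
def eigenkarma(votes, seeds, damping=0.85, iterations=50):
    """Personalized-PageRank-style trust, rooted in configured seed accounts.

    votes maps each voter to the list of users they upvoted. Teleport mass
    goes only to the seeds, so a Sybil ring that only upvotes itself, and
    that no trusted path reaches, ends up with zero weight.
    """
    users = set(votes) | {u for vs in votes.values() for u in vs} | set(seeds)
    score = {u: (1.0 / len(seeds) if u in seeds else 0.0) for u in users}
    for _ in range(iterations):
        nxt = {u: ((1 - damping) / len(seeds) if u in seeds else 0.0)
               for u in users}
        for voter, votees in votes.items():
            if votees:  # spread the voter's damped trust over their votes
                share = damping * score[voter] / len(votees)
                for votee in votees:
                    nxt[votee] += share
        score = nxt
    return score

# Hypothetical graph: "admin" is the lone seed named in the configuration.
karma = eigenkarma(
    votes={"admin": ["alice"], "alice": ["bob"],
           "sybil1": ["sybil2"], "sybil2": ["sybil1"]},
    seeds=["admin"],
)
print(karma["alice"] > karma["sybil1"])  # True
```

This sketch leaks mass at dangling nodes and skips normalization, which a production version would handle; the point is only that seeded teleportation is what makes the scheme Sybil-resistant, at the cost of the "fairness" worried about above.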

I am not sure whether votes on comments should be treated as votes on people. I think that some people might make good comments who would be bad moderators, while I'd vote up the weight of Carl Shulman's votes even if he never commented.

The feature map link seems to be absent.

Comment author: DragonGod 15 September 2017 03:42:29PM 2 points [-]

What are Sybil attacks?

Comment author: KaynanK 15 September 2017 04:51:25PM 4 points [-]
Comment author: Habryka 15 September 2017 04:25:56AM 2 points [-]

Feature roadmap link fixed!

Comment author: Waltus 26 September 2017 10:05:35PM 0 points [-]

I would favor the option to hide comments' scores while retaining their resultant organization (best/popular/controversial/etc). I have the sense that I'm biased toward comments with higher scores even before I've read them, which is counterproductive to my ability to evaluate arguments on their own merit.

Comment author: NancyLebovitz 20 September 2017 01:01:42PM 0 points [-]

LW2.0 doesn't seem to be live yet, but when it is, will I be able to use my 1.0 username and password?

Comment author: DragonGod 20 September 2017 08:01:25AM 0 points [-]

On StackExchange, upvotes and downvotes from accounts with less than 15 rep are recorded but don't count (presumably until the account gains more than 15 rep). LW may decide to set its bar lower (10 rep?) or higher (>= 20 rep?), but I think the core insight is very good and would be a significant improvement if applied to LW.
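The record-but-don't-count scheme can be sketched as recomputing the displayed score from stored votes, filtering by the caster's current reputation. This is a hedged illustration, not how StackExchange or LW actually implement it; the threshold value and account names are made up:

```python
MIN_REP_TO_VOTE = 15  # StackExchange-style bar; LW could tune this up or down

def tally(votes, reputation, threshold=MIN_REP_TO_VOTE):
    """Sum only the votes whose caster currently meets the rep threshold.

    Every vote stays recorded, so a vote cast below the threshold starts
    counting as soon as its caster's reputation catches up.
    """
    return sum(value for voter, value in votes
               if reputation.get(voter, 0) >= threshold)

# Hypothetical accounts: all three votes are stored, but only alice's counts.
votes = [("alice", +1), ("newbie", +1), ("troll", -1)]
reputation = {"alice": 120, "newbie": 4, "troll": 9}
print(tally(votes, reputation))  # 1

# Once newbie crosses the threshold, the stored vote begins to count.
reputation["newbie"] = 20
print(tally(votes, reputation))  # 2
```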

Comment author: username2 18 September 2017 05:10:47PM 0 points [-]

People sure like to talk about meta topics.