On Wednesday I had lunch with Raph Levien, and came away with a picture of how a website that fostered the highest quality discussion might work.

Principles:

  • It’s possible that the right thing is a quick fix to Less Wrong as it is; this is about exploring what could be done if we started anew.
  • If we decided to start anew, what the software should do is only one part of what would need to be decided; that’s the part I address here.
  • As Anna Salamon set out, the goal is to create a commons of knowledge, such that a great many people have read the same stuff. A system that tailored what you saw to your own preferences would have its own strengths but would work entirely against this goal.
  • I therefore think the right goal is to build a website whose content reflects the preferences of one person, or a small set of people. In what follows I refer to those people as the “root set”.
  • A commons needs a clear line between the content that’s in and the content that’s out. Much of the best discussion is on closed mailing lists; it will be easier to get the participation of time-limited contributors if there’s a clear line around the discussion we want them to have read, and that body of discussion is short.
  • However this alone excludes a lot of people who might have good stuff to add; it would be good to find a way to get the best of both worlds between a closed list and an open forum.
  • I want to structure discussion as a set of concentric circles. 
  • Discussion in the innermost circle forms part of the commons of knowledge all can be assumed to be familiar with; surrounding it are circles of discussion where the bar is progressively lower. With a slider, readers choose which circle they want to read.
  • Content from rings further out may be pulled inwards by the votes of trusted people.
  • Content never moves outwards except in the case of spam/abuse.
  • Users can create top-level content in further-out rings and allow the votes of other users to move it closer to the centre. Users are encouraged to post whatever they want in the outermost rings, to treat it as one would an open thread or similar; the best content will be voted inwards.
  • Trust in users flows through endorsements starting from the root set.

More specifics on what that vision might look like:

  • The site gives all content (posts, top-level comments, and responses) a star rating from 0 to 5 where 0 means “spam/abuse/no-one should see”.
  • The rating that content can receive is capped by the rating of the parent; the site will never rate a response higher than its parent, or a top-level comment higher than the post it replies to.
  • Users control a “slider” a la Slashdot which controls the level of content that they see: set to 4, they see only 4 and 5-star content.
  • By default, content from untrusted users gets two stars; this leaves a star for “unusually bad” (eg rude) and one for “actual spam or other abuse”.
  • Content ratings above 2 never go down, except to 0; they only go up. Thus, the content in these circles can grow but not shrink, to create a stable commons.
  • Since a parent’s rating acts as a cap on the highest rating a child can get, when a parent’s rating goes up, this can cause a child’s rating to go up too.
  • Users rate content on this 0-5 scale, including their own content; the site aggregates these votes to generate content ratings.
  • Users also rate other users on the same scale, for how much they are trusted to rate content.
  • There is a small set of “root” users whose user ratings are wholly trusted. Trust flows from these users using some attack resistant trust metric.
  • Trust in a particular user can always go down as well as up.
  • Only votes from the most trusted users will suffice to bestow the highest ratings on content.
  • The site may show more trusted users with high sliders lower-rated content specifically to ask them to vote on it, for instance if a comment is receiving high ratings from users who are one level below them in the trust ranking. This content will be displayed in a distinctive way to make this purpose clear.
  • Votes from untrusted users never directly affect content ratings, only what is shown to more trusted users to ask for a rating. Downvoting sprees from untrusted users will thus be annoying but ineffective.
  • The site may also suggest to more trusted users that they uprate or downrate particular users.
  • The exact algorithms by which the site rates content, hands trust to users, or asks users for moderation would probably want plenty of tweaking. Machine learning could help here. However, for an MVP something pretty simple would likely get the site off the ground easily.
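To make these mechanics concrete, here’s a minimal sketch of the rating rules in Python: the parent cap, the default two stars, the upward ratchet above two, and the slider. The class and function names, the trust threshold for zeroing out spam, and the exact way votes combine are my assumptions rather than settled design; the attack-resistant trust metric that would produce the voter trust levels is left out entirely.

```python
from dataclasses import dataclass
from typing import Optional

SPAM = 0                 # 0 means "spam/abuse/no-one should see"
DEFAULT_RATING = 2       # default rating for content from untrusted users

@dataclass
class Content:
    parent: Optional["Content"] = None
    rating: int = DEFAULT_RATING

    def cap(self) -> int:
        """A child's rating is capped by its parent's rating."""
        return self.parent.rating if self.parent else 5

    def apply_vote(self, voter_trust: int, stars: int) -> None:
        """voter_trust and stars are both on the 0-5 scale."""
        if voter_trust == 0:
            return                         # untrusted votes never directly affect ratings
        if stars == SPAM:
            if voter_trust >= 4:           # assumption: only very trusted users can zero content out
                self.rating = SPAM
            return
        proposed = min(stars, voter_trust, self.cap())
        if proposed > self.rating:
            self.rating = proposed         # ratings ratchet upwards
        elif proposed < self.rating <= 2:
            self.rating = proposed         # below the ratchet, "unusually bad" can still drop to 1

def visible(content: Content, slider: int) -> bool:
    """The slider a la Slashdot: set to 4, you see only 4- and 5-star content."""
    return content.rating >= slider
```

When a parent’s rating rises, its children’s caps rise with it, so a real implementation would re-apply stored votes to the children at that point; that bookkeeping is omitted here.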


(I prefer to ignore the specific details of the implementation and only discuss the idea in general, because debating the details feels like "too soon; I need to be sure about the higher level first".)

There is a profound difference between building "commons of knowledge" and a "discussion website". In some sense, they are the opposite of each other. Discussion websites, by their nature, attract people who have a preference for discussion. Curiosity seeks to annihilate itself, but social interaction seeks to grow and persist. And if people who feel that the debate is no longer productive start leaving, well, those who don't mind will become the new norm.

But how do you build the commons of knowledge without having the discussion first? How would people contribute? How would they point out the mistakes? How would they conclude whether the supposed "mistakes" are actual mistakes or not? How would they coordinate in time, so that the published piece of knowledge is reviewed now, and the author can read the responses and update the text now? (as opposed to e.g. someone randomly pointing out an error ten years later, and the original author no longer being active)

Seems to me that successful knowledge-oriented websites, such as Wikipedia and Stack Exchange, solve this problem by separating the knowledge from the discussion. There are different ways to do it.

On Wikipedia, the article presents the "final answer", and the whole discussion happens on the Talk page. (There is typically one Talk page per article, but if the discussion grows unusually long, the older parts are gradually moved into separate read-only Archive pages.) If someone believes they have found a mistake in the article, they edit the article, and optionally explain the reason on the Talk page. If someone else believes they were wrong, they revert the changes in the article, and optionally explain their reasons on the Talk page. This sometimes leads to an "edit war", in which case various remedies are available; for example, the page can be "locked" so that ordinary users can't edit it anymore and all they can do is argue their case on the Talk page; only privileged users remain able to edit the article.

On Stack Exchange, each user can provide their own answer to the question, and all answers are displayed below the question; the ones with the most votes are displayed first. There is also an option to write a short comment below the question or an answer, and to upvote comments. (The comment structure is linear, not hierarchical.) To prevent the growth of the discussion, only a limited amount of text is displayed immediately; displaying the rest requires further user action. For example, if there are more than N answers, only the N most upvoted answers are displayed below the question; the remaining ones are moved to page 2 and beyond. Similarly, if there are more than M comments below a question or an answer, only the M most upvoted comments are displayed, unless the user explicitly clicks "show more comments". Together this means that regardless of the number of answers and comments, a user clicking on the question still receives a relatively short page containing the most relevant things. Furthermore, debate that is not strictly on-topic is discouraged, the content of such debate is frequently removed, and users are advised to debate their opinions in chat, outside the question-and-answer area.
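For concreteness, a rough sketch of that "bounded page" idea; the constants, field names, and data layout are placeholders, not Stack Exchange's actual implementation.

```python
def top_by_votes(items, limit):
    """Keep only the most upvoted items; the rest sit behind "show more" or on page 2."""
    return sorted(items, key=lambda item: item["votes"], reverse=True)[:limit]

def render_question_page(question, answers, comments_by_parent, n_answers=10, m_comments=5):
    """However much was written, the reader gets a relatively short page of the most relevant things."""
    return {
        "question": question,
        "question_comments": top_by_votes(comments_by_parent.get(question["id"], []), m_comments),
        "answers": [
            {"answer": a,
             "comments": top_by_votes(comments_by_parent.get(a["id"], []), m_comments)}
            for a in top_by_votes(answers, n_answers)
        ],
    }
```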

We can't copy either model directly. Less Wrong is neither an encyclopedia nor a Q&A website. (We discuss things that sometimes don't have official names yet; and we often provide insights and opinions without being asked first.) We are more like a blog, or maybe a news site. But blogs are personal (which worked well while Eliezer was writing the original Sequences) or for a predefined group of authors; they don't scale well. And the purpose of news is typically to maximize paperclips... ahem, pageviews... which is best achieved by techniques contrary to the goal of developing and spreading rationality. (I suspect that the next step in the online news business will be websites with fake news generated automatically using machine learning and A/B testing. Using thousands of different domain names, to make it impossible to write a domain-name filter; generating enough content to keep all domains active and seemingly different. Also, automatically generated comments below the articles. And automatically generated replies to humans who get fooled by the system.)

Seems to me these would be reasonable guidelines for a good solution:

  • Keep the "final product" and the "intermediate products" unambiguously separated, in a way that clearly communicates to the reader's System 1. Preferably, each of them on a different domain name, with different design. Separate "LW, the sacred tome of rationalist knowledge" from "LW, where highly intelligent people procrastinate".

  • The change from the "intermediate" to the "final" state must always be done by a conscious decision of a group of trusted people. (It is not necessary to make all of them approve each change; more like: any two or three of them may promote the content, then any one of them can veto the change, and if that happens, all of them will debate behind closed doors until they reach a consensus; see the sketch after this list.) No amount of votes is enough to automatically promote an article; and no amount of karma is enough to automatically promote a person to the trusted circle. On the other hand, members of the trusted circle are publicly known. And within the trusted circle, they can review each other's actions and votes.

  • Changing the article from the "intermediate" to the "final" state may include asking the original author to rewrite it (or to consent to someone else's rewrite). Thus we would avoid the dilemma of "some parts of this article are low-quality and lengthy, but here are a few really valuable points, so maybe the cost-benefit analysis suggests we promote it anyway". Nope, just rewrite it to keep the good parts and remove the bad parts; it will take you an afternoon, and the updated version will be displayed to everyone for years.

  • Promoting the article to the "final" state does not mean promoting the comments below it to the "final" state. First, no comments should get promoted automatically; and second, most comments should not be promoted at all. So the "final" website should just display the article without the comments (but maybe with a link to the original discussion). If some comments are considered worthy enough to be canonized along with the article, they could become part of the article itself, e.g. by fixing the mistake or adding a footnote. Paraphrasing Dijkstra, we should not regard LW comments as "lines produced" but as "lines spent".

  • New visitors to the website should be directed to the "final" version first, and to the "intermediate" version only as a secondary option. So unless they have a specific plan, they will read some parts of the "final" version before they start participating in the "intermediate" section. In other words, having people debate on LW without reading the Sequences first is a result of bad web design.
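A minimal sketch of the promotion rule mentioned above (any two or three trusted people promote, any single veto blocks). The two-approval threshold is taken from the text, but the 48-hour veto window, the function name, and the state labels are assumptions; the point is only that ordinary votes and karma never promote anything automatically.

```python
from datetime import datetime, timedelta

VETO_WINDOW = timedelta(hours=48)   # assumed length of the objection period; not specified above

def promotion_state(approvals: set, vetoes: set, trusted: set,
                    proposed_at: datetime, now: datetime,
                    required_approvals: int = 2) -> str:
    """Decide whether an article moves from the "intermediate" to the "final" state."""
    if not approvals <= trusted or not vetoes <= trusted:
        raise ValueError("only the publicly known trusted circle can approve or veto")
    if vetoes:
        return "needs-consensus"     # a single veto sends it behind closed doors for discussion
    if len(approvals) >= required_approvals and now - proposed_at >= VETO_WINDOW:
        return "final"               # enough trusted approvals and no veto: promote
    return "intermediate"            # votes and karma never promote anything by themselves
```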

EDIT:

In some sense, having LW separated into forum and wiki is a partial step in this direction, but it differs from this proposal in many important ways:

The highest-value content stays in the forum section, along with the low-value content. (It is merely linked from the wiki.) Thus, at least to my System 1, the good articles and the bad articles are more closely connected to each other than either is to the wiki. Also, all comments, whether good or bad, remain attached to the good articles.

When new people visit the page for the first time, the first thing they see is... a description that seems like a Wikipedia article stub; four recent articles; and four non-recent articles. Not good. The title page should contain at least 95% of nicely designed hand-picked best content, and only a small link towards the discussion somewhere in the corner.

On the other hand, a technically simple implementation of my suggestion would be to create another website -- which could technically be a wiki, but it absolutely shouldn't look like one; all the wiki-related buttons should be hidden from the average observer, and only available to the "inner circle" -- and copy the hand-picked best articles there. For the average visitor, there would be no way to post anything; only to click the links and read the contents. Occasionally, a link would point them towards a relevant discussion on LW, but to their System 1 that would obviously be a link to an external website, with a different design. (The "inner circle" would have a Slack debate somewhere, where they would talk about which articles deserve to be copied. All contributors to LW would have to consent in advance to the possibility of having their article copied to the other website -- and removed from LW, to avoid having duplicates.)

I think your ideas are very compatible with my existing proposal!

I agree about the "too soon" aspect, but this basically came to me fully formed, and it wasn't clear to me that teasing out a part of it to present instead of presenting it all was the right thing. Maybe I should have held off on proposing solutions.

Well, I am guilty of proposing a solution too soon, too. But it's interesting to see (ignoring the minor details) where we agree, and where our proposals differ. This is a quick comparison:

Common points:

  • software change is necessary;
  • creating a personalized "bubble" is not the direction we want to go, because we want to create commons;
  • but a democracy where all randos from the internet have an equal vote is also not the way (even ignoring the issue of sockpuppets);
  • so we need a group of people with trusted opinions;
  • a binary decision of what content is "really in" could help people who only want to spend a short time online (and we should optimize for these people, or at least not actively against them);
  • a lot of good new content is likely to appear in the "experimental" part (that's where all the new talents are), from where it should be promoted to the "in" part;
  • this promotion of content should be done only by the people who are already "in" (but the opinion of others can be used as a hint).

Differences:

  • scalable N-tiered system (reflecting familiarity and compatibility with the existing commons); or
    • just a "user tier" and "admin tier" with quite different rules;
  • content filtered by a slider ("in" or "in + somewhat less in" etc.); or
    • the "in" content and the "experimental / debate" content separated from each other as much as possible;
  • a voting system where votes from higher tiers can override the votes from lower tiers; or
    • probably something pretty similar, except that the "promote or not" decision would be "two or three admins say yes, and either no admin says no, or the dictator overrides".

The summary of our differences seems to be that you want to generalize to N tiers, which all use the same algorithm; while I assume two groups using dramatically different rules. (Most of the other differences are just further consequences of this one.)

The reason for my side is that I assume that the "trusted group" will be relatively small and busy, for various reasons. (Being instrumentally rational correlates with "having a lot of high-priority work other than voting in LW debates". Some of the trusted users will also be coders who will be busy fixing and contributing to the LW code. And they will have to solve other problems that appear.) I imagine something like 20 people, of whom only about 5 will actually be active in any given week, and 3 of those will be solving some technical or other problem. In other words, a group too small to need its mutual communication solved by an algorithm. (And in case of admin conflict, we have the dictator anyway.)

Hi Villiam, your idea sounds like an academic community around rationality. You can think of the discussions as like the events at a conference or workshop where half-baked ideas are thrown about. And you can think of the "final" tome of knowledge as the proceedings of the journal: when an idea has been workshopped enough, it is revised and then published in the journal as Currently Definitive Knowledge.

This framing suggests having a rotating board of editors and a formal peer review system as is common in academic journals.

Seems like a "convergent evolution" of ideas. Many people faced similar problems, and devised similar solutions.

This is a really great summary. Maybe we should Skype or something to drill down further on our disagreement? Maybe when I'm in London, and so closer to you in timezone?

Generally yes; details in PM.

This is a really really good comment. Check out https://arbital.com and let me know what you think. :)

Interesting proposal. Can you say something about who would be able to see the individual ratings of comments and users? What do you see as the pros and cons of this proposal vs other recent ones?

the site will never rate a response higher than its parent, or a top-level comment higher than the post it replies to

What's the reason for this? It seems to lead to some unfortunate incentives for commenting. Suppose someone posts a new and exciting idea and you find a subtle but fatal flaw with it. If you comment right away then people will realize the flaw and not rate the post as high as they otherwise would, which would limit the rating of your comment, so the system encourages you to wait until the post is rated higher before commenting.

More generally this feature seems to discourage people from commenting early, before they're sure that the post/comment they're responding to will be rated highly.

Content ratings above 2 never go down, except to 0; they only go up. Thus, the content in these circles can grow but not shrink, to create a stable commons.

This seems to create an opening for attack. If an attacker gets a high enough rating to unilaterally push content from 2 to 3 stars, they can sprinkle a lot of spam throughout the site, rate them to 3 stars, and all of that spam would have to be individually marked as such even if the attacker's rating is subsequently reduced.

Trust flows from these users using some attack resistant trust metric.

Can you point to an intro to attack resistant trust metrics? I think a lot of people are not familiar with them.

Downvoting sprees from untrusted users will thus be annoying but ineffective.

Why would it be annoying? I'm not sure I understand what would happen with such a downvoting spree.

Can you say something about who would be able to see the individual ratings of comments and users?

Only people who police spam/abuse; I imagine they'd have full DB access anyway.

What do you see as the pros and cons of this proposal vs other recent ones?

An excellent question that deserves a longer answer, but in brief: I think it's more directly targeted towards the goal of creating a quality commons.

What's the reason for this?

Because I don't know how else to use the attention of readers who've pushed the slider high. Show them both the comment and the reply? That may not make good use of their attention. Show them the reply without the comment? That doesn't really make sense.

Note that your karma is not simply the sum or average of the scores on your posts; it depends more on how people rate you than on how they rate your posts.

This seems to create an opening for attack.

Again, the abuse team really need full DB access or something very like it to do their jobs.

Can you point to an intro to attack resistant trust metrics

The only adequate introduction I know of is Raph Levien's PhD draft which I encourage everyone thinking about this problem to read.

Why would it be annoying?

When an untrusted user downvotes, a trusted user or two will end up being shown that content and asked to vote on it; it thus could waste the time of trusted users.

Thanks for the clarifications.

Only people who police spam/abuse [would be able to see the individual ratings of comments and users]

That would make it hard to determine which users I should rate highly. Is the idea that the system would find users who rate similarly to me and recommend them to me, and I would mostly follow those recommendations?

Because I don't know how else to use the attention of readers who've pushed the slider high.

Slashdot shows all the comments in collapsed mode and auto-expands the comments that are rated higher than the filter setting. We can do that, or have a preference setting that lets the user choose whether to do that or to just hide comments that reply to something rated lower than their filter setting.

You should rate highly people whose judgment you would trust when it differed from yours. We can use machine learning to find people who generate similar ratings to you, if the need arises.
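For what it's worth, one illustration of what "find people who generate similar ratings" could mean is a plain similarity score over co-rated items; this is only a toy sketch of that idea, not a commitment to any particular algorithm, and it doesn't address the gaming concern discussed below.

```python
import math

def rating_similarity(mine: dict, theirs: dict) -> float:
    """Cosine similarity between two users' 0-5 ratings on the items both have rated."""
    shared = mine.keys() & theirs.keys()
    if not shared:
        return 0.0
    dot = sum(mine[i] * theirs[i] for i in shared)
    norms = (math.sqrt(sum(mine[i] ** 2 for i in shared))
             * math.sqrt(sum(theirs[i] ** 2 for i in shared)))
    return dot / norms if norms else 0.0
```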

I thought about the Slashdot thing, but I don't think it makes the best use of people's time. I'd like people reading only the innermost circle to be able to basically ignore the existence of the other circles. I don't even want a prompt that says "7 hidden comments".

You should rate highly people whose judgment you would trust when it differed from yours. We can use machine learning to find people who generate similar ratings to you, if the need arises.

It would be much harder to decide whose judgment I would trust, if I couldn't see how they rated in the past. I'd have to do it only based on their general reputation and their past posts/comments, but what if some people write good comments but don't rate the way I would prefer (for example they often downvote others who disagree with them)? The system would also essentially ignore ratings from lurkers, which seems wasteful.

If we use ML to find people who generate similar ratings, that seems to generate bad incentives. When your user rating is low, you're incentivized to vote the same way as others, so that ML would pick you to recommend to people, then when your rating is high, you'd switch to voting based on your own opinions, which might be totally untrustworthy, but people who already rated you highly wouldn't be able to tell that they should no longer trust you.

I thought about the Slashdot thing, but I don't think it makes the best use of people's time.

Aside from the issue of weird incentives I talked about earlier, I would personally prefer to have the option of viewing highly rated comments independent of parent ratings, since I've found those to be valuable to me in other systems (e.g., Slashdot and the current LW). Do you have an argument why that shouldn't be allowed as a user setting?

It's hard to be attack resistant and make good use of ratings from lurkers.

The issues you mention with ML are also issues with deciding who to trust based on how they vote, aren't they?

It's hard to make a strong argument for "shouldn't be allowed as a user setting". There's an argument for documenting the API so people can write their own clients and do whatever they like. But you have to design the site around the defaults. Because of attention conservation, I think this should be the default, and that people should know that it's the default when they comment.

The issues you mention with ML are also issues with deciding who to trust based on how they vote, aren't they?

If everyone can see everyone else's votes, then when someone who was previously highly rated starts voting in an untrustworthy manner, that would be detectable and the person can at least be down-rated by others who are paying attention. On the other hand, if we had a pure ML system (without any manual trust delegation) then when someone starts deviating from their previous voting patterns the ML algorithm can try to detect that and start discounting their votes. The problem I pointed out seems especially bad in a system where people can't see others' votes and depend on ML recommendations to pick who to rate highly, because then neither the humans nor the ML can respond to someone changing their pattern of votes after getting a high rating.

This is seeking a technological solution to a social problem.

The proposed technological solution is interesting, complicated, and unlikely to ever be implemented. It's not hard to see why the sorts of people who read LW want to talk about interesting and complicated things, especially interesting and complicated things that don't require much boring stuff like research -- but I highly doubt that anyone is going to sit down and do the work of implementing it or anything like it, and in the event that anyone ever does, it'll likely take so long that many of the people who'd otherwise use LW or its replacement will lose interest in the interim, and it'll likely be so confusing that many more people are turned off by the interface and never bother to participate.

If we want interesting, complicated questions that don't require a whole lot of research, here's one: what exactly is LW trying to do? Once this question has been answered, we can go out and research similar groups, find out which ones accomplished their goals (or goals similar to ours, etc.) and which ones didn't, and try to determine the factors that separate successful groups from failed ones.

If we want uninteresting, uncomplicated questions that are likely to help us achieve our goals, here's one: do we have any managers in the audience? People with successful business experience, maybe in change management or something of that nature? I'm nowhere near old or experienced enough to nominate myself, or even to name the most relevant subdomains of management with any confidence, but I've still seen a lot of projects that failed due to nonmanagers' false assumption that management is trivial, and a few projects in the exact same domain that succeeded due to bringing in one single competent manager.

As Anna Salamon set out, the goal is to create a commons of knowledge, such that a great many people have read the same stuff.

There's already a lot of stuff from the post-LW fragmentation that a great many people have read. How about identifying and compiling that? And since many of these things will be spread out across Tumblr/Twitter/IRC/etc. exchanges rather than written up in one single post, we could seed the LW revival with explanations of them. This would also give us something more interesting and worthwhile to talk about than what sort of technological solution we'd like to see for the social problem that LW can't find anything more interesting and worthwhile to talk about than what sort of technological solution we'd like to see for the social problem that LW can't find anything interesting or worthwhile enough to get people posting here.

This is seeking a technological solution to a social problem.

It is still strange to me that people say this as if it were a criticism.

People have been building communities with canons since the compilation of the Torah.

LW, running on the same Reddit fork it's on today, used to be a functional community with a canon. Then... well, then what? Interesting content moved offsite, probably because 1) people get less nervous about posting to Tumblr or Twitter than posting an article to LW 2) LW has content restrictions that elsewhere doesn't. So people stopped paying attention to the site, so the community fragmented, the barrier to entry was lowered, and now the public face of rationalists is Weird Sun Twitter and Russian MRAs from 4chan who spend their days telling people to kill themselves on Tumblr. Oops!

(And SSC, which is a more active community than LW despite running on even worse software.)

(This is my first post so please kindly point me to my misconceptions if there are any)

This is seeking a technological solution to a social problem.

It is still strange to me that people say this as if it were a criticism.

It is not that strange when dealing with technological solutions to problems that we haven't yet understood. You define your goal as creating a "commons of knowledge". Consider a few points:

[1] There seems to be a confusion between information and knowledge. I know that the LW community is attempting to provide a rational methodology towards knowledge, but I have not seen this being done in any way that is substantially different. It is discussion as always, with more commitment towards rationality (which is great!).

[2] We do not have an efficient way of representing arguments. Argument mapping is an attempt in that direction. I personally tend to use a numbering convention inspired by Wittgenstein (I am using it here as an example). The bottom line is that discussions tend to be quite unordered, and opinions tend to be conflated with truths (see [1]).

[3] Given [1] and [2], as an outsider I do not understand what the root group represents. Are these the people who are more rational? Who has decided that?

So maybe that is what Plethora meant. I am myself really interested in this problem and have been thinking about it for some time. My recommendation would be to focus first on smaller issues, such as how to represent an argument in a way that can extract a truth rating. But even that is too ambitious at the moment. How about a technological solution for representing arguments with clarity so that both sides:

  • can see what is being said in clearly labeled propositions.
  • can identify errors in logic and mark them down.
  • can weed out opinions from experimentally confirmed scientific facts.
  • can link to sources and have a way to recursively examine their 'truth rating' down to the most primary source.

These are just a few indicative challenges. There are also issues with methods for source verification, exemplified by the ongoing scandals with data forging in psychology and neuroscience, and the list goes on...
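To make that wish list slightly more tangible, here is one very rough guess at what "clearly labeled propositions" with a recursive truth rating could look like as a data structure; the field names, the kinds, and the naive averaging rule are inventions for illustration, not an established argument-mapping format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Proposition:
    label: str                          # e.g. "2.1.3", a Wittgenstein-style numbering
    text: str
    kind: str                           # "opinion" | "claim" | "experimentally confirmed fact"
    sources: List["Proposition"] = field(default_factory=list)
    own_rating: Optional[float] = None  # assigned directly only to primary sources

    def truth_rating(self) -> Optional[float]:
        """Recursively examine sources down to the most primary ones."""
        if not self.sources:
            return self.own_rating
        ratings = [r for r in (s.truth_rating() for s in self.sources) if r is not None]
        return sum(ratings) / len(ratings) if ratings else None
```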

I would like to vote up this recommendation:

How about a technological solution for representing arguments with clarity so that both sides:

  • can see what is being said in clearly labeled propositions.
  • can identify errors in logic and mark them down.
  • can weed out opinions from experimentally confirmed scientific facts.
  • can link to sources and have a way to recursively examine their 'truth rating' down to the most primary source.

This is an un-explored area, and seems to me like it would have a higher ROI than a deep dive into variations on voting/rating/reputation systems.

Interesting proposal.

I would suggest one modification: a "probation" period for content, changing the rule "Content ratings above 2 never go down, except to 0; they only go up." to "Once content has stayed for long enough (two days? one week?) at level 2 or above, it can never go down, only up", to make the system less vulnerable to the order in which content gets rated.
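A minimal sketch of that variant, with the timestamp bookkeeping and the two-day value as assumptions (the exact length is deliberately left open above):

```python
from datetime import datetime, timedelta
from typing import Optional

PROBATION = timedelta(days=2)   # "two days? one week?" -- placeholder value

def ratchet_locked(reached_level_two_at: Optional[datetime], now: datetime) -> bool:
    """Under the modified rule, content can no longer be rated down (except to 0 for
    spam/abuse) once it has sat at level 2 or above for the whole probation period."""
    return reached_level_two_at is not None and now - reached_level_two_at >= PROBATION
```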

That makes sense. I'd like people to know when what they're seeing is out of probation, so I'd rather say that even if you have set the slider to 4, you might still see some 3-rated comments that are expected to go to 4 later, and they'll be marked as such, but that's just a different way of saying the same thing.

I don't really understand the reasons behind a lot of the proposed site mechanics, but I've been toying around with an idea similar to your slider, but for a somewhat different purpose.

Consider this paradox:

  1. As far as I can tell, humor and social interaction are crucial to keeping a site fun and alive. People have to be free to say whatever is on their mind, without worrying too much about social repercussions. They have to feel safe and be able to talk freely.

  2. This is, to some extent, at odds with keeping quality high. Having extremely high standards is one of the things that makes LW valuable, and gives it such a high signal-to-noise ratio.

So, how do we cope with this dichotomy? One way is to allow users to either submit a comment/post to the outer circles, or to an inner one. I think this is part of what we were going for with the Discussion/Main dichotomy, but no one posted to Main, so people don't even check it anymore. But, because of our quality standards for Discussion, people also hadn't felt comfortable posting there, until recently when things have started picking up with a lot of good interesting articles. So, most of the actual discussion got pushed to the weekly open threads, or off the site entirely.

One way around this would be to have 2 "circles" as you call them. Users tag their own comments and submissions as either "cannon" or "non-cannon", based on epistemic status, whether they've thought about it for at least 5 min, whether it's informative or just social, whether they've read the Sequences yet or are a newbie, etc. You could, of course, add more circles for more granularity, but 2 is the minimum.

Either way, it's extremely important that the user's self-rating is visible, alongside the site's rating, so that people aren't socially punished for mediocre or low quality content if they made no claim to quality in the first place. This allows them to just toss ideas out there without having totally refined potential diamonds in the rough.

An interesting thing you could do with this, to discourage overconfidence and encourage the meek, would be to show the user their calibration curve. That is, if they routinely rank their own comments as outer circle quality, but others tend to vote them up to inner quality status, the system will visually show a corrected estimate of quality when they slide the bar on their own comment.

Maybe even autocorrect it, so that if someone tries to rate a comment with 1 star, but their average 1 star comment is voted to 3 stars, then the system will start it at 3 stars instead. Probably best to let people rate them themselves, though, since the social pressure of having to live up to the 3 star quality might cause undue stress, and lead to less engagement.
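A tiny sketch of what that calibration correction might look like; the per-bucket averaging and the data layout are assumptions for illustration only.

```python
from collections import defaultdict
from statistics import mean
from typing import Dict, List, Tuple

def calibration_map(history: List[Tuple[int, int]]) -> Dict[int, float]:
    """history holds (self_rating, eventual_community_rating) pairs for a user's past content."""
    buckets: Dict[int, List[int]] = defaultdict(list)
    for self_rating, final_rating in history:
        buckets[self_rating].append(final_rating)
    return {s: mean(finals) for s, finals in buckets.items()}

def corrected_estimate(self_rating: int, calibration: Dict[int, float]) -> float:
    """E.g. a user whose 1-star self-ratings usually end up at 3 stars sees (or starts at) 3."""
    return calibration.get(self_rating, float(self_rating))
```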

Users tag their own comments and submissions as either "cannon" or "non-cannon", based on epistemic status

I think it's a good idea to have a tag dictionary that allows (or maybe even forces) posters to tag their posts with things like "shower thought", "rant", "wild guess", "exploratory digging", "this is 100% true, I promise", etc.

It would be awesome to convert these tags to a cannon scheme where "did I post this? I must have been drunk" corresponds to a wooden cannon, a decent post would be a bronze 42-pounder, and an instant classic would get a Big Bertha symbol. Accordingly the users themselves could be classified by the Royal Navy's rating scheme. Pipsqueaks would be unrated vessels and then we'd go up all the way to the 1st rate ships of the line with over a hundred guns on board.

I also like the idea of lots of tags on content, both from submitters and from others. Who tagged what with what is public, not part of the ratings system, just a way to comment on things without commenting. Like Facebook's reaction emoji, except not mutually exclusive.

Thinking about it, I'd rather not make the self-rating visible. I'd rather encourage everyone to assume that the self-rating was always 2, and encourage that by non-technical means.

Making the self-rating visible for the purpose you state has real value. Will think about that.

BTW it's "canon" not "cannon" - cheers!

BTW it's "canon" not "cannon" - cheers!

Thanks for the correction. I always worry that I'll make similar mistakes in more formal writing.


I agree with Viliam that there is a big difference between trying to be a narrow-domain Wikipedia ("commons of knowledge") and trying to be a discussion site.

However I have another concern: incentives. From a regular-user point of view this looks like a system in which a cabal of superusers runs the show and the point is to collect a set of sacred texts which the newcomers are expected to study diligently if they want to hang around. So, why would I want to contribute? I might come and take a look at these sacred texts, but playing in this sandbox is a different matter. Sure, the high priesthood will labour on the commentaries and the commentaries on the commentaries, but why would I want to spend time here and write useful things?

If you don't want a cabal of superusers running the show, you won't like anything I propose, I think :) But lots of people comment on SSC, or in other forums where one person is basically in charge and will delete what they don't like. If adding content to this site turns out to be a good way to get smart people to comment interestingly on your content, that will be a strong incentive.

But lots of people comment on SSC, or in other forums where one person is basically in charge and will delete what they don't like.

Sure, but SSC is very clearly a discussion site and is not in the business of collecting a canonical set of texts. Moreover, there is no karma system and Scott's dictatorship is generally hands-off except for when he's enforcing politeness or wants to prevent a particular shift-by-evaporation (the reign of terror against NRx).

My point is rather that your proposal doesn't consider incentives seriously enough. It basically says: we have users who make posts and comments, here's how we should organize them. But there is a deeper problem: how do you get smart users who want to make posts and comments in the first place? In fact, the current failure mode of LW looks exactly like the guardian of the canon: there are the Sacred Sequences but... people just don't seem to be terribly motivated to coalesce around them any more.

I dunno man. Like, the statement that a lot of the highest quality discussion takes place on closed mailing lists is suspect to me. I've been on a few closed mailing lists, and they tend to be wastelands, because all the talking is taking place on reddit.

The most convenient/casual platform 'wins', in terms of getting most user discussion.

This is kind of why I want to achieve a "best of both worlds" effect - this creates something like a closed discussion group inside a convenient/casual Reddit, and good discussion can be pulled from the latter into the former.

I feel like you are kind of solving the wrong problem? Like, just make a lesswrong slack, put a link to it in a discussion post and at the top of the main page, and see if it becomes the place where all the talk happens.

I should like to see a monthly "magazine" of the most interesting posts and comments, and a compilation of the science links and topics under subheadings like "life extension", "cancer and health", "SETI and cosmology", "sim and modeling", etc.

You would then be able to graze articles and hit the highlights of your favorite science, and be able to send a link to friends you think might be interested in some included topic, and they would get some exposure to related info too.

Top articles could be published here in discussion to hammer out any problems, then finished articles would be consolidated into the "magazine" and loaded to the wiki, and/or front page, if relevant.

How do you select (or deselect) the root set?

The site has to have a clear owner, and they decide on the root set. Technically, it's part of the site configuration, and you need admin access to the site to configure it.


What do you mean by 'content never moves outward'?

The same as "Content ratings above 2 never go down, except to 0": once content has been promoted to level 3 (or 4 or 5) once, it'll never go lower than that.

Yes, exactly. I don't think I've done as good a job of being clear as I'd like, so I'm glad you were able to parse this out!
