On Wednesday I had lunch with Raph Levien, and came away with a picture of how a website that fostered the highest quality discussion might work.
Principles:
- It’s possible that the right thing is a quick fix to Less Wrong as it is; this is about exploring what could be done if we started anew.
- If we decided to start anew, what the software should do is only one part of what would need to be decided; that’s the part I address here.
- As Anna Salamon set out, the goal is to create a commons of knowledge, such that a great many people have read the same stuff. A system that tailored what you saw to your own preferences would have its own strengths but would work entirely against this goal.
- I therefore think the right goal is to build a website whose content reflects the preferences of one person, or a small set of people. In what follows I refer to those people as the “root set”.
- A commons needs a clear line between the content that’s in and the content that’s out. Much of the best discussion happens on closed mailing lists; it will be easier to win the participation of time-limited contributors if the discussion we expect them to have read is clearly delimited, and short.
- However this alone excludes a lot of people who might have good stuff to add; it would be good to find a way to get the best of both worlds between a closed list and an open forum.
- I want to structure discussion as a set of concentric circles.
- Discussion in the innermost circle forms part of the commons of knowledge all can be assumed to be familiar with; surrounding it are circles of discussion where the bar is progressively lower. With a slider, readers choose which circle they want to read.
- Content from rings further out may be pulled inwards by the votes of trusted people.
- Content never moves outwards except in the case of spam/abuse.
- Users can create top-level content in further-out rings and allow the votes of other users to move it closer to the centre. Users are encouraged to post whatever they want in the outermost rings, to treat it as one would an open thread or similar; the best content will be voted inwards.
- Trust in users flows through endorsements starting from the root set.
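To make the trust-flow idea concrete, here is a deliberately simplified sketch in Python. It propagates trust outward from the root set by breadth-first search with a decay factor; the function name, the `decay` parameter, and the propagation rule are all illustrative assumptions, not part of the proposal. A real deployment would use a genuinely attack-resistant metric, such as the network-flow approach in Raph Levien's work, rather than this naive version, which an attacker with one high-trust endorsement could still game.

```python
from collections import deque

def propagate_trust(endorsements, root_set, decay=0.5):
    """Breadth-first trust propagation from the root set.

    `endorsements` maps each user to the users they endorse.
    Trust starts at 1.0 in the root set and attenuates by `decay`
    at each hop; users unreachable from the root set get no trust,
    so an isolated clique of sockpuppets stays untrusted.
    """
    trust = {user: 1.0 for user in root_set}
    queue = deque(root_set)
    while queue:
        user = queue.popleft()
        for endorsed in endorsements.get(user, ()):
            candidate = trust[user] * decay
            if candidate > trust.get(endorsed, 0.0):
                trust[endorsed] = candidate
                queue.append(endorsed)
    return trust
```

Note how this captures the key property that endorsements from sockpuppets cost them nothing: a user's trust depends only on paths back to the root set.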
More specifics on what that vision might look like:
- The site gives all content (posts, top-level comments, and responses) a star rating from 0 to 5 where 0 means “spam/abuse/no-one should see”.
- The rating that content can receive is capped by the rating of the parent; the site will never rate a response higher than its parent, or a top-level comment higher than the post it replies to.
- Users control a “slider” à la Slashdot which sets the level of content that they see: set to 4, they see only 4- and 5-star content.
- By default, content from untrusted users gets two stars; this leaves one star for “unusually bad” (e.g. rude) content and one for actual spam or other abuse.
- Content ratings above 2 never go down, except to 0; they only go up. Thus, the content in these circles can grow but not shrink, to create a stable commons.
- Since a parent’s rating acts as a cap on the highest rating a child can get, when a parent’s rating goes up, this can cause a child’s rating to go up too.
- Users rate content on this 0-5 scale, including their own content; the site aggregates these votes to generate content ratings.
- Users also rate other users on the same scale, for how much they are trusted to rate content.
- There is a small set of “root” users whose user ratings are wholly trusted. Trust flows outward from these users via some attack-resistant trust metric.
- Trust in a particular user can always go down as well as up.
- Only votes from the most trusted users will suffice to bestow the highest ratings on content.
- The site may show lower-rated content to more trusted users with high sliders specifically to ask them to vote on it, for instance when a comment is receiving high ratings from users one level below them in the trust ranking. Such content will be displayed in a distinctive way to make its purpose clear.
- Votes from untrusted users never directly affect content ratings, only what is shown to more trusted users to ask for a rating. Downvoting sprees from untrusted users will thus be annoying but ineffective.
- The site may also suggest to more trusted users that they uprate or downrate particular users.
- The exact algorithms by which the site rates content, hands trust to users, or asks users for moderation would probably need plenty of tweaking; machine learning could help here. For an MVP, however, something quite simple would likely suffice to get the site off the ground.
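The rating mechanics above can be sketched as follows. This is a minimal illustration, not the real aggregation: it assumes a single `best_vote` aggregate per item where the post leaves the trust-weighted aggregation unspecified, and the `Item` class and `visible` helper are hypothetical names.

```python
class Item:
    """A post or comment carrying a 0-5 star rating (2 = untrusted default)."""

    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.rating = 2        # default for content from untrusted users
        self.best_vote = 2     # highest aggregate rating proposed so far
        if parent is not None:
            parent.children.append(self)

    def cap(self):
        # A child's rating can never exceed its parent's.
        return 5 if self.parent is None else self.parent.rating

    def rate(self, proposed):
        if self.rating == 0:
            return             # spam/abuse stays removed
        if proposed == 0:
            self.rating = 0    # the only way a rating ever falls
            return
        self.best_vote = max(self.best_vote, proposed)
        new = min(self.best_vote, self.cap())
        if new > self.rating:  # otherwise ratings only move up
            self.rating = new
            for child in self.children:
                child.rate(child.best_vote)  # a rising parent may lift capped children

def visible(items, slider):
    """Slider filtering: set to 4, a reader sees only 4- and 5-star content."""
    return [item for item in items if item.rating >= slider]
```

The re-rating of children when a parent rises implements the point above: a comment voted to 5 under a 2-star post sits at 2 until the post itself is pulled inward.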
Only people who police spam/abuse; I imagine they'd have full DB access anyway.
An excellent question that deserves a longer answer, but in brief: I think it's more directly targeted towards the goal of creating a quality commons.
Because I don't know how else to use the attention of readers who've pushed the slider high. Show them both the comment and the reply? That may not make good use of their attention. Show them the reply without the comment? That doesn't really make sense.
Note that your karma is not simply the sum or average of the scores on your posts; it depends more on how people rate you than on how they rate your posts.
Again, the abuse team really needs full DB access, or something very like it, to do its job.
The only adequate introduction I know of is Raph Levien's PhD draft, which I encourage everyone thinking about this problem to read.
When an untrusted user downvotes, a trusted user or two will end up being shown that content and asked to vote on it; it thus could waste the time of trusted users.
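This routing could be sketched as below. The function name, the `trust_floor` threshold, and the data shapes are all illustrative assumptions; the point is only that untrusted votes touch a review queue, never the rating itself.

```python
def route_vote(ratings, review_queue, item_id, voter_trust, stars,
               trust_floor=0.5):
    """Route one vote.

    Trusted votes join the rating aggregate for the item; untrusted
    votes only enqueue the item to be shown to a trusted user for a
    rating, so a downvoting spree costs trusted users some attention
    but cannot move any content's rating.
    """
    if voter_trust >= trust_floor:
        ratings.setdefault(item_id, []).append((voter_trust, stars))
    else:
        review_queue.append((item_id, stars))
```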
That would make it hard to determine which users I should rate highly. Is the idea that the system would find users who rate similarly to me and recommend them to me, and I would mostly follow those recommendations?
Slashdot shows all the comments in collapsed mode and auto-expands the comments that are higher than the filter setting...