Thanks to the reaction to this article and some conversations, I'm convinced that it's worth trying to renovate and restore LW. Eliezer, Nate, and Matt Fallshaw are all on board and have empowered me as an editor to see what we can do about reshaping LW to meet what the community currently needs. This involves a combination of technical changes and social changes, which we'll try to make transparently and non-intrusively.

Technical Changes

Changes will be tracked as issues on the LW issue tracker here. Volunteer contributions are very welcome and will be rewarded with karma; if you'd like to be paid for spending a solid block of high-priority time on this, get in touch with me. If you'd like to help, for now I recommend setting up a dev environment (as laid out here and here).

Some technical changes (links to the issues in the issue tracker):

 

A tangential note on third-party technical contributions to LW (if that's a thing you care about): the uncertainty about whether changes will be accepted, uncertainty about and lack of visibility into how that decision is made or even who makes it, and lack of a known process for making pull requests or getting feedback on ideas are incredibly anti-motivating.

--Nick_Tarleton

This is something I care about quite a bit! Ideally, the three people above would scrutinize every change and determine whether or not it's worthwhile. In practice, they're all extremely busy, and as I'm only very busy, I've been deputized to decide whether or not a change will be accepted. If you're unsure about a change, talk to me.

Trike still maintains the site, so it's still a Trike dev's call when a change makes its way to production (or whether it's too buggy to accept). We've got a turnaround-time guarantee from Matt for any time-sensitive changes (which I imagine few changes will be).

Social Changes

The rationalist community is a different beast than it was years ago, and many people have shifted away from Less Wrong. Bringing them back needs to involve more than asking nicely, or the same problems will appear again.

Epistemic rationality will remain a core focus of LessWrong, and the sorts of confusion that you find elsewhere will continue to not fly here. But the forces that push people from Main to Discussion to Open Threads to other sites need to be explicitly counteracted.

One aspect is that just like emotion is part of rationality, informality is part of the rationalist community. 

At some point I lost sight of what things were "rationality things" and what things were just "things, that I happened to want to talk about with rationalists, because those are the cool people"; and in the presence of this confusion I defaulted to categorizing everything as the latter - because it was easy; I live here now; I can go weeks without interacting with anybody who isn't at least sort of rationalist-adjacent. If I want to talk to rationalists about a thing I can just bring it up the next time I'm at a party, or when my roommates come downstairs; I don't have to write an essay and subject it to increasingly noisy judgment about whether it is in the correct section/website/universe.

--Alicorn

Another aspect is dealing with the deepening and specializing interests of the community.

A third aspect is focusing on effective communication. One of the core determinants of professional and personal success is being able to communicate challenging topics and emotions effectively with other humans. The applications for both instrumental and epistemic rationality are clear, and explicitly seeking to cultivate this skill without losing the commitment to rationality will both make LW a more pleasant place to visit and (one hopes) allow LWers to win more in their lives. But this is a long project, whose details this paragraph is too short to contain. I don't have a current anticipated date for when I'll be ready to talk more about this.

I expect to edit this post over the coming days, and as I do, I'll make comments to highlight the changes. Thanks for reading!

[anonymous] (36)

Per some recent discussions with Elo and others, I'm working on a mockup of some new Home page designs. The current one has the following issues:

  1. "About" is hard to find.
  2. The question "Why should I care?" isn't answered until several links in.
  3. There are potential good contributors who are probably being driven away from posting because the first link to the materials is a huge, intimidating list with idiosyncratic or academic titles.
  4. Who's going to look at the "Sequences" if they don't know what the "Sequences" are, already?
  5. There needs to be a "New User" section that is EASY to find from the landing page. Most of this content is already in the about page, so the about page also needs to be easy to find.
  6. The rationalist blogroll needs to be easier to find, to loop in the diaspora'd community.

I had my spouse and some friends look at it, because they fulfill a few conditions: They have never seen the site before, and they are the type of person I'd like to encourage to contribute (smart, good writers, thoughtful). Their feedback was discouraging. They all indicated confusion or intimidation. Several rationalist-adjacent...

9Vaniver
Thanks for working on this! I've looked at redesigning the home page a few times but I don't have the design chops or the access to outsiders to do a good job of it, and so I'm glad that you're attacking this important problem. Agreed. I think that there's quite a bit of value in having a place for people just finding this rationality thing, that manages to plug them into both the things the community has built up over time and the things the community is doing now. Compare, say, a booster rocket and a space elevator. The two serve similar but different purposes, and use very different mechanisms.

A for effort, but please satisfy my curiosity: what ARE the actual changes planned?

Count me as equal parts hopeful and skeptical.

I think the best part of LW was the content—articles by EY and the dude who writes at SSC being at the top of that list. Oh, and Luke wrote some cool stuff, too. There have been others, but the main consistent top posters are out as far as I can tell. If you can find good content, you will win in this LW reboot mission, even if no other changes are made.

Otherwise, I think you'll need huge changes to Make LW Great Again™. It's basically a good rationality/math-y reddit sub with an AI and EA focus. There's nothing wrong with that, but it's not terribly novel or special either.

The pure cynic in me says almost nothing has demonstrably changed in the 3+ years (maybe more?) I've been reading here (other than the decline in good content) and I've no reason to believe this effort will yield anything.

Anyway, sincere kudos to you for your efforts. I like LW and support common sense efforts to improve it.

Couple of ideas off the top of my head:

  • Come up with a 2.0 karma system. Reddit-style karma is cool and functional, but I bet the finest minds at LW could come up with something that fosters even more rational discussion. Maybe a drop-down box wit

...
7Vaniver
My impression is that Slashdot had a system like this, but it didn't work very well and wasn't copied by many other places. One thing that seems likely to happen is karma weighting. StackOverflow does a similar thing, where new users can't vote, and users with sufficient karma can. One can take this further and give higher karma weights to more established users; if, say, Eliezer upvotes something, that should probably result in more than one upvote. But this assumes that every Eliezer upvote is the same, which probably isn't correct.

An alternate idea is to talk about what part of the voter is approving or disapproving of a comment. If someone says "I, as a technical expert, think this comment is good," that conveys useful information in a way that "this comment is good because it is technical" doesn't, and it's easier to control who has access to what buttons than whether people are using those buttons correctly.

I think this is a huge part of the LW value proposition. I agree that it's broken, but there are two important constraints to keep in mind when modifying it:

  1. Don't break links to old LW articles.
  2. Make the desired level of scrutiny for a post obvious.

Doing the former is a question of how the codebase is set up (but it looks like both main and discussion articles have the /lw/___/article_name/ structure, so this should be mostly okay). The latter looks to me like it's better accomplished by something like tags and background colors / textbox borders than separate subreddits.

My current thought is that a good solution to this problem also attacks the specialization problem and uses tags to a big degree; it would be neat if someone could use LW as something like an RSS feed, where the tags for a post or link modified its karma ("show me all posts with at least 10 karma, give high-scrutiny posts an extra 5 karma, give animal-rights posts an extra 3 karma, and give math posts negative 10 karma" results in them still seeing exceptional math posts).
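As a rough illustration of that tag-weighted feed idea, here is a minimal sketch in Python. All names are invented for illustration; nothing like this exists in the codebase yet:

```python
# Sketch of a tag-weighted personal feed. Hypothetical names throughout.

def effective_karma(post_karma, post_tags, tag_weights):
    """Adjust a post's karma by the reader's per-tag preferences."""
    return post_karma + sum(tag_weights.get(tag, 0) for tag in post_tags)

def personal_feed(posts, tag_weights, threshold=10):
    """Show only posts whose adjusted karma clears the reader's threshold."""
    return [p for p in posts
            if effective_karma(p["karma"], p["tags"], tag_weights) >= threshold]

# The example from the comment: high-scrutiny +5, animal-rights +3, math -10.
weights = {"high-scrutiny": 5, "animal-rights": 3, "math": -10}
posts = [
    {"title": "An exceptional math post", "karma": 25, "tags": ["math"]},
    {"title": "A mediocre math post", "karma": 12, "tags": ["math"]},
]
print(personal_feed(posts, weights))  # only the exceptional math post survives
```

The point of the design is that nothing is siloed: a reader who downweights math still sees a math post that is exceptional enough to clear their threshold.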
3OrphanWilde
Hm. I like the elimination of the Main/Discussion dichotomy - my historic recommendations have been more division between categories, but it seems more useful separating content out by upvotes and upvote/downvote percentages, to produce three categories (high-upvote, high-variability, and undistinguished), using the high-upvote group as the splash page for new users. As for the drop-down - I'm inclined to say "No." Anything that makes upvoting/downvoting more tedious would just discourage it. Making it easy to cross-link content, and having a prominent place for a link to the author's blog/tumblr/whatever, might encourage cross-posted content, particularly high-quality cross-posted content, which could (bootstrapping problem) reward high-quality posts with increased traffic to their blogs, tumblrs, or favored causes (say, EA).
7Viliam
It could be implemented in a way that doesn't make it more tedious. For example, the first click could be upvote or downvote. The vote would be counted, and it would display a list of additional icons (different lists for upvotes and downvotes). The optional second click could choose one of those icons. But even if you skip the second step, your vote still counts; the second click can only add more "flavor". If many people click the same secondary icon, it will be displayed next to the comment karma.

So before voting, the icons would look like this (pictures instead of words, of course):

[upvote] [downvote]

After clicking on "upvote", the row would change to:

[UPVOTE] [downvote] -- [interesting] [funny] [well-researched] ...

And after clicking on "downvote", the row would change to:

[upvote] [DOWNVOTE] -- [incorrect] [offensive] ...
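To make the mechanics concrete, here is a toy model of that two-step flow; the class, method names, and display threshold are invented for illustration:

```python
# Toy model of two-step voting: the first click is a normal vote, the
# optional second click adds "flavor" that never affects the count.
from collections import Counter

UPVOTE_FLAVORS = {"interesting", "funny", "well-researched"}
DOWNVOTE_FLAVORS = {"incorrect", "offensive"}

class CommentVotes:
    def __init__(self):
        self.votes = {}    # user_id -> +1 or -1
        self.flavors = {}  # user_id -> optional flavor string

    def vote(self, user_id, direction):
        """First click: the vote counts immediately."""
        assert direction in (+1, -1)
        self.votes[user_id] = direction
        self.flavors.pop(user_id, None)  # changing your vote resets your flavor

    def add_flavor(self, user_id, flavor):
        """Optional second click: adds flavor, never changes the score."""
        direction = self.votes.get(user_id)
        if direction is None:
            return  # must vote before adding flavor
        allowed = UPVOTE_FLAVORS if direction == +1 else DOWNVOTE_FLAVORS
        if flavor in allowed:
            self.flavors[user_id] = flavor

    def display(self, min_count=3):
        """Karma, plus any flavor icon chosen by enough voters."""
        karma = sum(self.votes.values())
        popular = [f for f, n in Counter(self.flavors.values()).items()
                   if n >= min_count]
        return karma, popular
```

The key property is that `vote` commits immediately, so skipping the second click costs nothing.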
1gwillen
I think one of the biggest opportunities with this would be to give more weight to votes that come with a reason. (In fact, I'd be tempted to design such a system to silently ignore votes made with no reason -- let the user make them, display them in the interface, just don't use them for anything.)

I'm very happy about the prospect of reviving LW and maybe having some of the diaspora return!

Is there anything I can do to help with this? I love engaging in the comments, and I would love to (be able to) write posts that people appreciate, but I can't really do either for hardcore technical discussions of e.g. decision theory. I would be happy to start posting "things I'd love to discuss with rationalists", if & when a new social/thematic focus is agreed on. Examples: theories of e.g. biology or history from popular books I read; programming & tech topics; many subjects that would be at home in SSC posts (except in lower quality, obviously).

I could submit patches, but in practice I don't have the time to do so. I would happily donate some hundreds of dollars (e.g. for someone else to develop those patches), to increase the chance of reviving LW or to speed it up. I'm guessing it wouldn't be very useful unless others donate too, and you'll probably tell me to donate to MIRI instead, but I'm throwing it out there. What else can I do?

Just my personal opinion, but I think that you should write about "your" topics, e.g. if you are interested in biology and you know things that (a) can be interesting for intelligent laymen, (b) are not obvious for most people, and (c) you feel sufficiently certain that the information is correct, go ahead!

In my experience when people try to force themselves to write or talk about topics they don't deeply understand, but they feel they have some kind of "rationalist duty" to write yet another article or make yet another lecture about Kahneman or Bayes, it often ends badly.

The valuable part is your expertise, whether it is your profession or merely a hobby.

7DanArmak
My expertise, strictly speaking, is in programming and closely related technological topics; those are the only subjects where I can personally vouch for the correctness etc. of the content instead of relying on references. Everywhere else I'd be repeating what other people write, or at best providing anecdotal evidence.

But programming is an immensely wide subject. Most of the LW regulars are themselves programmers or have experience with programming (in the Israeli LW meetup there's only one 'core' / regular member who's not a programmer). I fear that most technical things I might write would be obvious to some and incomprehensible or irrelevant to others, and only useful and interesting to a minority in the middle.

On the other hand, I read a lot in subjects which I think would interest many people. I could write about those, but the epistemic status would be "I read this somewhere, here's the reference, I can't verify this for myself and I probably don't even have a good prior and neither do you unless you're already an expert in the subject."

For example, at a recent Israeli LW meetup I presented a summary of biologist Nick Lane's books on the role of mitochondria in the evolution of other properties of eukaryotes. I'm not sure the two people in the audience with formal biological education liked it as much as everyone else did; one of them said something to the effect of "even if it's not necessarily true, it's a nice story".
1Viliam
In general, having articles accessible for everyone is nice, but it isn't necessary. We already had very math-heavy LW articles. Just as well we could have anything-else-heavy articles, as long as enough people here understand the topic, so they can express their opinions on whether it makes sense or is bullshit.

You can always ask in the open thread: "Hi, I am going to write a post about X (be quite specific here), what do you think about it?" Add options like "I wouldn't understand it", "I already know that", "I would like to read it", and maybe "I would understand it, but I'm not actually interested in reading it" and let people vote. But essentially the only part you care about is how many people answer "I would like to read it".

The example with the mitochondria seems okay if the formally educated people didn't find obvious mistakes. I was rather thinking about some bad examples where either the author kept saying "uhm... I actually don't understand this... so... maybe..." and just kept confusing everyone, or where the author made obvious mistakes that made all present experts roll their eyes, but didn't care about feedback and continued to dig deeper.

If you are not an expert, what made you think that Nick Lane's books are better than random garbage? I see two options: either you trust his credentials, or you have already read enough books in biology that there is a chance you would notice something off. Preferably both.

That seems okay to me. (What I would hate to see is the approach of "I don't know anything about this, but here is this cool youtube video about how quantum thinking allows you to do magic with your mind", which is neither a reliable source nor sufficient background to distinguish signal from garbage.)
6DanArmak
Let's run with this, not because I need to decide this case, but as an example: he has reasonable credentials; he's Reader in Evolutionary Biochemistry (the exact subject of his books) at University College London. He has written several books on the subject with high Amazon rank. Most things he says aren't original to him, and he's careful to cite the origins of each idea.

But with all that, I have no real idea of how other people in his field perceive him or his theories. All the reviews I found with Google were positive but also weren't by experts in that particular field. I also don't have a sense of how much counterevidence there might be that he doesn't mention.

I have read many tens of books popularizing biology, paleontology, history of evolution, etc. And I studied undergrad biology for three semesters. But I don't have a professional understanding, and I haven't read technical literature relevant to Lane's theories. I do sometimes notice 'something being off'. But in most of these cases I don't think I'm capable of distinguishing the author being wrong from myself being wrong.

I'm afraid of spreading misinformation that only an expert would notice, because there may not be any experts in the audience (and also my reputation would take a big hit). So the question is: how (meta-)certain should I be before publishing something on LW?
5Viliam
Well, I plan to publish articles with much less evidence. I am not a scientist, and I usually don't check the references. I'd say go ahead.
4Gram_Stone
On this, Nate Soares' Confidence All the Way Up is worth reading.

Nice to see someone taking the lead! I've been looking for something to work on, and I'd be proud to help rebuild LW. I'll send you a message.

[anonymous] (11)

Please stop me if I'm getting spammy (this will be my last non-reply comment on this post), but I just found this: http://lesswrong.com/lw/929/less_wrong_mentoring_network/

while I was looking through the FAQ for things to incorporate on the home page. I think this is still a great idea. I actually have some experience with mentoring programs, and would be willing to assist with a more formal process.

I'm interested in setting up the dev environment. But I'm running into technical issues setting up the VM etc. I expect more such questions will come up. What is the right place to discuss these? Perhaps a channel on the slack? Or do we want something more permanent to help new contributors?

8Elo
We now have a Slack channel for talking about technical things.
6Vaniver
As well as the Slack channel, I'd talk about version issues with the VM here.

Would you mind putting links to the issue tracker items as you update this post, and editing/updating the post to keep track of which issues have open/no assignment?

5Vaniver
Will do.

Added some details about technical changes, but not at the detail where someone could start producing code (for anything besides linkposts). My hope is to put more atomic things as issues; if there are, say, three changes to make to the tagging system, my current thought is that each of those deserves an issue (unless they depend on each other in a deep way).

The overarching theme with tags is this: we don't want to silo content for people, because that's a recipe for people missing out on content ("Wait, there's a main section?"). But we also wan...

6DanArmak
I think the appropriate historical allusion here is to the Planet blog aggregator and its lookalikes. Not that I'm proposing this as a technical solution, necessarily, but it's a user experience I remember fondly.

I'm really excited about these changes but I do tend to lean toward optimism bias, so take it with a grain of salt.

My primary interest for the changes would be to make LW more accessible and welcoming to newbies, so lots of the changes proposed here are good. I'd especially welcome a prominent section for new people engaging with LW to guide them through the process well. Perhaps a specific project could be guiding newbies into engaging with LW well by tracking points where they might fall off?

It looks like new accounts are set up with a comment and post threshold of around -3 so that heavily downvoted posts are not displayed in the Discussion thread list and comments in threads are minimized unless you choose to expand them. However, it doesn't look like the default view when looking at the website without an account does the same. If we want to minimize exposure of heavily downvoted threads, it could be good to set up the website so that it doesn't display them on the Discussion thread list for a non-user.
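A minimal sketch of the suggested fix, assuming the -3 default mentioned above; the field names are hypothetical, and the actual codebase may structure this differently:

```python
# Apply the hide-below-threshold default that new accounts get
# to logged-out readers as well. Names are illustrative.
DEFAULT_HIDE_THRESHOLD = -3

def visible_threads(threads, user=None):
    """Filter the Discussion thread list before display."""
    threshold = user.hide_threshold if user is not None else DEFAULT_HIDE_THRESHOLD
    return [t for t in threads if t.score > threshold]
```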

2Vaniver
Excellent suggestion. Added as an issue here.

At some point, when there is a clear vision of what is going to be built, I'd suggest a crowdfunding campaign to raise funds for development.

4Vaniver
Now that the MIRI and CFAR fundraisers are over, that's a possibility. I don't know exactly what the funding needs will look like yet, and if they're small enough I can cover them. It would be good to know how valuable LW is to users, though, and any unspent funds could be passed on to CFAR, MIRI, and FHI.

It looks like the development VM is up and running, thanks to changes just merged into the main repo. Thanks brendanlong and wezm!

To celebrate, I opened four new issues. Each is a fairly small feature that will help in some fashion:

  • Allowing linkposts seems like an easy way to "bring the diaspora back" to LW while still allowing everyone to maintain their own branding and control (and, if they insist on it, distance). It might also be easy!

  • Shutting down new posts to Main is a precursor to new content organization (which may be subreddits sepa

...
-1OrphanWilde
If Eugine Nier didn't exist, we would have to invent him. What are we calling retributive downvoting, incidentally? That seems a bit of a fuzzy term, and we should probably have a solid definition as we move into being able to respond to it.
1Vaniver
The targeted harassment of one user by another user to punish disagreement; letting disagreements on one topic spill over into disagreements on all topics. That is, if someone has five terrible comments on politics and five mediocre comments on horticulture, downvoting all five politics comments could be acceptable, but downvoting all ten is troubling, especially if it's done all at once. (In general, don't hate-read.)

Another way to think about this is that we want to preserve large swings in karma as signals of community approval or disapproval, rather than individuals using long histories to magnify approval or disapproval.

It's also problematic to vote up everything someone else has written because you really like one of their recent comments, and serial vote detection algorithms also target that behavior. We typically see this as sockpuppets instead of serial upvoters, because when someone wants to abuse the karma system they want someone else's total / last thirty days to be low, and they want a particular comment's karma to be high, and having a second account upvote everything they've ever done isn't as useful for the latter.
1OrphanWilde
Taking another tack - human beings are prone to failure. Maybe the system should accommodate some degree of failure, as well, instead of punishing it.

I think one obvious thing would be caps on the maximum percent of upvotes/downvotes a given user is allowed to be responsible for, vis a vis another user, particularly over a given timeframe. Ideally, just prevent users from upvoting/downvoting further on that user's posts or their comments past the cap. This would help deal with the major failure mode of people hating one another.

Another might be, as suggested somewhere else, preventing users from downvoting responses to their own posts/comments (and maybe prevent them from upvoting responses to those responses). That should cut off a major source of grudges. (It's absurdly obvious when people do this, and they do this knowing it is obvious. It's a way of saying to somebody "I'm hurting you, and I want you to know that it's me doing it.")

A third would be - hide or disable user-level karma scores entirely. Just do away with them. It'd be painful to do away with that badge of honor for longstanding users, but maybe the emphasis should be on the quality of the content rather than the quality (or at least the duration) of the author anyways.

Sockpuppets aren't the only failure mode. A system which encourages grudge-making is its own failure.
0Vaniver
I agree with you that grudge-making should be discouraged by the system. Hmm. I think downvoting a response to one's material is typically a poor idea, but I don't yet think that case is typical enough to prevent it outright.

I am curious now about the interaction between downvoting a comment and replying to it. If Alice posts something and Bob responds to it, a bad situation from the grudge-making point of view is Alice both downvoting Bob's comment and responding to it. If it was bad enough to downvote, the theory goes, that means it is too bad to respond to. So one could force Alice to choose between downvoting and replying to the children of posts she makes, in the hopes of replacing a chain of -1 snipes with either a single -1 or a chain of discussion at 0.
0Lumifer
I have a personal policy of either replying to a comment or downvoting it, not both. The rationale is that downvoting is a message and if I'm bothering to reply, I can provide a better message and the vote is not needed. I am not terribly interested in karma, especially karma of other people. Occasionally I make exceptions to this policy, though.
-1OrphanWilde
I make rare exceptions. About the only time I do it is when I notice my opponent is doing it. (Not because I care they're doing it to me or about karma, but I regard it as a moral imperative to defect against defectors, and if they care about karma enough to try it against me, I'm going to retaliate on the grounds that it will probably hurt them as much as they hoped it would hurt me.)
-1OrphanWilde
I think it's sufficient to just prevent voting on children of your own posts/comments. The community should provide what voting feedback is necessary, and any voting you engage in on responses to your material probably isn't going to be high-quality rational voting anyways.
0Vaniver
Blocking downvoting responses I could be convinced of, but blocking upvoting responses seems like a much harder sell.
1OrphanWilde
My argument is symmetry, but the form that argument would take would be... extremely weak, once translated into words. Roughly, however... you risk defining new norms, by treating downvotes as uniquely bad as compared to upvotes. We already have an issue where neutral karma is regarded by many as close-to-failure. It would accentuate that problem, and make upvotes worth less.
1OrphanWilde
Begging your pardon, but I know the behavior you're referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That's a recipe for disaster. A concrete definition does enable "rule-lawyering", but then we can have a fuzzy area at the boundary of the rules, which is an acceptable place for fuzziness, and narrow enough that human judgment at its worst won't deviate too far from fair.

I.e., for a nonexistent rule, we could make a rule against downvoting more than ten of another user's comments in an hour, and then create a trigger that goes off at 8 or 9 (at which point maybe the user gets flagged, and sufficient flags trigger a moderator to take a look), to catch those who rule-lawyer, and another that goes off at 10 and immediately punishes the infractor (maybe with a 100 karma penalty) while still letting people know what behavior is acceptable and unacceptable.

To give a specific real-world case, I had a user who said they were downvoting every comment I wrote in a particular post, and encouraged other users to do the same, on the basis that they didn't like what I had done there, and didn't want to see anything like it ever again. (I do not want something to be done about that, to be clear; I'm using it as an example.) Would we say that's against the rules, or no? To be clear, nobody went through my history or otherwise downvoted anything that wasn't in that post - but this is the kind of situation you need explicit rules for.

Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.
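For concreteness, a sketch of that two-tier trigger. The thresholds, one-hour window, and karma penalty are from the comment; the function shape and storage are assumptions:

```python
# Two-tier rule: soft-flag a voter approaching the limit, penalize one
# who crosses it. Thresholds are from the comment; names are invented.
SOFT_FLAG_AT = 8        # flag for moderator review
HARD_LIMIT = 10         # downvotes on one user's comments within the window
WINDOW_SECONDS = 3600   # one hour
KARMA_PENALTY = 100

def check_downvotes(recent_downvote_times, now):
    """recent_downvote_times: timestamps of one voter's downvotes on one target."""
    in_window = [t for t in recent_downvote_times if now - t <= WINDOW_SECONDS]
    if len(in_window) >= HARD_LIMIT:
        return ("penalize", KARMA_PENALTY)
    if len(in_window) >= SOFT_FLAG_AT:
        return ("flag_for_moderator", None)
    return ("ok", None)
```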
8Nornagest
Speaking as someone that's done some Petty Internet Tyrant work in his time, rules-lawyering is a far worse problem than you're giving it credit for. Even a large, experienced mod staff -- which we don't have -- rarely has the time and leeway to define much of the attack surface, much less write rules to cover it; real-life legal systems only manage the same feat with the help of centuries of precedent and millions of man-hours of work, even in relatively small and well-defined domains.

The best first step is to think hard about what you're incentivizing and make sure your users want what you want them to. If that doesn't get you where you're going, explicit rules and technical fixes can save you some time in common cases, but when it comes to gray areas the only practical approach is to cover everything with some variously subdivided version of "don't be a dick" and then visibly enforce it. I have literally never seen anything else work.
4OrphanWilde
Not to insult your work as a tyrant, but you were managing the wrong problem if you were spending your time trying to write ever-more specific rules. Rough rules are good; "Don't be a dick" is perhaps too rough. You don't try to eliminate fuzzy edges; legal edge cases are fractal in nature, and you'll never finish drawing lines. You draw approximately where the lines are, without worrying about getting it exactly right, and just (metaphorically) shoot the people who jump up and down next to the line going "Not crossing, not crossing!". (Rule #1: There shall be no rule lawyering.) They're not worth your time.

For the people random-walking back and forth, exercise the same judgment as you would for "Don't be a dick", and enforce it just as visibly. (It's the visible enforcement there that matters.) The rough lines aren't there so rule lawyers know exactly what point they can push things to: they're there so the administrators can punish clear infractions without being accused of politicizing, because if the administrators need to step in, odds are there are sides forming if not formed, and a politicized punishment will only solidify those lines and fragment the community. (Eugine Nier is a great example of this.)
2Nornagest
Standing just on this side of a line you've drawn is only a problem if you have a mod staff that's way too cautious or too legalistic, which -- judging from the Eugine debacle -- may indeed be a problem that LW has. For most sites, though, that's about the least challenging problem you'll face short of a clear violation. The cases you need to watch out for are the ones that're clearly abusive but have nothing to do with any of the rules you worked out beforehand. And there are always going to be a lot of those. More of them the more and stricter your rules are (there's the incentives thing again).
1OrphanWilde
I'm aware there are ways of causing trouble that do not involve violating any rules. I can do it without even violating the "Don't be a dick" rule, personally.

I once caused a blog to explode by being politely insistent the blog author was wrong, and being perfectly logical and consistently helpful about it. I think observers were left dumbfounded by the whole thing. I still occasionally find references to the aftereffects of the event on relevant corners of the internet. I was asked to leave, is the short of it. And then the problem got infinitely worse - because nobody could say what exactly I had done. A substantial percentage of the blog's readers left and never came back. The blog author's significant other came in at some point in the mess, and I suspect their relationship ended as a result. I would guess the author in question probably had a nervous breakdown; it wouldn't be the first, if so.

You're right in that rules don't help, at all, against certain classes of people. The solution is not to do away with rules, however, but to remember they're not a complete solution.
1Nornagest
I'm not saying we should do away with rules. I'm saying that there needs to be leeway to handle cases outside of the (specific) rules, with more teeth behind it than "don't do it again". Rules are helpful. A ruleset outlines what you're concerned with, and a good one nudges users toward behaving in prosocial ways. But the thing to remember is that rules, in a blog or forum context, are there to keep honest people honest. They'll never be able to deal with serious malice on their own, not without spending far more effort on writing and adjudicating them than you'll ever be able to spend, and in the worst cases they can even be used against you.
2Vaniver
My impression is that the primary benefit of a concrete definition is easy communication; if my concrete definition aligns with your concrete definition, then we can both be sure that we know, the other person knows, and both of those pieces of information are mutually known. So the worry here is if a third person comes in and we need to explain the 'no vote manipulation' rule to them. I am not as impressed with algorithmic detection systems because of the ease of evading them with algorithms, especially if the mechanics of any system will be available on Github.

I remember that case, and I would put that in the "downvoting five terrible politics comments" category, since it wasn't disagreement on that topic spilling over to other topics.

My current plan is to introduce karma weights, where we can easily adjust how much an account's votes matter, and zero out the votes of any account that engages in vote manipulation. If someone makes good comments but votes irresponsibly, there's no need to penalize their comments or their overall account standing when we can just remove the power they're not wielding well. (This also makes it fairly easy to fix any moderator mistakes, since disenfranchised accounts will still have their votes recorded, just not counted.)
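A minimal sketch of such a karma-weight scheme; the schema is hypothetical, but it shows the two properties described above: every vote is recorded, and only weighted votes are counted:

```python
# Weighted vote counting: a per-account weight (which moderators can set
# to zero) decides how much each recorded vote counts. Names are invented.

def comment_score(votes, weight_of):
    """votes: list of (user_id, +1 or -1); weight_of: user_id -> float."""
    return sum(direction * weight_of(user_id) for user_id, direction in votes)

# A disenfranchised account keeps its recorded votes but contributes nothing:
weights = {"alice": 1.0, "eliezer": 5.0, "sockpuppet": 0.0}
votes = [("alice", +1), ("eliezer", +1), ("sockpuppet", -1)]
print(comment_score(votes, lambda u: weights.get(u, 1.0)))  # -> 6.0
```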
5OrphanWilde
All security ultimately relies on some kind of obscurity, this is true. But the first pass should deal with -dumb- evil. Smart evil is its own set of problems.

You would. Somebody else would put it somewhere else. You don't have a common definition.

Literally no matter what moderation decision is made in a high-enough profile case like that - somebody is going to be left unsatisfied that it was politics that decided the case instead of rules.
0Nornagest
The cheapest technical fix would probably be to prohibit voting on a comment after some time has passed, like some subreddits do. This would prevent karma gain from "interest" on old comments, but that probably wouldn't be too big a deal. More importantly, though, it wouldn't prevent ongoing retributive downvoting, which Eugine did (sometimes? I was never targeted) engage in -- only big one-time karma moves. If we're looking for first steps, though, this is a place to start.
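The check itself would be tiny; a sketch, with the cutoff length picked arbitrarily since the comment doesn't specify one:

```python
from datetime import datetime, timedelta

VOTING_WINDOW = timedelta(days=180)  # assumed cutoff, not from the comment

def can_vote_on(comment_created_at, now=None):
    """Disallow votes on comments older than the voting window."""
    now = now or datetime.utcnow()
    return now - comment_created_at <= VOTING_WINDOW
```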
0Lumifer
If you want to reward having a long history of comments, you could prohibit only downvoting of old comments. I doubt you could algorithmically distinguish between downvoting a horticulture post because of disagreements about horticulture and downvoting a horticulture post because of disagreements about some other topic. But I suspect voting rate limiters should keep the problem in check.
1Lumifer
What bad guys do. There is an occasionally quoted heuristic: "Vote up what you'd like to see more of; vote down what you'd like to see less of". When good guys do that it's called karma system working as intended. When bad guys do that it's called abuse of the karma system.
1gjm
This is simply untrue. What gets called "retributive voting" is when you vote something down not because of its own (de)merits but because of its author. That's bad for LW no matter who does it. Someone who does it much is (I suggest) ipso facto not a good guy any more.

I have never seen anyone defending such behaviour as "karma system working as intended", so I'm not seeing the hypocrisy you complain of. Can you point to a couple of examples?

(It's also an abuse of the karma system if you systematically vote someone's comments up because you approve of that person. I've no idea whether that's a thing that happens -- aside from the case where the voter and the beneficiary are really the same person, which is an abuse of the system for other reasons -- because it's harder to notice: most people's karma, most of the time, goes up rather than down, and the main way retributive downvoting gets spotted is when someone notices that they've suddenly lost a lot of karma.)
3OrphanWilde
Actually, let's take this in another direction: Suppose the moderator(s) (is Nancy the only one left?) are out on vacation, and Eugine shows up again, and has already farmed enough karma to begin downvoting. Would it be a Good Guy act, or a Bad Guy act, to downvote all of his karma-farming comments?
0gjm
I'm not keen on this sort of binary classification. But: I don't think I would do it in most versions of this scenario, though I dare say some other reasonable people would. What's interesting to me about your choice of scenario is that it's one in which an "identity-based" sanction has already been applied: Eugine, specifically, is not supposed to be active here any more. It would not be so very surprising if that provided an exception to the general principle that voting should be content-based rather than identity-based.
3OrphanWilde
That's the modern Less Wrongian perspective. Prior to Eugine's ban, there was, in fact, some general support for the idea of getting rid of persistently bad users via user-based downvotes via the karma system. The Overton Window was shifted by Eugine's ban (and his subsequent and repeated reappearances, complete with the same behaviors).

You're either newer than I thought, or didn't pay attention. There was a -lot- of defense of this during Eugine's ban by people worried that Less Wrong would be destroyed by bad users. (They by and large supported Eugine's ban, as they objected to the automation of it, and also I think didn't want to die on the hill of defending an extremely unpopular figure.)
4gjm
My memory is very far from perfect, but I don't remember there ever being much support for downvoting "bad" users into oblivion. Do you have a couple of links, perhaps? In any case, what Lumifer wrote was "When good guys do that it's called karma system working as intended" and not "A few years ago, some people on LW were in favour of good guys doing that", which seems to me a very different proposition indeed.

I'm just looking through the comments to the announcement of Eugine's ban. There are a lot of comments. So far, the only instance I can find of someone defending mass-downvoting in some cases is ... Lumifer.

OK, there are a couple more: wedrifid suggesting it might be an appropriate treatment for trollish sockpuppets and MugaSofer not actually defending mass-downvoting but saying that some (unspecified) people think it is sometimes justified.

And, having now finished (though I confess I skimmed some subthreads that didn't seem likely to contain opinions on this point), that's all I found. So we have Lumifer defending his right (in principle) to mass-downvote someone he thinks is a hopeless case; wedrifid suggesting that mass-downvoting might be an appropriate sanction for trollish sockpuppets and the like; and MugaSofer saying that some people think mass-downvoting is sometimes OK; and that's it. That's in a thread of hundreds of comments, a large fraction of which either explicitly say what an ugly thing mass-downvoting is or implicitly agree with the general sentiment.

That doesn't look to me like "a -lot- of defense". Maybe I looked in the wrong place. Again, do you have a link or two?
3OrphanWilde
I cannot provide links, unfortunately, no, because most of it happened in background threads, although MugaSofer's comment can be taken as confirmation that this was, in fact, being talked about. This was a... semi-popular topic on how Less Wrong could be improved around that time, when I happened to be unusually active.

I left in disgust right before Eugine's ban, IIRC, over the fact that my most upvoted comments were what I considered basic-level social sanity, while the stuff I wrote that I expected to be taken seriously tended to get downvoted. (Later I realized that Less Wrong is just incredibly socially inept, but relatively skilled in the areas I expected to be taken seriously, so comparative advantage went overwhelmingly in favor of my social skills, which happened to be considerably better than I had thought at the time.)

Eugine didn't invent the idea of mass-downvoting; he merely implemented what was being discussed.
0gjm
It seems that all we have here is your recollection of how much support the idea had ("semi-popular" or "a -lot-"; I'm not sure what the intersection of those two is) versus mine (scarcely any). I'm not sure we can make much further progress on that basis, but it really doesn't matter because the actual question at issue was about opinions now; do you think there is currently any support to speak of on LW for constructive mass-downvoting?
0OrphanWilde
Yeah, that's what I'm afraid of.

I think it would be good to have a limit on posts per day for people with karma below 10 (votes cast on their posts).
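For concreteness, a sketch of such a limit; the karma floor is from the comment, while the daily cap is an assumed value:

```python
# Rate-limit posting for low-karma accounts. KARMA_FLOOR is from the
# comment above; MAX_POSTS_PER_DAY is an assumed value.
KARMA_FLOOR = 10
MAX_POSTS_PER_DAY = 2

def may_post(user_karma, posts_in_last_24h):
    if user_karma >= KARMA_FLOOR:
        return True
    return posts_in_last_24h < MAX_POSTS_PER_DAY
```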

2Nornagest
I'm sympathetic, but I do note that this would further incentivize retributive downvoting.
0Vaniver
I think we need to fix the retributive downvoting problem anyway, and so a feature that depends on it being fixed is alright.
1Gram_Stone
IAWYC, although one might consider waiting until the problem is actually solved.
0Vaniver
Agreed. That's why I didn't add it to the issue list yet.
[anonymous] (1)

I'd like notifications of big changes in karma on particular comments

gjm (0)

What's the current state of play on this? Is anything still moving?

0OrphanWilde
I have a little more free time coming up (my weekly tabletop RPG session is more-or-less dead, and I was the DM, so I don't have to do prepwork or actually run it anymore; a couple of shorter-time projects have intervened, unfortunately), so I should at some point in the near future be able to put some development time in.

Looks interesting. I may try writing an article or two to help the environment along.

Personally, I'm of the opinion that the wrong change is better than no change at all, because at least things are moving again. Excited to see what happens!

On a tangential note: it would be cute for LW to acquire a collection of resident chat bots, preferably ones which could be dissected and rewired by all and sundry. Erecting defences against chat bots run amok would also be enlightening :-)

6gjm
Cute, but I think probably a really terrible idea.
0Lumifer
The collection would come with a per-user flag which, when set to "Go Away", will make the existence of chatbots and all their doings entirely invisible to the user. Or you can sandbox the chatbots in a subreddit of their own. LW needs some fun and games -- chatbots would provide a good playground.
0Gunslinger
Maybe if it was a particularly philosophical one. (typical 'fun in moderation' comment here)
5Richard_Kennaway
Watson can already philosophize at you from TED talks. Someone needs to develop a chat bot based on it, and have it learn from the Sequences.

Actually, that could be huge. Rationality blogs generated by bots! Self-improvement blogs generated by bots! Gosh-wow science writing generated by bots! At present, most bot-written books are pretty obviously junk, but instead of going for volume and long tails, you could hire human editors to make the words read more as if they were originated by a human being. They'd have to have a good command of English, though, so the stereotypical outsourcing to Bangalore wouldn't be good enough. Ideally, you'd want people who were not just native speakers, but native to American culture, smart, familiar with the general area of the ideas, and good with words. Existing bloggers, that is.

Offer this to them as a research tool. It would supply a blogger with a stream of article outlines and the blogger would polish them up. Students with essays to write could use it as well, and since every essay would be different, you wouldn't be able to detect it wasn't the student's work by googling phrases from it.

This is such a technologically good idea that it must happen within a few years.
3Lumifer
LOL. Wake up and smell the tea :-) People who want to push advertising into your eyeballs now routinely construct on-demand (as in, in response to a Google query) websites/blogs/etc. just so that you'd look at them and they get paid for ad impressions. See e.g. recent Yvain: Now, you say you want to turn this to the light side..?
1Val
There is an interesting article about how and why people are susceptible to such things. It is also an interesting experiment in how many times one can include the word "bullshit" in a serious, peer-reviewed article.
1Richard_Kennaway
I'm just saying it's so technologically cool, someone will do it as soon as it's possible. Whether it would actually be good in the larger scheme of things is quite another matter.

I can see an arms race developing between drones rewriting bot-written copy and exposers of the same, together with scandals of well-known star bloggers discovered to be using mechanical assistance from time to time. There would be a furious debate over whether using a bot is actually a legitimate form of writing. All very much like drugs and sport.

Bot-assisted writing may make the traditional essay useless as a way of assessing students, perhaps to be replaced by oral exams in a Faraday cage. On Facebook, how will you know whether your friends' witticisms are their own work, especially the ones you've never been face to face with?
1Lumifer
Ahem. ELIZA, the chat bot, was made in the mid-1960s. And...:
0Richard_Kennaway
I'm aware of ELIZA, and of Yvain's post. ELIZA's very shallow, and the interactive setting gives it an easier job than coming up with 1000 words on "why to have goals" or "5 ways to be more productive". I do wonder whether some of the clickbait photo galleries are mechanically generated.
0V_V
Here.
0Lumifer
I guess I just think of chatbots as "old tech" and not as "new and cool" :-/ ELIZA, as you mention, is extremely simple, and still was able to tap into emotional responses. Nowadays we have Siri and Cortana, the Japanese virtual girlfriends, etc., etc.

I am also not sure that the ability to generate coherent text (as opposed to generating original, meaningful, useful content) is that valuable nowadays. The intertubes are already clogged with mediocre-to-awful blog posts -- there are enough humans for that.
-2Old_Gold
Are these things going to fool any actual human, or just Google's algorithms, i.e., that people see it in Google's searches, possibly click, but don't look at it any closer?
1Lumifer
Yes, I think so, at least for a while. These actual humans will probably be old, not terribly smart, uncomfortable with that weird world of internet, somewhat gullible or at least prone to putting a bit too much trust into printed word...
0Vaniver
So, early on, people were excited about machine translation--yeah, it wasn't great, but you could just have human translators start from the machine translation and fix the mess. The human translators hated it, because it turned engaging intellectual work into painful copyediting. I think a similar thing will be true for article writers.
2Richard_Kennaway
The talented ones, yes, but there will be a lot of temptation for the also-rans. You've got a blogging deadline and nothing is coming together, why not fire up the bot and get topical article ideas? "It's just supplying facts and links, and the way it weaves them into a coherent structure, well I could have done that, of course I could, but why keep a dog and bark myself? The real creative work is in the writing." That's how I see the slippery slope starting, into the Faustian pact.
9Vaniver
I... have never heard this idiom before, and now want to use it all the time.
0Gram_Stone
The pitch generator and story generator on TVTropes are sort of like this, although far less sophisticated.

I'm disappointed that the details listed about social changes are so vague.

I would love to see some kind of Less Wrong council that meets regularly and discusses future directions. One problem at the moment is the lack of transparency about decisions - we generally don't know if an idea has even been considered, or why it was rejected.

> I'm disappointed that the details listed about social changes are so vague.

I committed somewhere to have a post on this out on Tuesday, so I went with what I had ready at the time. Details will follow.

> I would love to see some kind of Less Wrong council that meets regularly and discusses future directions. One problem at the moment is the lack of transparency about decisions - we generally don't know if an idea has even been considered, or why it was rejected.

What sort of medium do you think is best for this? A Slack chat? A regular thread here?

For almost everything, I'm happy with increased transparency. Whether we should move towards a more StackOverflow-like karma model where voting is an earned privilege is an example of something where an open discussion would be welcome, so everyone can get a sense of the pro and con arguments.

But I can't guarantee transparency about all decisions, because there are some things that are much easier to discuss in private. For example, consider hg00's comment calling for the bans of VoiceOfRa (who was banned) and Lumifer (who isn't banned). It seems to me that the number of cases where a ban decision will be swayed by public discussion is nowhere near large enough to justify the costs of public discussions of ban decisions.

2Lumifer
The first, ahem, detail that needs clarification is the goals. The "social changes" aim to change LW in which direction? "Better" is not a good answer. What do you want to grow, what do you want to kill, what do you want to transplant? By which metrics will you decide whether you're getting closer to your goals?

Not chat -- you want something slower and more deliberate. Maybe a set of forum threads, one per issue.
2Vaniver
I don't think there's a good short answer to this, because if I try to point at individual shifts and say "shifts like those shifts" I need to give many examples to give a clear picture, and if I try to point at principles guiding the shifts and say "shifts that follow those principles" I need to give many details about the principles to give a clear picture. So I'll give a long answer, but that'll take time. (Maybe in the course of writing a long answer I'll discover the short version.)
0casebash
I was imagining some people would be elected/appointed, and that they'd Skype and then write up their decisions.
0Lumifer
That doesn't look terribly transparent to me.
0casebash
It'd still be a massive improvement on what we have now, and I assume they'd discuss interesting submissions.
0Vaniver
It looks to me like there's a progression from "no input or explanation" to "explanation but no input" to "input and explanation." I'd say something is still transparent in the second case, but oftentimes what people are really interested in is input. (Me talking at you more isn't as helpful as me listening to you more!)
3Lumifer
Well, assuming the existence of some High Council, the transparency ranking would go like this:

  1. You see a witch being burned at the stake. You assume the High Council ordered this.
  2. The High Council proclaims "Let the witch be burned!"
  3. The High Council proclaims "Let the witch be burned for she consorted with Clippy!"
  4. The High Council proclaims "Let the witch be burned for reasons A, B, and C, which we deem more important than the mitigating circumstances X and Y."
  5. The High Council proclaims "Let the witch be burned, and, by the way, here are the minutes of the meeting where we decided she is to be burned."
  6. The High Council proclaims "Behold the witch! We will now debate in public whether she ought to be burned. What makes you think she is a witch?" A LessWronger: "She turned me into a newt!" The public: "A newt?!" The LessWronger: "I got better."
0MalcolmOcean
Regarding "Less Wrong council" and StackOverflow... What about meta.lesswrong.com? :P a LW to talk about LW? Or are we already meta enough...
5username2
More than enough. Meta is already more popular than everything else.
2gwillen
Even if we are already meta enough, I think a meta subreddit is a great idea. Giving a particular topic a specialized and dedicated location does serve to promote that topic, but it can also serve to remove it from more general locations, especially if that is requested or enforced, which can be a feature. (For example, discussion of stackoverflow is not allowed on stackoverflow; it is relegated to meta where people who don't want it can ignore it.)
[anonymous] (10)

Disclaimer: I'm not a voice of authority, I'm just participating in the conversation and helping a little.

Social change is a lot "fuzzier" than technical change. Not only that, it requires looking at what makes a community successful, which Less Wrong communities ARE successful, and how we can continue to use this site to generate more successful communities. That's a time commitment.

Sometimes, technical changes ARE social changes. It's not the hill I'm dying on, by any means, but I really do think that changes to the voting structure and the home page will help people participate. A section of the site that is for "rationalists talking to rationalists" rather than "rationalists talking ABOUT rationality" may also be helpful.

-2[anonymous]
Count me in as disappointed about the social changes; I expected more concrete ones. Perhaps a date or time period as a Schelling point when we'd expect people to be more active or put more effort toward revitalizing LW.