Comment author: John_Maxwell_IV 14 June 2016 09:38:32AM *  2 points

He tried to move people to /r/SlateStarCodex, but that didn't work. We'd want to understand why. (Some hypotheses: it wasn't actually on SSC, where people go directly; posts there don't pop up in their RSS readers; people have an aversion to comment systems with voting; people have an aversion to reddit specifically.)

I think a big explanation is that /r/SlateStarCodex was not advertised sufficiently, and people never developed the habit of visiting there. I imagine that if Scott chose to highlight great comments or self posts from /r/SlateStarCodex each week, the subreddit would grow faster, for instance.

Online communities are Schelling points. People want to be readers in the community where all the writers are, and vice versa. Force of habit keeps people visiting the same places, but if they don't feel reinforced, through interesting content to read or recognition of their writing, they're liable to go elsewhere. The most likely explanation for why any online community fails, including /r/RationalistDiaspora and /r/SlateStarCodex, is that it never becomes a Schelling point. My explanation for why LW has lost traffic: a feedback loop in which people weren't reinforced for writing, so LW gradually lost its strength as a Schelling point.

Edit: also, subreddits are better suited to link sharing than original posts IMO.

I'm not sure that "writes good posts" and "would make a good moderator" are sufficiently correlated for this to work. A lot of people like Eliezer's writing but dislike his approach to moderation.

Acknowledged, but as long as the correlation is above 0, I suspect it's a better system than what reddit has, where ability to vote is based on possession of a warm body.

It also creates weird incentives, like: "I liked this post that was highly critical of our community, but I don't want the author to be a mod".

Concrete example: Holden Karnofsky's critical post was liked by many people. Holden has posted other stuff too, and his karma is 3689. That would give him about 1% of Eliezer's influence, 4% of Yvain's influence, or 39% of my influence. This doesn't sound upsetting to me and I doubt it would upset many others. If Holden was able to, say, collect mucho karma by writing highly upvoted rebuttals of every individual sequence post, then maybe he should be the new LW moderator-in-chief.

But even if you're sure this is a problem, it'd be simple to add another upvote option that increases visibility without bestowing karma. I deliberately kept my proposal simple because I didn't want to take away the fun of hashing out details from other people :) I'm in favor of giving Scott Alexander "god status" (ability to edit the karma for every person and post) until all the incentive details are worked out, and maybe even after that. In the extreme, the system I describe is simply a tool to lighten Scott's moderation load.

(This is the problem that Scott Aa points to of "this system can only improve on ordinary democracy if the trust network has some other purpose" - I worry that voting-for-comment-scores isn't a sufficiently strong purpose to outweigh voting-for-moderators.)

So I guess the worry here is: if I want a particular user to have more influence, I'd vote up a post of theirs that I didn't think was very good in order to give them that influence? That's a problem that would need to be dealt with. Some quick thoughts on solutions: anonymize posts before they're voted on, or give Scott the ability to "punish" everyone who voted up a particularly bad post by lessening their moderation abilities.

Another system to consider would be to do it based on the way people administer votes, not the way they remove them. If your votes tend to correlate with others', they have more weight in future. If posts you flag tend to get removed, your flags count for more. (I'm not convinced that this works either.)

I'm not convinced it works either.

A related idea that might work better: Make it so downvotes work to decrease the karma score of everyone who upvoted a particular thing. This incentivizes upvoting things that people won't find upsetting, which works against the sort of controversy the rest of the internet incentivizes. But there's no Keynesian beauty contest because you can never gain points through upvoting, only lose them. This also creates the possibility that there will be a cost associated with upvoting a thing, which makes karma a bit more like currency (not necessarily a bad thing).
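A minimal sketch of this scheme, in Python (the class names and the 0.1 penalty constant are illustrative assumptions, not anything from an existing forum codebase):

```python
# Karma scheme where a downvote on a post also costs karma to everyone
# who upvoted it. Upvoting can never gain you points, so there is no
# Keynesian beauty contest and no incentive to chase controversy.

class Post:
    def __init__(self, author):
        self.author = author
        self.upvoters = []   # users who upvoted, in order
        self.score = 0

class Forum:
    UPVOTER_PENALTY = 0.1    # karma lost by each upvoter per downvote

    def __init__(self):
        self.karma = {}      # user -> karma

    def upvote(self, user, post):
        post.score += 1
        post.upvoters.append(user)
        # Note: the upvoter's own karma is unchanged. Upvoting only
        # ever exposes you to future downside.

    def downvote(self, user, post):
        post.score -= 1
        self.karma[post.author] = self.karma.get(post.author, 0) - 1
        # The distinctive rule: every prior upvoter shares the loss.
        for voter in post.upvoters:
            self.karma[voter] = self.karma.get(voter, 0) - self.UPVOTER_PENALTY
```

Here a single downvote costs each prior upvoter a tenth of a point, so an upvote amounts to a small wager on how the post will be received.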

Comment author: John_Maxwell_IV 28 June 2016 01:47:02AM *  0 points

The Less Wrong diaspora demonstrates that the toughest competition for online forums may be individual personal blogs. By writing on your personal blog, you build up your own status & online presence. To be more competitive with personal blogs, it might make sense to give high-karma users of a hypothetical SSC forum the ability to upvote their own posts multiple times, in addition to the posts of others. That way, if I have a solid history of making quality contributions, I'd have the ability to upvote a new post of mine multiple times if it was an idea I really wanted to see get out there, in the same way a person with a widely read personal blog can really get an idea out there. The mechanism I outlined above (downvotes taking away karma from the people who upvoted a thing) could prevent abuse of self-upvoting: if I self-upvote my own post massively, but it turns out to be lousy, other people will downvote it, and I'll lose some of the karma that gave me the ability to self-upvote massively.
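A sketch of how karma-weighted self-upvoting might look (the logarithmic weighting formula and all names here are invented for illustration; the actual scaling would be one of the details to hash out):

```python
import math

# Hypothetical rule: a user may give their own post an initial boost
# that grows with the log of their accumulated karma. If the post is
# then downvoted, a clawback rule like the one above takes karma back
# from the self-upvoter, which is what keeps massive self-boosting honest.

def self_upvote_weight(karma):
    """How many ordinary upvotes a user's self-upvote counts for."""
    return 1 + int(math.log10(max(karma, 1)))

def self_upvote(karma_table, user, post_scores, post_id):
    """Apply a user's weighted self-upvote to their own post."""
    boost = self_upvote_weight(karma_table.get(user, 0))
    post_scores[post_id] = post_scores.get(post_id, 0) + boost
    return boost
```

Under this particular (arbitrary) formula, the 3689-karma user mentioned elsewhere in the thread would get a self-upvote worth 4 ordinary votes, while a new user's would be worth 1.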

Comment author: gjm 19 June 2016 12:47:37AM -1 points

I have no special insider knowledge. My impression, which I will gladly have corrected by those who know more, is that

  • MIRI (formerly SIAI) was founded in Berkeley because that's where Eliezer was.
  • Most of the rationalists in Berkeley are not MIRI employees.
  • Most of the rationalists in Berkeley did not move to Berkeley because of MIRI or Eliezer or other rationalists.

But, again, this is vague impressions and guesswork and assumptions rather than actual knowledge. So let's assume for a moment that I'm entirely wrong and the Berkeley rationalist community is a consequence of MIRI. MIRI was founded about 16 years ago, and I think it's only in the last few years that the Berkeley rationalist community has been a big thing. That would suggest that the "build a rationalist community by starting an institution there" strategy takes 10 years or so to work.

If so, then good places to consider might be places that already have kinda-MIRI-like institutions. Perhaps Oxford (home of the Future of Humanity Institute, and also of Giving What We Can if you're the EA sort of rationalist) and to a lesser extent Cambridge (home of the Centre for the Study of Existential Risk). I think the FHI and the CSER are the nearest non-MIRI things to MIRI.

Comment author: John_Maxwell_IV 26 June 2016 11:28:43AM *  1 point

I believe Eliezer was one of the last SIAI employees to move to Berkeley. My guess is SIAI originally moved from Santa Clara to Berkeley because some SIAI employees had rationalist community friends in Berkeley, and when visiting those friends, they noticed they liked Berkeley better than Santa Clara. (I've lived in both places--IMO Santa Clara is dystopian and suburban, but Berkeley is lively and interesting.)

I don't believe there was significant community buildup in Santa Clara before the move. So maybe the takeaway is to make your HQ a place where people want to live for reasons other than just being part of your community?

Comment author: James_Miller 19 June 2016 04:19:39PM *  4 points

I know that negative prescription drug interactions are a big deal. Should I be worried about negative interactions among legal supplements I take, like CoQ10, vitamin D, glucosamine, curcumin, desiccated liver, Bacopa monnieri, or NAC?

Comment author: John_Maxwell_IV 26 June 2016 10:39:35AM 2 points

One thing I do sometimes is search for the names of two different supplements on Amazon and see if I can find a commercial stack where they appear together, then check for one-star reviews of the stack that describe a negative interaction. (This works best for stacking supplements that have similar purposes; e.g. if you're taking two sleep aid supplements at once, it's plausible that they both appear in a commercial stack that someone put together.)

Comment author: John_Maxwell_IV 26 June 2016 09:18:16AM 0 points

Comment author: John_Maxwell_IV 18 June 2016 08:46:41AM 4 points

Dive In by Nate Soares

Comment author: philh 13 June 2016 05:08:15PM 5 points

(I've mostly only skimmed.)

It can be hard to find good content in the diaspora. Possible solution: Weekly "diaspora roundup" posts to Less Wrong. I'm too busy to do this, but anyone else is more than welcome to (assuming both people reading LW and people in the diaspora want it).

This is what /r/RationalistDiaspora was intended to do. It never really got traction, and is basically dead now, but it still strikes me as a good solution. If that's not going to revive, though, I agree that a weekly thread on LW is worth trying. By default, I'll make one later this week. (I'm not currently sure I'll have anything to post in it myself; I'll be asking people to post links in the comments.)

Go tell Scott Alexander you'll build an online forum to his specification, with SSC community feedback, to provide a better solution for his overflowing open threads.

He tried to move people to /r/SlateStarCodex, but that didn't work. We'd want to understand why. (Some hypotheses: it wasn't actually on SSC, where people go directly; posts there don't pop up in their RSS readers; people have an aversion to comment systems with voting; people have an aversion to reddit specifically.)

As Scott features more and more posts, he gains a moderation team full of people who wrote posts that were good enough to feature.

I'm not sure that "writes good posts" and "would make a good moderator" are sufficiently correlated for this to work. A lot of people like Eliezer's writing but dislike his approach to moderation.

(On the other hand: maybe, if we want Eliezers to stick around, we need them to be able to shape the community? Even if that means upsetting people who don't write much.)

It also creates weird incentives, like: "I liked this post that was highly critical of our community, but I don't want the author to be a mod". (This is the problem that Scott Aa points to of "this system can only improve on ordinary democracy if the trust network has some other purpose" - I worry that voting-for-comment-scores isn't a sufficiently strong purpose to outweigh voting-for-moderators.)

Another system to consider would be to do it based on the way people administer votes, not the way they remove them. If your votes tend to correlate with others', they have more weight in future. If posts you flag tend to get removed, your flags count for more. (I'm not convinced that this works either.)
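A rough sketch of the flag half of this scheme (the Laplace-smoothed accuracy weight is an invented detail, not how reddit or LW actually work):

```python
# Each user's flag carries a weight that grows when their past flags
# agreed with moderator decisions and shrinks when they did not.
# A post would be removed when total flag weight crosses some threshold.

class FlagTracker:
    def __init__(self):
        self.hits = {}    # user -> flags later upheld by moderators
        self.total = {}   # user -> total flags made

    def flag_weight(self, user):
        # Laplace-smoothed accuracy: new users start at weight 0.5,
        # rather than 0 or 1, so one flag can't dominate either way.
        return (self.hits.get(user, 0) + 1) / (self.total.get(user, 0) + 2)

    def record_flag(self, user, upheld):
        self.total[user] = self.total.get(user, 0) + 1
        if upheld:
            self.hits[user] = self.hits.get(user, 0) + 1
```

A new user's flags start at weight 0.5 and drift toward their observed accuracy as moderators uphold or reject their flags.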

Comment author: Gram_Stone 12 June 2016 10:22:07PM 8 points

Well, this is discouraging to someone who had the opposite reaction to ingres' recent survey analysis. I heard, "Try to solve the object-level problem and create content that meets the desiderata that are implicit in the survey results."

I was going to start writing about feelings-as-information theory; Kaj Sotala introduced moods as information in Avoid misinterpreting your emotions, lukeprog mentions it briefly in When Intuitions Are Useful (which Wei Dai thought might be relevant to metaphilosophy), and gwern mentions related work on processing fluency here. There are simple but interesting twists on classic, already-simple heuristics and biases experiments that everyone here's familiar with, debiasing implications, stuff about aesthetics, stuff on how we switch between Type 1 and Type 2 processing, which is relevant to the stuff lukeprog was getting into with Project guide: How IQ predicts metacognition and philosophical success, and what Kaj Sotala was getting into with his summaries of Stanovich's What Intelligence Tests Miss.

I was just about to write another post about how thinking of too many alternative outcomes to historical events can actually make hindsight bias worse, with explanations of the experimental evidence, like my most recent post. I don't know how to do more for the audience than do things like warn them about how debiasing hindsight can backfire.

And there's other stuff I could think to write about after all of that.

There are quite a number of people coordinating to fulfill the goal of revitalizing LW, and I wonder if something like this couldn't have waited. I mean, everyone just told everyone exactly what everyone's doing wrong.

Comment author: John_Maxwell_IV 13 June 2016 07:10:35AM *  6 points

I'm sorry for discouraging you. I think writing the posts you described is a great idea. I hope that if you write them, people who've read this will be more inclined to upvote them if they like them, given increased awareness of the incentives problem I described.

Another option is to pursue multiple angles of attack in parallel. My angle requires a programmer or two to volunteer their time (may as well contact Scott now if you're interested!); your angle requires people who have ideas to write them up. My guess is that these requirements don't funge against each other very much. Plus, even if the community ultimately decides to go elsewhere, I'm sure your ideas will be welcomed in that new place if you just post whatever you were going to post to LW there, and that will be a valuable kickstart.

I also agree that having people repeatedly say "LW is dying" can easily become a self-fulfilling prophecy. Even if LW is no longer a check-once-a-day kind of place, it can still be a perfectly fine check-once-a-week kind of place. I probably should have been more careful in my phrasing.

Comment author: root 12 June 2016 02:10:55PM 0 points

Three questions:

  1. Can anyone who has been a user for a significant amount of time give links to anything that wasn't deemed worthy of the sequences but is a worthy read? I have no idea when the sequences were collected, but if LW was really great in the past, there would've been a bunch of other high-quality posts that are easily missed. This could also double as proof that LW was, indeed, as great as advertised.

  2. What do other places have that LW doesn't? If LW is dedicated to human rationality, is it truly doing that?

  3. Am I a complete dumbass for typing this? In hindsight, it doesn't take a special variation of Godwin's law to think 'someone probably posted a similar question before'.

Comment author: John_Maxwell_IV 12 June 2016 08:38:32PM 0 points

Here is a list of lists of archive posts.

Comment author: ingres 12 June 2016 07:38:13PM *  2 points

I am honored that my survey writeup produced this level of quality discussion and endorse this post.

(Though not necessarily its proposed upvote scheme, sounds kind of flaky to me. I'm personally very skeptical of upvotes and community curation as cure-alls.)

Comment author: John_Maxwell_IV 12 June 2016 08:23:40PM 0 points

I am honored that my survey writeup produced this level of quality discussion and endorse this post.

Thanks!

(Though not necessarily its proposed upvote scheme, sounds kind of flaky to me. I'm personally very skeptical of upvotes and community curation as cure-alls.)

What's your favored solution?

Comment author: John_Maxwell_IV 12 June 2016 07:40:58AM *  5 points

Nice work! Seriously, I downloaded your survey analysis code and took a quick look--you deserve a huge thank you from all of us for the amount of effort you put in to this project. Please don't anyone forget to upvote this.

I started writing some commentary, but it got too long so I made a discussion post here.
