
ZacHirschman comments on Less Wrong lacks direction - Less Wrong Discussion

9 Post author: casebash 25 May 2015 02:53PM



Comment author: estimator 25 May 2015 03:31:12PM 6 points [-]

Agreed that LW is in a kind of stagnation. However, I think that someone simply writing a series of high-quality posts would suffice to fix it. Right now, the amount of discussion in the comments is quite good; the problem is that there aren't many interesting posts.

> If a group said that they thought A was an important issue and the solution was X, most members would pay more attention than if a random individual said it. No one would have to listen to anything they say, but I imagine that many would choose to. Furthermore, if the exec were all actively involved in the projects, I imagine they'd be able to complete some themselves, especially if they chose smaller ones.

This isn't entirely a good thing; many people have noticed that LW is somewhat of an echo chamber for Eliezer's ideas. If anything, we should encourage high-quality opinions that differ from the LW mainstream.

Comment author: ZacHirschman 25 May 2015 08:18:01PM 1 point [-]

What are your heuristics for judging whether posts/comments contain "high-quality opinions" or belong to the "LW mainstream"? Also, what did you think of Loosemore's recent post on fallacies in AI predictions?

Comment author: estimator 25 May 2015 09:50:28PM -1 points [-]

It's just my impression; I don't claim that it is precise.

As for Loosemore's recent post, I think it is sane and well-written, and it clearly required a substantial amount of analysis and thought. I consider it a central example of a high-quality non-LW-mainstream post.

Having said that, I mostly disagree with its conclusions. All of the reasoning there rests on the assumption that AGI will be logic-based (CLAI, in the post's terminology), which I find unlikely. I'm 95% certain that if AGI is built anytime soon, it will be based on machine learning; in any case, the claim that CLAI is "the only meaningful class of AI worth discussing" is far from true.