
hairyfigment comments on The Market for Lemons: Quality Uncertainty on Less Wrong - Less Wrong Discussion

8 points | Post author: signal 18 November 2015 10:06PM



You are viewing a single comment's thread.

Comment author: signal 19 November 2015 01:36:41PM * 0 points

I am not sure of the point here. I read it as "I can imagine a perfect world and LW is not it". Well, duh.

No. I think all the points indicate that a perfect world is difficult to achieve because rationalist forums are in part self-defeating (though maybe not impossible; most people also would not have expected Wikipedia to work out as well as it does). At the moment, Less Wrong may be the worst form of forum, except for all the others. My point in other words: I was fascinated by LW and thought it possible to make great leaps towards some form of truth. I now consider that unwarranted exuberance. I met a few people whom I highly respect and whom I consider aspiring rationalists. They were not interested in forums, congresses, etc. I now suspect that many of our fellow rationalists are, and benefit from being, somewhat of lone wolves, and that the ones we see are curious exceptions.

There are also a lot of words (like "wrong") that the OP knows the meaning of, but I do not. For example, I have no idea what the "wrong opinions" are that rational discussions apparently have a tendency to support. Or what the "high relevancy" of missing articles is -- relevancy to whom?

High relevancy to the reader who is an aspiring rationalist. The discussions of AI mostly end where they become interesting. Assuming that AI is an existential risk, shall we enforce a police state? Shall we invest in surveillance? Some may even suggest seeking a Terminator-like solution by trying to stop scientific research (which I did not say is feasible). Those are the kinds of questions that inevitably come up, and I have seen them discussed nowhere but in the last chapter of Superintelligence, in about three sentences, and somewhat in SSC's Moloch (maybe you can find more sources, but it's surely not mainstream). In summary: if Musk's $10M constitutes a significant share of humanity's effort to reduce the risk of AI, some may view that as evidence of progress and some as evidence for the necessity of other, and maybe more radical, approaches. The same applies in EA: if you truly think there is an Animal Holocaust (which Singer does), the answer may not be donating $50 to some animal charity.

Wrong opinions: if, as just argued, not all the relevant evidence and conclusions are discussed, it follows that opinions are more likely to be less than perfect. There are some examples in the article.

And, um, do you believe that your postings will be free from that laundry list of misfeatures you catalogued?

No. Nash probably wouldn't cooperate even though he understood game theory, and I wouldn't blame him. I may simply stop posting (which sounds like a cop-out or a threat, but I just see it as one logical conclusion).

Comment author: hairyfigment 20 November 2015 09:27:42PM 1 point

None of us could "enforce a police state". It's barely possible even in principle, since it would need to include all industrialized nations (at a minimum) to have much payoff against AGI risk in particular. Worrying about "respected rational essayists" endorsing this plan also seems foolish.

"Surveillance" has similar problems, and your next sentence sounds like something we banned from the site for a reason. You do not seem competent for crime.

I'm trying to be charitable about your post as a whole to avoid anti-disjunction bias. While it's common to reject conclusions if weak arguments are added in support of them, this isn't actually fair. But I see nothing to justify your summary.