amcknight comments on The Singularity Institute's Arrogance Problem - Less Wrong Discussion

63 Post author: lukeprog 18 January 2012 10:30PM

Comment author: XiXiDu 19 January 2012 05:24:10PM 27 points

I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.

Some quotes by you that might highlight why some people think you/SI is arrogant:

I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren't visibly incompetent, they had their various research interests and I'm sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?" (Competent Elites)

More:

I don't mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other's partial successes and accumulate hacks as a community. (Above-Average AI Scientists)

Even more:

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)

And:

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't. (Eliezer_Yudkowsky August 2010 03:57:30PM)

Comment author: amcknight 19 January 2012 08:25:55PM 2 points

FWIW, I'm not sure why you added the 2nd quote, and the 3rd is out of context. Also, remember that we're talking about 700+ blog posts and other articles. Just be careful you're not cherry-picking.

Comment author: [deleted] 20 January 2012 01:01:06PM 14 points

This isn't a useful counterargument when the subject at hand is public relations. Several organizations have been completely pwned by hostile parties cherry-picking quotes.

Comment author: [deleted] 20 January 2012 09:17:05PM 2 points

The point was "you may be quote mining" which is a useful thing to tell a LWer, even if it doesn't mean a thing to "the masses".

Comment author: amcknight 20 January 2012 08:38:14PM 1 point

Good point.