
paper-machine comments on The Singularity Institute's Arrogance Problem - Less Wrong Discussion

Post author: lukeprog · 18 January 2012 10:30PM · 63 points


Comments (307)


Comment author: XiXiDu 19 January 2012 05:24:10PM 27 points

I think a lot of SIAI's "arrogance" is simply made up by people who have an instinctive alarm for "trying to accomplish goals beyond your social status" or "trying to be part of the sacred magisterium", etc., and who then invent data to fit the supposed pattern.

Here are some quotes from you that might explain why some people think you/SI are arrogant:

I tried - once - going to an interesting-sounding mainstream AI conference that happened to be in my area. I met ordinary research scholars and looked at their posterboards and read some of their papers. I watched their presentations and talked to them at lunch. And they were way below the level of the big names. I mean, they weren't visibly incompetent, they had their various research interests and I'm sure they were doing passable work on them. And I gave up and left before the conference was over, because I kept thinking "What am I even doing here?" (Competent Elites)

More:

I don't mean to bash normal AGI researchers into the ground. They are not evil. They are not ill-intentioned. They are not even dangerous, as individuals. Only the mob of them is dangerous, that can learn from each other's partial successes and accumulate hacks as a community. (Above-Average AI Scientists)

Even more:

I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified. (So You Want To Be A Seed AI Programmer)

And:

If you haven't read through the MWI sequence, read it. Then try to talk with your smart friends about it. You will soon learn that your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong, and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don't. (Eliezer_Yudkowsky August 2010 03:57:30PM)

Comment author: [deleted] 20 January 2012 10:06:09PM 2 points

(So You Want To Be A Seed AI Programmer)

I hadn't seen that before. Was it written before the sequences?

I ask because it all seemed trivial to my post-Sequences self, and it seemed like it was not supposed to be trivial.

I must say that writing the sequences is starting to look like it was a very good idea.

Comment author: katydee 21 January 2012 04:32:53PM 1 point

I believe so; I also believe that post is now considered obsolete.