bogus comments on How about testing our ideas? - Less Wrong

31 [deleted] 14 September 2012 10:28AM




Comment author: TheOtherDave 14 September 2012 06:38:48PM 3 points [-]

Are they so deluded as to seriously believe they will be the first to build an AGI? I hope not.

What I've inferred from statements of key SI folk (most especially Luke) is that they don't think this likely, but they think the possible futures in which it happens are vastly superior to the ones in which it doesn't, so they're working towards it anyway.

the best they can do is to disseminate the results of their research in a way that will maximize the number of AI researchers who will notice it and take it seriously

Yeah, this seems pretty plausible to me as well. (Though also pretty unlikely.)

FWIW, my understanding of SI's original chosen strategy for making AI researchers take LW's ideas about Friendliness seriously was to publicize the Sequences, which would improve the general rationality of people everywhere (aka "raise the sanity waterline"), which would improve the rationality of AI researchers (and those who fund them, etc), which would increase the chances of AI researchers embracing the importance of Friendliness, which would increase the chances of FAI being developed before UFAI, which would save the world.

From what I can tell, SI has since then moved on to other strategies for saving the world, like publishing the Sequences in book form, publishing popular fiction, holding minicamps, etc., but all built on the premise that "raising the sanity waterline" among the most easily reached people is a more viable approach than attempting to reach specific audiences like professional researchers.

Comment author: bogus 14 September 2012 07:04:25PM *  0 points [-]

My admittedly incomplete understanding is that "raising the sanity waterline" activities have now been spun off to the Center for Applied Rationality, which is either planning to incorporate as a non-profit or already incorporated. This would then leave SIAI as focusing on the strictly AGI- and Friendliness-related stuff.

Comment author: TheOtherDave 14 September 2012 07:07:24PM *  1 point [-]

Ah. I'm aware of the SI/CFAR split, but haven't paid much attention to what activities are owned by which entity, or how separate their staffs and resources actually are. E.g., I haven't a clue which entity sponsors LW, if either, or even whether it's possible to distinguish one condition from the other.

Comment author: V_V 16 September 2012 10:39:22AM 0 points [-]

From the information available on their websites, it seems that LW is still operated by SI.

I suggest splitting it off and operating it as a charity separate from both SI and CFAR.