woodchuck64 comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: Richard_Loosemore 10 May 2012 07:11:15PM 1 point [-]

Holden, I think your assessment is accurate ... but I would venture to say that it does not go far enough.

My own experience with SI, and my background, might be relevant here. I am a member of the Math/Physical Science faculty at Wells College, in Upstate NY. I also have had a parallel career as a cognitive scientist/AI researcher, with several publications in the AGI field, including the opening chapter (coauthored with Ben Goertzel) in a forthcoming Springer book about the Singularity.

I have long complained about SI's narrow and obsessive focus on the "utility function" aspect of AI -- simply put, SI assumes that future superintelligent systems will be driven by certain classes of mechanism that are still only theoretical, and which are very likely to be superseded by other kinds of mechanism that have very different properties. Even worse, the "utility function" mechanism favored by SI is quite likely to be so unstable that it will never allow an AI to achieve any kind of human-level intelligence, never mind the kind of superintelligence that would be threatening.

Perhaps most important of all, though, is the fact that the alternative motivation mechanism might (and notice that I am being cautious here: might) lead to systems that are extremely stable -- which would mean both friendly and safe.

Taken in isolation, these thoughts and arguments might amount to nothing more than a minor addition to the points that you make above. However, my experience with SI is that when I tried to raise these concerns back in 2005/2006 I was subjected to a series of attacks that culminated in a tirade of slanderous denunciations from the founder of SI, Eliezer Yudkowsky. After delivering this tirade, Yudkowsky then banned me from the discussion forum that he controlled, and instructed others on that forum that discussion about me was henceforth forbidden.

Since that time I have found that when I partake in discussions on AGI topics in a context where SI supporters are present, I am frequently subjected to abusive personal attacks in which reference is made to Yudkowsky's earlier outburst. This activity is now so common that when I occasionally post comments here, my remarks are very quickly voted down below a threshold that makes them virtually invisible. (A fate that will probably apply immediately to this very comment).

I would say that, far from deserving support, SI should be considered a cult-like community in which dissent is ruthlessly suppressed in order to exaggerate the point of view of SI's founders and controllers, regardless of the scientific merits of those views, or of the dissenting opinions.

Comment author: MarkusRamikin 10 May 2012 08:02:54PM *  4 points [-]

However, my experience with SI is that when I tried to raise these concerns back in 2005/2006 I was subjected to a series of attacks that culminated in a tirade of slanderous denunciations from the founder of SI, Eliezer Yudkowsky.

I am frequently subjected to abusive personal attacks in which reference is made to Yudkowsky's earlier outburst

Link to the juicy details cough I mean evidence?

Comment author: woodchuck64 10 May 2012 08:37:32PM -1 points [-]

I strongly suspect the rationality of the internet would improve many orders of magnitude if all arguments about arguments were quietly deleted.

Comment author: woodchuck64 10 May 2012 08:46:35PM *  0 points [-]

Okay, make that: I strongly suspect the rationality of the rational internet would improve many orders of magnitude if all arguments about arguments were quietly deleted.

Comment author: khafra 11 May 2012 04:53:39PM *  3 points [-]

Every time I try to think about that, I end up thinking about logical paradoxes instead.

edit for less subtlety in response to unexplained downvote: That argument is self-refuting.