SarahC comments on Intellectual Hipsters and Meta-Contrarianism - Less Wrong

147 Post author: Yvain 13 September 2010 09:36PM




Comment author: Will_Newsome 14 September 2010 01:44:01AM *  9 points [-]

Sorry, I didn't mean to assume the conclusion. Rather than do a disservice to the arguments with a hastily written reply, I'm going to cop out of the responsibility of providing a rigorous technical analysis and just share some thoughts. From what I've seen of your posts, your arguments were that the current nominally x-risk-reducing organizations (primarily FHI and SIAI) aren't up to snuff when it comes to actually saving the world (in the case of SIAI, perhaps even being actively harmful). Despite and because of being involved with SIAI, I share some of your misgivings. That said, I personally think that SIAI is net-beneficial for its cause of promoting clear and accurate thinking about the Singularity, and that the PR issues you cite regarding Eliezer will be negligible in 5-10 years, when more academics start speaking out publicly about Singularity issues—which will only happen if SIAI stays around, gets funding, keeps writing papers, and continues to promote the pretty-successful Singularity Summits. Also, I never saw you mention that SIAI is actively working on the research problems of building a Friendly artificial intelligence. Indeed, in a few years SIAI will have begun the endeavor of building FAI in earnest, after Eliezer writes his book on rationality (which will also likely almost totally outshine any of his previous PR mistakes). It's difficult to hire the very best FAI researchers without money, and SIAI doesn't have money without donations.

Now, perhaps you are skeptical that FAI or even AGI could be developed by a team of the most brilliant AI researchers within the next, say, 20 years. That skepticism is merited, and to be honest I have little (but still a non-trivial amount of knowledge) to go on besides the subjective impressions of those who work on the problem. I do, however, have strong arguments that there is a ticking clock until AGI, with the clock ringing before 2050. I can't give those arguments here, and indeed it would be against protocol to do so, as this is Less Wrong and not SIAI's forum (despite its having unfortunately been treated as such a few times in the past). Hopefully at some point someone, at SIAI or not, will write up such an analysis: currently Steve Rayhawk and Peter de Blanc of SIAI are doing a literature search that will with luck end up as a paper on the current state of AGI development, or at least some kind of analysis beyond "Trust us, we're very rational".

All that said, my impression is that SIAI is doing good of the kind that completely outweighs, e.g., aid to Africa if you're using any kind of utilitarian calculus. And if you're not using anything like utilitarian calculus, then why are you giving aid to Africa and not, e.g., kittens? FHI also seems to be doing good, academically respectable, and necessary research on a rather limited budget. So if you're going to donate money, I would first vote for SIAI, and then FHI, but I can understand the position of "I'm going to hold onto my money until I have a better picture of what's really important and who the big players are." I can't, however, understand the position of those who would give aid to Africa, except by assuming some sort of irrationality or ignorance. But I will read over your post on the matter and see if anything there changes my mind.

Comment author: timtyler 01 October 2010 04:56:07PM *  -2 points [-]

I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity [...]

Is that what they are doing?!?

They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.

One might naively expect such an organisation to typically act so as to exaggerate the risks—in order to increase the flow of donations. That seems pretty consistent with their actions to me.

From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter - since they have glaringly-obvious vested interests.

Comment author: [deleted] 02 October 2010 09:25:36PM 4 points [-]

This is an incredibly anti-name-calling community. People ascribe a lot of value to having "good" discussions (disagreement is common, but not adversarialism or ad hominems). LW folks really don't like being called a cult.

SIAI isn't a cult, and Eliezer isn't a cult leader, and I'm sure you know that your insinuations don't correspond to literal fact, and that this organization is no more a scam than a variety of other charitable and advocacy organizations.

I do think that folks around here are over-sensitive to normal levels of name-calling and ad hominems. It's odd. Holding yourself above the fray comes across as a little snobbish. There's a whole world of discourse out there, people gathering evidence and exchanging opinions, and the vast majority of them are doing it like this: UR A FASCIST. But do you think there's therefore nothing to learn from them?