Will_Newsome comments on Intellectual Hipsters and Meta-Contrarianism - Less Wrong

Post author: Yvain 13 September 2010 09:36PM 147 points

Comment author: Will_Newsome 14 September 2010 12:39:51AM *  6 points

Half-agree with you, as none of the 18 positions are 'correct', but I don't know what you mean by 'useless'. Instead of generalizing I'll list my personal positions:

  • KKK-style racist / politically correct liberal / "but there are scientifically proven genetic differences"

If I failed to notice that there are scientifically proven genetic differences, I would be missing a far more important part of reality (evolutionary psychology and the huge effects of evolution in the last 20,000 years) than if I failed to notice that being a bigot was bad and impeded moral progress. That said, if most people took this position, it'd result in a horrible tragedy-of-the-commons situation, which is why most social scientists cooperate on the 'let's not promote racism' dilemma. I'm not a social scientist, so I get to defect and study some of the more interesting aspects of human evolutionary biology.

  • misogyny / women's rights movement / men's rights movement

No opinion. Women seem to be doing perfectly fine. Men seem to get screwed over by divorce laws and the like. Tentatively agree more with the third level, but hey, I'm pretty ignorant here.

  • conservative / liberal / libertarian

What can I say, it's politics. Libertarians in charge would mean more drugs and ethically questionable experiments of the sort I promote, as well as a lot more focus on the risks and benefits of technology. Since the Singularity trumps everything else policy-wise, I have to root for the libertarian team here, even if I find them obnoxiously pretentious. (ETA: Actually, maybe more libertarians would just make it more likely that the 'Yeah yeah Singularity AI transhumanism wooooo!' meme would get bigger, which would increase existential risk. So uh... never mind, I dunno.)

  • herbal-spiritual-alternative medicine / conventional medicine / Robin Hanson

Too ignorant to comment. My oxycodone and antibiotics sure did me good when I got an infection a week ago. The drugs my dermatologist prescribed didn't help much with my acne. I've had a few small surgeries that made me better. Overall, conventional medicine seems to have helped me a fair bit and costs me little. I don't even know what Robin Hanson's claims are, though. A link would be great.

  • don't care about Africa / give aid to Africa / don't give aid to Africa

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it's really position 4 (give aid to Earth) that's the correct one, I dunno.
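
To spell out the "can multiply" part, here's a minimal toy calculation. Every number and variable name in it is a hypothetical placeholder I'm making up for illustration, not a real estimate for any actual charity; the point is only the shape of the arithmetic, where a tiny probability shift times astronomical stakes can dominate a sure-thing payoff.

    # Toy expected-value comparison. ALL NUMBERS ARE HYPOTHETICAL
    # placeholders chosen to illustrate the multiplication, not real
    # estimates for any actual charity.
    donation = 1000.0  # dollars, spent either way

    # Direct aid: suppose (hypothetically) $2,000 of aid saves one life.
    ev_aid = donation * (1 / 2000.0)  # = 0.5 expected lives saved

    # X-risk: suppose (hypothetically) $10M of marginal funding shifts
    # the probability of averting an extinction-level event by 0.01%,
    # with ~7 billion current lives at stake (ignoring future people).
    delta_p_per_dollar = 0.0001 / 10e6
    ev_xrisk = donation * delta_p_per_dollar * 7e9  # = 70 expected lives

    print(ev_aid, ev_xrisk)  # 0.5 vs. 70.0

Of course, the whole dispute is over whether anything like that delta_p_per_dollar figure is defensible, which is exactly the kind of thing I'd want argued carefully rather than assumed.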

  • Obama is Muslim / Obama is obviously not Muslim, you idiot / Patri Friedman

Um, Patri was just being silly. Obama is obviously not a Muslim in any meaningful sense.

In conclusion, I think that there isn't any real trend here, but maybe we're just disputing ways of carving up usefulness? It is subjective after all.

Added: Explanations for downvotes are always welcome. Lately I've decided to try less to appear impressive and consistently rational (like Carl Shulman) and try more to throw lots of ideas around for critique, criticism, and development (like Michael Vassar). So although downvotes are useful indicators of where I might have gone wrong, a quick explanatory comment is even more useful and very unlikely to be responded to with indignation or hostility.

Comment author: multifoliaterose 14 September 2010 01:19:38AM *  4 points

Okay, anyone who cares about helping people in Africa and can multiply should be giving their money to x-risk charities. Because saving the world also includes saving Africa. Therefore position 3 is essentially correct, but maybe it's really position 4 (give aid to Earth) that's the correct one, I dunno.

Why is giving money to x-risk charities conducive to saving the world? (I don't necessarily disagree, but want to see what you have to say to substantiate your claim.) In particular, what's your response to Holden's comment #12 at the GiveWell Singularity Summit thread?

Comment author: Will_Newsome 14 September 2010 01:44:01AM *  9 points

Sorry, I didn't mean to assume the conclusion. Rather than do a disservice to the arguments with a hastily written reply, I'm going to cop out of the responsibility of providing a rigorous technical analysis and just share some thoughts. From what I've seen of your posts, your arguments were that the current nominally x-risk-reducing organizations (primarily FHI and SIAI) aren't up to snuff when it comes to actually saving the world (in the case of SIAI perhaps even being actively harmful). Despite and because of being involved with SIAI, I share some of your misgivings. That said, I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity, and that the PR issues you cite regarding Eliezer will be negligible in 5-10 years, when more academics start speaking out publicly about Singularity issues; that will only happen if SIAI stays around, gets funding, keeps writing papers, and promotes the pretty-successful Singularity Summits. Also, I never saw you mention that SIAI is actively working on the research problems of building a Friendly artificial intelligence. Indeed, in a few years, SIAI will have begun the endeavor of building FAI in earnest, after Eliezer writes his book on rationality (which will also likely outshine any of his previous PR mistakes). It's difficult to hire the very best FAI researchers without money, and SIAI doesn't have money without donations.

Now, perhaps you are skeptical that FAI or even AGI could be developed by a team of the most brilliant AI researchers within the next, say, 20 years. That skepticism is merited, and to be honest I have little (but still non-trivial) knowledge to go on besides the subjective impressions of those who work on the problem. I do, however, have strong arguments that there is a ticking clock till AGI, with the clock striking before 2050. I can't give those arguments here, and indeed it would be against protocol to do so, as this is Less Wrong and not SIAI's forum (despite it being unfortunately treated as such a few times in the past). Hopefully at some point someone, at SIAI or not, will write up such an analysis: currently Steve Rayhawk and Peter de Blanc of SIAI are doing a literature search that will with luck end up as a paper on the current state of AGI development, or at least some kind of analysis besides "Trust us, we're very rational".

All that said, my impression is that SIAI is doing good of the kind that completely outweighs e.g. aid to Africa if you're using any kind of utilitarian calculus. And if you're not using anything like utilitarian calculus, then why are you giving aid to Africa and not e.g. kittens? FHI also seems to be doing good, academically respectable, and necessary research on a rather limited budget. So if you're going to donate money, my vote would go first to SIAI and then to FHI, but I can understand the position of "I'm going to hold onto my money until I have a better picture of what's really important and who the big players are." I can't, however, understand the position of those who would give aid to Africa, except by assuming some sort of irrationality or ignorance. But I will read over your post on the matter and see if anything there changes my mind.

Comment author: multifoliaterose 14 September 2010 02:07:20AM 2 points

Reasonable response, upvoted :-).

•As I said, I cut my planned sequence of postings on SIAI short. There's more that I would have liked to say and more that I hope to say in the future. For now I'm focusing on finishing my thesis.

•An important point that did not come across in my postings is that I'm skeptical of philanthropic projects having a positive impact on what they're trying to do in general (independently of relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog. See for example GiveWell's page on Social Programs That Just Don't Work. At present I think that it's a highly nonobvious but important fact that those projects which superficially look to be promising and which are not well-grounded by constant feedback from outsiders almost always fail to have any nontrivial impact on the relevant cause.

See the comment here by prase which I agree with.

•On the subject of a proposed project inadvertently doing more harm than good, see the last few paragraphs of the GiveWell post titled Against Promise Neighborhoods. Consideration of counterfactuals is very tricky, and very smart people often get it wrong.

•Quite possibly SIAI is having a positive holistic impact - I don't have confidence that this is so, simply because I don't have enough information to judge from the outside.

•Regarding the timeline for AGI and the feasibility of FAI research, see my back-and-forth with Tim Tyler here.

•My thinking as to which causes are most important to focus on at present is very much in flux. I welcome any information that you or others can point me to.

•My reasons for supporting developing-world aid in particular at present are various and nuanced, and I haven't yet had the time to write out a detailed explanation that's ready for public consumption. Feel free to PM me with your email address if you'd like to correspond.

Thanks again for your thoughtful response.

Comment author: wedrifid 14 September 2010 02:22:54AM 3 points

An important point that did not come across in my postings is that I'm skeptical of philanthropic projects having a positive impact on what they're trying to do in general (independently of relation to existential risk). One major influence here has been my personal experience with public institutions. Another major influence has been reading the GiveWell blog. See for example GiveWell's page on Social Programs That Just Don't Work. At present I think that it's a highly nonobvious but important fact that those projects which superficially look to be promising and which are not well-grounded by constant feedback from outsiders almost always fail to have any nontrivial impact on the relevant cause.

If you had a post on this specifically planned, then I would be interested in reading it!

Comment author: timtyler 01 October 2010 04:56:07PM *  -2 points

I personally think that SIAI is net-beneficial for their cause of promoting clear and accurate thinking about the Singularity [...]

Is that what they are doing?!?

They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.

One might naively expect such an organisation would typically act so as to exaggerate the risks - in order to increase the flow of donations. That seems pretty consistent with their actions to me.

From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter - since they have glaringly-obvious vested interests.

Comment author: Will_Newsome 01 October 2010 05:53:20PM 13 points

/startrant

They seem to be funded by promoting the idea that DOOM is SOON - and that to avert it we should all be sending our hard-earned dollars to their intrepid band of Friendly Folk.

Or, more realistically, the idea that DOOM has a CHANCE of happening any time between NOW and ONE HUNDRED YEARS FROM NOW, but that small CHANCE has a large enough impact in EXPECTED UTILITY that we should really figure out more about the problem, because someone, not necessarily SIAI, might have to deal with it EVENTUALLY.

One might naively expect such an organization would typically act so as to exaggerate the risks -- but SIAI doesn't seem to be doing that, so one's naive expectations would be wrong. It's amazing how people associate an aura of overconfidence coming from the philosophical positions of Eliezer with the actual confidence levels of the thinkers of SIAI. Seriously, where are these crazy claims about DOOM being SOON and that ELIEZER YUDKOWSKY is the MESSIAH? From something Eliezer wrote 10 years ago? The Singularity Institute is pretty damn reasonable. The journal and conference papers they write are pretty well grounded in sound and careful reasoning. But ha, who would read those? It's not like it'd be a good idea to actually read an organization's literary output before judging it based primarily on the perceived arrogance of one of its research fellows, that'd be stupid.

From that perspective the organisation seems likely to be an unreliable guide to the facts of the matter - since they have glaringly-obvious vested interests.

What vested interests? Money? Do you honestly think that the people at SIAI couldn't get 5 times as much money by working elsewhere? Status? Do you honestly think that making a seemingly crazy far-mode belief that pattern-matches to doomsdayism part of your identity, for little pay and lots of hard work, is a good way of gaining status? Eliezer would take a large status hit if he admitted he was wrong about this whole seed AI thing. Michael Vassar would too. But everyone else? Really good thinkers like Anna Salamon and Carl Shulman and Steve Rayhawk, who have proved here on Less Wrong that they have exceptionally strong rationality, and who are consistently more reasonable than they have any right to be? (Seriously, you could give Steve Rayhawk the most ridiculous argument ever and he'd find a way to turn it into a reasonable argument worth seriously addressing. These people take their epistemology seriously.)

Maybe people at SIAI are, you know, actually worried about the problems, because they know how to take ideas seriously instead of using the absurdity heuristic and personal distaste for Eliezer and then rationalizing their easy beliefs with vague outside-view reference class tennis games or stupid things like that.

I like reading Multifoliaterose's posts. He raises interesting points, even if I think they're generally unfair. I can tell that he's at least using his brain. When most people criticize SIAI (really Eliezer, but it's easier to say SIAI 'cuz it feels less personal), they don't use any parts of their brain besides the 'rationalize reason for not associating with low-status group' cognitive module.

timtyler, this comment isn't really a direct reply to yours so much as a venting of general frustrations. But I get annoyed by the attitude of 'haha let's be cynical and assume the worst of the people that are actually trying their hardest to do the most good they can for the world'. Carl Shulman would never write a reply anything like the one I've written. Carl Shulman is always reasonable and charitable. And I know Carl Shulman works incredibly hard on being reasonable, and taking into account opposing viewpoints, and not letting his affiliation with SIAI cloud his thinking, and still doing lots of good, reasonable, solid work on explaining the problem of Friendliness to the academic sphere in reasonable, solid journal articles and conference papers.

It's really annoying to me to have that go completely ignored just because someone wants to signal their oh-so-metacontrarian beliefs about SIAI. Use epistemic hygiene. Think before you signal. Don't judge an entire organization's merit off of stupid outside-view comparisons without actually reading the material. Take the time to really update on the beliefs of longtime x-rationalists who have probably thought about this a lot more than you have. If you really think it through and still disagree, you should have stronger and more elegant counterarguments than things like "they have glaringly-obvious vested interests". Yeah, as if that didn't apply to anyone, especially anyone who thinks that we're in great danger and should do something about it. They have pretty obvious vested interests in telling people about said danger. Great hypothesis there, chap. Great way to rationalize your desire to signal and do what is easy and what appeals to your vanity. Care to list your true rejections?

And if you think that I am being uncharitable in my interpretation of your true motivations, then be sure to notice the symmetry.

/endrant

Comment author: timtyler 01 October 2010 07:00:01PM *  -2 points

That was quite a rant!

'haha let's be cynical and assume the worst of the people that are actually trying their hardest to do the most good they can for the world'.

I hope I don't come across as thinking "the worst" about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders.

One might naively expect such an organization would typically act so as to exaggerate the risks - but SIAI doesn't seem to be doing that, so one's naive expectations would be wrong.

Really? Really? You actually think the level of DOOM is cold realism - and not a ploy to attract funding? Why do you think that? De Garis and Warwick were doing much the same kind of attention-seeking before the SIAI came along - DOOM is old-school marketing in the field.

You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn't seem to matter much - the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower to help it meet its aims.

FWIW, I don't see what I am saying as particularly "contrarian". A lot of people would be pretty sceptical about the end of the world being nigh - or the idea that a bug might take over the world - or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers - if that is what you mean.

Anyway, the basic point is that if you are interested in DOOM, or p(DOOM), consulting a DOOM-mongering organisation that wants your dollars to help them SAVE THE WORLD may not be your best move. The "follow the money" principle is simple - and often produces good results.

Comment author: Will_Newsome 01 October 2010 07:30:58PM *  8 points

FWIW, I don't see what I am saying as particularly "contrarian". A lot of people would be pretty sceptical about the end of the world being nigh - or the idea that a bug might take over the world - or the idea that a bunch of saintly programmers will be the ones to save us all. Maybe contrary to the ideas of the true believers - if that is what you mean.

Right, I said metacontrarian. Although most LW people seem SIAI-agnostic, a lot of the most vocal and most experienced posters are pro-SIAI or SIAI-related, so LW comes across as having a generally pro-SIAI attitude, which is a traditionally contrarian attitude. Thus going against the contrarian status quo is metacontrarian.

You encourage me to speculate about the motives of the individuals involved. While that might be fun, it doesn't seem to matter much - the SIAI itself is evidently behaving as though it wants dollars, attention, and manpower to help it meet its aims.

Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I'm confused as to how this is relevant to the merit of SIAI's purpose. SIAI's never claimed to be fundamentally opposed to having resources. Can you expand on this?

I hope I don't come across as thinking "the worst" about those involved. I expect they are all very nice and sincere. By way of comparison, not all cults have deliberately exploitative ringleaders.

What makes that comparison spring to mind? Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status. Everyone at SIAI has different beliefs about the relative merits of different strategies for successful FAI development. That isn't a good thing -- fractured strategy is never good -- but it is evidence against cultishness. SIAI grounds its predictions in clear and careful epistemology. SIAI publishes in academic journals, attends scientific conferences, and hosts the Singularity Summit, where tons of prominent high-status folk show up to speak about Singularity-related issues. Why is cult your choice of reference class? It is no more a cult than a typical global warming awareness organization. It's just that 'science fiction' is a low-status literary genre in modern liberal society.

Comment author: ata 02 October 2010 06:16:37PM 11 points

Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.

I don't know about anybody else, but I am somewhat disturbed by Eliezer's persistent use of hyphens in place of em dashes, and am very concerned that it could be hurting SIAI's image.

Comment author: Will_Newsome 02 October 2010 07:00:35PM *  6 points

And I say the same about his use of double spacing. It's an outdated and unprofessional practice. In fact, Anna Salamon and Louie Helm are two other SIAI folk who engage in this abysmal writing style, and for that reason I've often been tempted to write them off entirely. They're obviously not cognizant of the writing style of modern academic thinkers. The implications are obvious.

Comment author: wedrifid 02 October 2010 04:24:51AM 3 points

Everyone is incredibly critical of Eliezer, probably much more so than he deserves, because everyone is racing to be first to establish their non-cult-victim status.

Another reason that I suspect is more important than trying to signal non-cult-victim status is that people who do want to be considered part of the cult believe that the cause is important and believe that Eliezer's mistakes could destroy the world (for example).

Comment author: timtyler 01 October 2010 08:00:40PM *  0 points

Anyone trying to accomplish anything is going to try to get dollars, attention, and manpower. I'm confused as to how this is relevant to the merit of SIAI's purpose.

To recap, the SIAI is funded by donations from those who think that they will help prevent the end of the world at the hands of intelligent machines. For this pitch to work, the world must be at risk - in order for them to be able to save it. The SIAI face some resistance over this point, and these days much of their output is oriented towards convincing others that these may be the end days. Also, there will be a selection bias, with those most convinced of a high p(DOOM) most likely to be involved. Like I said, not necessarily the type of organisation one would want to approach if seeking the facts of the matter.

You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult - but it isn't a terribly convincing act.

For the connections, see here. For protesting too much, see You're calling who a cult leader?

Comment author: Will_Newsome 01 October 2010 09:17:56PM 6 points

You pretend to fail to see connections between the SIAI and an END OF THE WORLD cult - but it isn't a terribly convincing act.

No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.

Hmuh, I guess we won't be able to make progress, 'cuz I pretty much wholeheartedly agree with Vladimir when he says:

This whole "outside view" methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding the specific biases such as planning fallacy induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).

and Nick Tarleton when he says:

We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.

Comment author: wedrifid 02 October 2010 01:58:23AM 1 point

No, I see it, look further, and find the model lacking in explanatory power. It selectively leaves out all kinds of useful information that I can use to control my anticipations.

"This one is right" for example. ;)

Comment deleted 02 October 2010 04:25:03AM
Comment author: timtyler 02 October 2010 12:17:54PM *  0 points

I didn't say anyone was "racing to be first to establish their non-cult-victim status" - but it is certainly a curious image! [deleted parent comment was a dupe].

Comment author: orthonormal 02 October 2010 08:30:48PM 5 points

Tim, do you think that nuclear-disarmament organizations were inherently flawed from the start because their aim was to prevent a catastrophic global nuclear war? Would you hold their claims to a much higher standard than the claims of organizations that looked to help smaller numbers of people here and now?

I recognize that there are relevant differences, but merely pattern-matching an organization's conclusion about the scope of their problem, without addressing the quality of their intermediate reasoning, isn't sufficient reason to discount their rationality.

Comment author: khafra 01 October 2010 07:24:40PM *  1 point

I don't see what I am saying as particularly "contrarian".

Will said "meta-contrarian," which refers to the recent meta-contrarians are intellectual hipsters post.

I also think you see yourself as trying to help SIAI see how they look to "average joe" potential collaborators or contributors, while Will sees your criticisms as actually calling into question the motives, competence, and ingenuity of SIAI's staff. If I'm right, you're talking at cross-purposes.

Comment author: timtyler 01 October 2010 07:45:35PM *  0 points

I also think you see yourself as trying to help SIAI see how they look to "average joe" potential collaborators or contributors

Reforming the SIAI is a possibility - but not a terribly realistic one, IMO. So my intended audience is less that organisation and more some of the individuals here with whom I share interests.

Comment author: Will_Newsome 01 October 2010 07:34:07PM 0 points

Oh, that might be. Other comments by timtyler seemed really vague but generally anti-SIAI (I hate to set it up as if you could be for or against a set of related propositions in memespace, but it's natural to do here, meh), so I assumed he was expressing his own beliefs, and not a hypothetical average joe's.

Comment author: [deleted] 02 October 2010 09:25:36PM 4 points

This is an incredibly anti-name-calling community. People ascribe a lot of value to having "good" discussions (disagreement is common, but not adversarialism or ad hominems). LW folks really don't like being called a cult.

SIAI isn't a cult, and Eliezer isn't a cult leader, and I'm sure you know that your insinuations don't correspond to literal fact, and that this organization is no more a scam than a variety of other charitable and advocacy organizations.

I do think that folks around here are over-sensitive to normal levels of name-calling and ad hominems. It's odd. Holding yourself above the fray comes across as a little snobbish. There's a whole world of discourse out there, people gathering evidence and exchanging opinions, and the vast majority of them are doing it like this: UR A FASCIST. But do you think there's therefore nothing to learn from them?