
Existential Risk and Public Relations

Post author: multifoliaterose 15 August 2010 07:16AM 36 points

[Added 02/24/14: Some time after writing this post, I discovered that it was based on a somewhat unrepresentative picture of SIAI. I still think that the concerns therein were legitimate, but they had less relative significance than I had thought at the time. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

A common trope on Less Wrong is the idea that governments and the academic establishment have neglected to consider, study and work against existential risk on account of their shortsightedness. This idea is undoubtedly true in large measure. In my opinion and in the opinion of many Less Wrong posters, it would be very desirable to get more people thinking seriously about existential risk. The question then arises: is it possible to get more people thinking seriously about existential risk? A first approximation to an answer is "yes, by talking about it." But this answer requires substantial qualification: if the speaker or the speaker's claims have low credibility in the eyes of the audience, the speaker will be almost entirely unsuccessful in persuading the audience to think seriously about existential risk. Worse, a speaker who has low credibility in the eyes of an audience member actively decreases that audience member's receptiveness to thinking about existential risk. Rather perversely, then, speakers who have low credibility in the eyes of a sufficiently large fraction of their audience systematically raise existential risk by decreasing people's inclination to think about it. This is true whether or not the speakers' claims are valid.

As Yvain has discussed in his excellent article titled The Trouble with "Good":

To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.

When Person X makes a claim which an audience member finds uncredible, the audience member's brain (semiconsciously) makes a mental note of the form "Boo for Person X's claims!"  If the audience member also knows that Person X is an advocate of existential risk reduction, the audience member's brain may (semiconsciously) make a mental note of the type "Boo for existential risk reduction!"
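To make the mechanism concrete, here is a minimal toy sketch in Python (purely illustrative - the class, the "bleed" factor, and the numbers are all invented for this post, not drawn from Yvain or from any actual model of cognition) of how a negative reaction to a person's claims can spill over onto whatever causes that person is associated with:

    from collections import defaultdict

    class MentalKarma:
        def __init__(self):
            self.scores = defaultdict(float)      # concept -> running "karma" score
            self.associations = defaultdict(set)  # concept -> concepts it is linked to

        def associate(self, concept, related):
            # Link two concepts so reactions to one spill over onto the other.
            self.associations[concept].add(related)

        def react(self, concept, delta, bleed=0.5):
            # Up- or down-vote a concept; a fraction of the reaction bleeds
            # onto everything associated with it.
            self.scores[concept] += delta
            for related in self.associations[concept]:
                self.scores[related] += delta * bleed

    brain = MentalKarma()
    brain.associate("Person X's claims", "existential risk reduction")
    brain.react("Person X's claims", -3)  # "Boo for Person X's claims!"
    print(brain.scores["existential risk reduction"])  # -1.5: the associated cause is tarnished too

In this toy model, a single strongly negative reaction to Person X's claims leaves "existential risk reduction" with a lower score than it had before Person X spoke - which is exactly the perverse effect described above.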

The negative reaction to Person X's claims is especially strong if the audience member perceives Person X's claims as arising from a (possibly subconscious) attempt on Person X's part to attract attention and gain higher status, or even simply to feel as though he or she has high status. As Yvain says in his excellent article titled That other kind of status:

But many, maybe most human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They're a quick and easy way to have most of society think you're stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?

[...]

a person trying to estimate zir social status must balance two conflicting goals. First, ze must try to get as accurate an assessment of status as possible in order to plan a social life and predict others' reactions. Second, ze must construct a narrative that allows them to present zir social status as as high as possible, in order to reap the benefits of appearing high status.

[...]

In this model, people aren't just seeking status, they're (also? instead?) seeking a state of affairs that allows them to believe they have status. Genuinely having high status lets them assign themselves high status, but so do lots of other things. Being a 9-11 Truther works for exactly the reason mentioned in the original quote: they've figured out a deep and important secret that the rest of the world is too complacent to realize.

I'm presently a graduate student in pure mathematics. During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims. Since Eliezer supports existential risk reduction, I believe that this has made them less inclined to think about existential risk than they were before they heard of Eliezer.

There is also a social effect which compounds the issue I just mentioned: even people who are not directly put off by uncredible claims become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that people find uncredible.

I'm very disappointed that Eliezer has made statements such as:

If I got hit by a meteorite now, what would happen is that Michael Vassar would take over sort of taking responsibility for seeing the planet through to safety...Marcello Herreshoff would be the one tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don't know of any other person who could do that...

which are easily construed as claims that his work has higher expected value to humanity than the work of virtually all humans in existence. Even if such claims are true, people do not have the information that they would need to verify them, and so virtually everybody who could be helping to reduce existential risk finds such claims uncredible. Many such people have an especially negative reaction because the claims can be viewed as arising from a tendency toward status grubbing, and humans are very strongly wired to be suspicious of those whom they suspect of vying for inappropriately high status. I believe that people who come into contact with statements of Eliezer's like the one quoted above are statistically less likely to work to reduce existential risk than they were before coming into contact with such statements. I therefore believe that by making such claims, Eliezer has increased existential risk.

I would go further than that and say that I presently believe that donating to SIAI has negative expected impact on existential risk reduction, because SIAI staff are making uncredible claims which are poisoning the existential risk reduction meme. This is a matter on which reasonable people can disagree. In a recent comment, Carl Shulman expressed the view that though SIAI has had some negative impact on the existential risk reduction meme, the net impact of SIAI on that meme is positive. In any case, there's definitely room for improvement on this point.

Last July I made a comment raising this issue and Vladimir_Nesov suggested that I contact SIAI. Since then I have corresponded with Michael Vassar about this matter. My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk. I may have misunderstood Michael's position and encourage him to make a public statement clarifying his position on this matter. If I have correctly understood his position, I do not find Michael Vassar's position on this matter credible.

I believe that if Carl Shulman is right, then donating to SIAI has positive expected impact on existential risk reduction. I believe that even if this is the case, a higher expected value strategy is to withhold donations from SIAI, informing SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible. I suggest that those who share my concerns adopt the latter policy until their concerns have been resolved.

Before I close, I should emphasize that my post should not be construed as an attack on Eliezer. I view Eliezer as an admirable person and don't think that he would ever knowingly do something that raises existential risk. Roko's Aspergers Poll suggests a strong possibility that the Less Wrong community exhibits an unusually high abundance of the traits associated with Asperger's Syndrome. It would not be at all surprising if the founders of Less Wrong exhibit a similarly unusual abundance of those traits. I believe that, more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

Comments (613)

Comment author: Larks 17 August 2010 06:40:47AM 17 points [-]

It must be said that the reason no-one from SingInst has commented here is that they're all busy running the Singularity Summit, a well-run conference full of AGI researchers, the one group that SingInst cares about impressing more than any other. Furthermore, Eliezer's speech was well received by those present.

I'm not sure whether attacking SingInst for poor public relations during the one week when everyone is busy with a massive public relations effort is very ironic or very Machiavellian.

Comment author: orthonormal 15 August 2010 03:21:51PM 13 points [-]

whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction, and indeed they're focusing on persuading more people of this particular claim. As you say, by focusing on something specific, radical and absurd, they run more of a risk of being dismissed entirely than does FHI, but their strategy is still correct given the premise.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:46:17PM 6 points [-]

if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute. Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

Agreed. (Modulo a caveat about marginal ROI eventually balancing if FHI got large enough or SIAI got small enough.)

Comment author: wedrifid 15 August 2010 05:22:43PM 13 points [-]

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating the risk of humans going extinct. I would trade increased chance of extinction for a commensurate change in the probable outcomes if we survive. Frankly, I would consider it insane not to be willing to make such a trade.

Comment author: orthonormal 15 August 2010 05:31:51PM 4 points [-]

I meant "optimal within the category of X-risk reduction", and I see your point.

Comment author: timtyler 15 August 2010 05:29:09PM *  1 point [-]

It seems pretty clear that very few care much about existential risk reduction.

That makes perfect sense from an evolutionary perspective. Organisms can be expected to concentrate on producing offspring - not indulging paranoid fantasies about their whole species being wiped out!

The bigger puzzle is why anyone seems to care about it at all. The most obvious answer is signalling. For example, if you care for the fate of everyone in the whole world, that SHOWS YOU CARE - a lot! Also, the END OF THE WORLD acts as a superstimulus to people's warning systems. So - they rush and warn their friends - and that gives them warm fuzzy feelings. They get credit for raising the alarm about the TERRIBLE DANGER - and so on.

Disaster movies - like 2012 - trade on people's fears in this area - stimulating and fuelling their paranoia further - by providing them with fake memories of it happening. One can't help wondering whether FEAR OF THE END is a healthy phenomenon - overall - and if not, whether it is really sensible to stimulate those fears.

Does the average human - on being convinced the world is about to end - behave better - or worse? Do they try and hold back the end - or do they rape and pillage? If their behaviour is likely to be worse then responsible adults should think very carefully before promoting the idea that THE END IS NIGH on the basis of sketchy evidence.

Comment author: Jonathan_Graehl 16 August 2010 09:46:42PM 2 points [-]

This seems correct. Do people object on style? Is it a repost? Off topic?

Comment author: cata 16 August 2010 10:12:59PM *  2 points [-]

I think it's bad form to accuse other people of being insincere without clearly defending your remarks. By claiming that the only reason anyone cares about existential risk is signalling, Tim is saying that a lot of people who appear very serious about X-risk reduction are either lying or fooling themselves. I know many altruists who have acted in a way consistent with being genuinely concerned about the future, and I don't see why I should take Tim's word over theirs. It certainly isn't the "most obvious answer."

I also don't like this claim that people are likely to behave worse when they think they're in impending danger, because again, I don't agree that it's intuitive, and no evidence is provided. It also isn't sufficient; maybe some risks are important enough that they ought to be addressed even if addressing them has bad cultural side effects. I know that the SIAI people, at least, would definitely put uFAI in this category without a second thought.

Comment author: SilasBarta 16 August 2010 10:20:41PM *  4 points [-]

Hm, I didn't get that out of timtyler's post (just voted up). He didn't seem to be saying, "Each and every person interested in this topic is doing it to signal status", but rather, "Hey, our minds aren't wired up to care about this stuff unless maybe it signals" -- which doesn't seem all that objectionable.

Comment author: whpearson 16 August 2010 10:39:32PM 3 points [-]

DNDV (did not downvote). Sure, signalling has a lot to do with it, but the type of signalling he suggests doesn't ring true with what I have seen of most people's behaviour. We do not seem to be great proselytisers most of the time.

The ancient circuits that x-risk triggers in me are those of feeling important, of being a player in the tribe's future, with the benefits that that entails. Of course I won't get the women if I eventually help save humanity, but my circuits that trigger on "important issues" don't seem to know that. In short, by trying to deal with important issues I am trying to signal raised status.

Comment author: Perplexed 16 August 2010 10:37:16PM 2 points [-]

I thought people here were compatibilists. Saying that someone does something of their own free will is compatible with saying that their actions are determined. Similarly, saying that they are genuinely concerned is compatible with saying that their expressions of concern arise (causally) from "signaling".

Comment author: wedrifid 17 August 2010 05:25:38AM 2 points [-]

That's what Tim could have said. His post might have got a better reception if he had left off:

It seems pretty clear that very few care much about existential risk reduction. The bigger puzzle is why anyone seems to care about it at all.

I mean, I most certainly do care and the reasons are obvious. p(wedrifid survives | no human survives) = 0

Comment author: Eneasz 24 August 2010 07:27:33PM 2 points [-]

Does the average human - on being convinced the world is about to end - behave better - or worse? Do they try and hold back the end - or do they rape and pillage?

Given the current level of technology the end IS nigh, the world WILL end, for every person individually, in less than a century. On average it'll happen around the 77-year mark for males in the US. This has been the case through all of history (for most of it at a much younger age) and yet people generally do not rape and pillage. Nor are they more likely to do so as the end of their world approaches.

Thus, I do not think there is much reason for concern.

Comment author: ata 24 August 2010 08:20:32PM *  6 points [-]

People care (to varying degrees) about how the world will be after they die. People even care about their own post-mortem reputations. I think it's reasonable to ask whether people will behave differently if they anticipate that the world will die along with them.

Comment author: homunq 31 August 2010 04:11:20PM *  3 points [-]

I do "think that the pursuit of Friendly AI [and the avoidance of unfriendly AI] is by far the most important component of existential risk reduction". I also think that SIAI is not addressing the most important problem in that regard. I suspect there's a lot of people who would agree, for various reasons.

In my case, the logic is that I think:

1) That corporations, though not truly intelligent, are already superhuman and unFriendly.

2) That coordinated action (that is, strategic politics, in well-chosen solidarity with others with whom I have important differences) has the potential to reduce their power and/or increase their Friendliness

3) That this would, in turn, reduce the risk of them developing a first-mover unFriendly AI ...

3a) ... while also increasing the status of your ideas in a coalition which may be able to develop a Friendly one.

I recognize that points 2 and 3a are partially tribal and/or hope-seeking beliefs of mine, but think 1 and 3 are well-founded rationally.

Anyway, this is only one possible reason for parting ways with the SIAI and the FHI, without in any sense discounting the risks they are made to confront.

Comment author: Vladimir_Nesov 15 August 2010 03:40:32PM 3 points [-]

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

But who does the evaluation? It seems that it's better to let specialists think about whether a given cause is important, and they need funding just to get that running. This argues for ensuring minimum funding of organizations that research important uncertainties, even the ones that your intuitive judgment says probably lead nowhere. Just as most people shouldn't themselves research FAI but should instead fund its research, similarly most people shouldn't research the feasibility of FAI research but should instead fund research into that feasibility.

Comment author: JamesAndrix 16 August 2010 03:02:28AM 12 points [-]

Warning: Shameless Self Promotion ahead

Perhaps part of the difficulty here is the attempt to spur a wide rationalist community on the same site frequented by rationalists with strong obscure positions on obscure topics.

Early in Less Wrong's history, discussion of FAI was discouraged so that it didn't just become a site about FAI and the singularity, but a forum about human rationality more generally.

I can't track down the article[s] from EY about how thinking about AI can be too absorbing, and how in order to properly create a community, you have to truly put aside the ulterior motive of advancing FAI research.

It might be wise for us to again deliberately shift our focus away from FAI and onto human rationality and how it can be applied more widely (say to science in general.)

Enter the SSP: For months now I've been brainstorming a community to educate people on the creation and use of 3D printers, with the eventual goal of making much better 3D printers. So this is a different big complicated problem with a potential high payoff, and it ties into many fields, provides tangible previews of the singularity, can benefit from the involvement of people with almost any skill set, and seems to be much safer than advancing AI, nanotech, or genetic engineering.

I had already intended to introduce rationality concepts where applicable and link a lot to Less Wrong, but if a few LWers were willing to help, it could become a standalone community of people committed to thinking clearly about complex technical and social problems, with a latent obsession with 3D printers.

Comment author: Jonathan_Graehl 16 August 2010 08:58:49PM 10 points [-]

When I'm talking to someone I respect (and want to admire me), I definitely feel an urge to distance myself from EY. I feel like I'm biting a social bullet in order to advocate for SIAI-like beliefs or action.

What's more, this casts a shadow over my actual beliefs.

This is in spite of the fact that I love EY's writing, and actually enjoy his fearless geeky humor ("hit by a meteorite" is indeed more fun than the conventional "hit by a bus").

The fear of being represented by EY is mostly due to what he's saying, not how he's saying it. That is, even if he were always dignified and measured, he'd catch nearly as much flak. If he'd avoided certain topics entirely, that would have made a significant difference, but on the other hand, he's effectively counter-signaled that he's fully honest and uncensored in public (of course he is probably not, exactly), which I think is also valuable.

I think EY can win by saying enough true things, convincingly, that smart people will be persuaded that he's credible. It's perhaps true that better PR will speed the process - by enough for it to be worth it? That's up to him.

The comments in this diavlog with Scott Aaronson - while some are by obvious axe-grinders - are critical of EY's manner. People appear to hate nothing more than (what they see as) undeserved confidence. Who knows how prevalent this adverse reaction to EY is, since the set of commenters is self-selecting.

People who are floundering in a debate with EY (e.g. Jaron Lanier) seem to think they can bank on a "you crazy low-status sci-fi nerd" rebuttal to EY. This can score huge with lazy or unintellectual people if it's allowed to succeed.

Comment author: [deleted] 18 August 2010 03:46:37PM *  8 points [-]

I think a largish fraction of the population has worries about human extinction / the end of the world. Very few associate this with the phrase "existential risk" -- I for one had never heard the term until after I had started reading about the technological singularity and related ideas. Perhaps rebranding of a sort would help you further the cause. Ditto for FAI - I think 'Ethical Artificial Intelligence' would get the idea across well enough and might sound less flakey to certain audiences.

Comment author: zemaj 19 August 2010 10:09:06AM 9 points [-]

"Ethical Artificial Intelligence" sounds great and makes sense without having to know the background of the technological singularity as "Friendly Artificial Intelligence" does. Every time I try to mention FAI to someone without any background on the topic I always have to take two steps back in the conversation and it becomes quickly confusing. I think I could mention Ethical AI and then continue on with whatever point I was making without any kind of background and it would still make the right connections.

I also expect it would appeal to a demographic likely to support the concept as well. People who worry about ethical food, business, healthcare etc... would be likely to worry about existential risk on many levels.

In fact I think I'll just go ahead and start using Ethical AI from now on. I'm sure people in the FAI community would understand what I'm talking about.

Comment author: josh0 21 August 2010 03:53:41AM 6 points [-]

It may be true that many are worried about 'the end of the world'; however, consider how many of them think that it was predicted by the Mayan calendar to occur on Dec. 21 2012, and how many actively want it to happen because they believe it will herald the coming of God's Kingdom on Earth, Olam Haba, or whatever.

We could rebrand 'existential risk' as 'end time' and gain vast numbers of followers. But I doubt that would actually be desirable.

I do think that Ethical Artificial Intelligence would strike a better chord with most than Friendly, though. 'Friendly' does sound a bit unserious.

Comment author: komponisto 16 August 2010 12:38:38AM *  32 points [-]

I'll state my own experience and perception, since it seems to be different from that of others, as evidenced in both the post and the comments. Take it for what it's worth; maybe it's rare enough to be disregarded.

The first time I heard about SIAI -- which was possibly the first time I had heard the word "singularity" in the technological sense -- was whenever I first looked at the "About" page on Overcoming Bias, sometime in late 2006 or early 2007, where it was listed as Eliezer Yudkowsky's employer. To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.

Now, when someone has made that kind of demonstration of rationality, I just don't have much problem listening to whatever they have to say, regardless of how "outlandish" it may seem in the context of most human discourse. Maybe I'm exceptional in this respect, but I've never been under the impression that only "normal-sounding" things can be true or important. At any rate, I've certainly never been under that impression to such an extent that I would be willing to dismiss claims made by the author of The Simple Truth and A Technical Explanation of a Technical Explanation, someone who understands things like the gene-centered view of evolution and why MWI exemplifies rather than violates Occam's Razor, in the context of his own professional vocation!

I really don't understand what the difference is between me and the "smart people" that you (and XiXiDu) know. In fact maybe they should be more inclined to listen to EY and SIAI; after all, they probably grew up reading science fiction, in households where mild existential risks like global warming were taken seriously. Are they just not as smart as me? Am I unusually susceptible to following leaders and joining cults? (Don't think so.) Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims? (But why wouldn't they as well, if they're "smart"?)

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

Comment author: [deleted] 17 August 2010 08:34:31PM *  30 points [-]

I STRONGLY suspect that there is an enormous gulf between finding out things on your own and being directed to them by a peer.

When you find something on your own (existential risk, cryonics, whatever), you get to bask in your own fortuitousness, and congratulate yourself on being smart enough to understand its value. You get a boost in (perceived) status, because not only do you know more than you did before, you know things other people don't know.

But when someone else has to direct you to it, it's much less positive. When you tell someone about existential risk or cryonics or whatever, the subtext is "look, you weren't able to figure this out by yourself, let me help you". No matter how nicely you phrase it, there's going to be resistance because it comes with a drop in status - which they can avoid by not accepting whatever you're selling. It actually might be WORSE with smart people who believe that they have most things "figured out".

Comment author: multifoliaterose 16 August 2010 09:18:21AM *  9 points [-]

Thanks for your thoughtful comment.

To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.

I know some people who have had this sort of experience. My claim is not that Eliezer has uniformly repelled people from thinking about existential risk. My claim is that on average Eliezer's outlandish claims repel people from thinking about existential risk.

Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims?

My guess would be that this is it. I'm the same way.

(But why wouldn't they as well, if they're "smart"?)

It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality, or what the sign of that correlation is. People who are willing to listen to strange-sounding claims statistically end up hanging out with UFO conspiracy theorists, New Age people, etc. more often than usual. Statistically, people who make strange-sounding claims are not worth listening to. Too much willingness to listen to strange-sounding claims can easily result in one wasting large portions of one's life.

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

See my remarks above.

Comment author: ciphergoth 16 August 2010 10:48:59AM 12 points [-]

For my part, I keep wondering how long it's going to be before someone throws his "If you don't sign up your kids for cryonics then you are a lousy parent" remark at me, to which I will only be able to say that even he says stupid things sometimes.

(Yes, I'd encourage anyone to sign their kids up for cryonics; but not doing so is an extremely poor predictor of whether or not you treat your kids well in other ways, which is what the term should mean by any reasonable standard).

Comment author: James_Miller 18 August 2010 02:55:30PM *  8 points [-]

Given Eliezer's belief about the probability of cryonics working and belief that others should understand that cryonics has a high probability of working, his statement that "If you don't sign up your kids for cryonics then you are a lousy parent" is not just correct but trivial.

One of the reasons I so enjoy reading Less Wrong is Eliezer's willingness to accept and announce the logical consequences of his beliefs.

Comment author: ciphergoth 18 August 2010 03:00:15PM *  4 points [-]

There is a huge gap between "you are doing your kids a great disservice" and "you are a lousy parent": "X is an act of a lousy parent" to me implies that it is a good predictor of other lousy parent acts.

EDIT: BTW I should make clear that I plan to try to persuade some of my friends to sign up themselves and both their kids for cryonics, so I do have skin in the game...

Comment author: FAWS 18 August 2010 03:04:41PM *  7 points [-]

I'm not completely sure I disagree with that, but do you have the same attitude towards parents who try to heal treatable cancer with prayer and nothing else, but are otherwise great parents?

Comment author: ciphergoth 18 August 2010 03:31:11PM 4 points [-]

I think that would be a more effective predictor of other forms of lousiness: it means you're happy to ignore the advice of scientific authority in favour of what your preacher or your own mad beliefs tell you, which can get you into trouble in lots of other ways.

That said, this is a good counter, and it does make me wonder if I'm drawing the right line. For one thing, what do you count as a single act? If you don't get cryonics for your first child, it's a good predictor that you won't for your second either, so does that count? So I think another aspect of it is that to count, something has to be unusually bad. If you don't get your kids vaccinated in the UK in 2010, that's lousy parenting, but if absolutely everyone you ever meet thinks that vaccines are the work of the devil, then "lousy" seems too strong a term for going along with it.

Comment author: multifoliaterose 16 August 2010 11:47:35AM 4 points [-]

Yes, this is the sort of thing that I had in mind in making my cryonics post - as I said in the revised version of my post, I have a sense that a portion of the Less Wrong community has the attitude that cryonics is "moral" in some sort of comprehensive sense.

Comment author: James_Miller 18 August 2010 03:00:47PM 5 points [-]

If you believe that thousands of people die unnecessarily every single day then of course you think cryonics is a moral issue.

If people in the future come to believe that we should have known that cryonics would probably work, then they might well conclude that our failure to at least offer cryonics to terminally ill children was (and yes, I know what I'm about to write sounds extreme and will be off-putting to many) Nazi-level evil.

Comment author: katydee 16 August 2010 10:31:46AM 9 points [-]

Also, keep in mind that reading the sequences requires nontrivial effort-- effort which even moderately skeptical people might be unwilling to expend. Hopefully Eliezer's upcoming rationality book will solve some of that problem, though. After all, even if it contains largely the same content, people are generally much more willing to read one book rather than hundreds of articles.

Comment author: komponisto 16 August 2010 11:10:39AM 7 points [-]

Thank you for your thoughtful reply - although, as will be evident, I'm not quite sure I actually got the point across.

(But why wouldn't they as well, if they're "smart"?)

It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality,

I didn't realize at all that by "smart" you meant "instrumentally rational"; I was thinking rather more literally in terms of IQ. And I would indeed expect IQ to correlate positively with what you might call openness. More precisely, although I would expect openness to be only weak evidence of high IQ, I would expect high IQ to be more significant evidence of openness.

People who are willing to listen to strange-sounding claims statistically end up hanging out with UFO conspiracy theorists, New Age people, etc...

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

See my remarks above.

The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics. Yes, of course, if all you know about a person is that they make strange claims, then you should by default assume they're a UFO/New Age type. But I submit that the fact that Eliezer has written things like these decisively entitles him to a pass on that particular inference, and anyone who doesn't grant it to him just isn't very discriminating.

Comment author: multifoliaterose 16 August 2010 12:04:13PM *  11 points [-]

And I would indeed expect IQ to correlate positively with what you might call openness.

My own experience is that the correlation is not very high. Most of the people who I've met who are as smart as me (e.g. in the sense of having high IQ) are not nearly as open as I am.

I didn't realize at all that by "smart" you meant "instrumentally rational";

I did not intend to equate intelligence with instrumental rationality. The reason why I mentioned instrumental rationality is that ultimately what matters is to get people with high instrumental rationality (whether they're open minded or not) interested in existential risk.

My point is that people who are closed minded should not be barred from consideration as potentially useful existential risk researchers; although people are being irrational to dismiss Eliezer as fast as they do, that doesn't mean that they're holistically irrational. My own experience has been that my openness has both benefits and drawbacks.

The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics.

Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists - they recognize that Eliezer's intellectually sophisticated. They're still biased to dismiss him out of hand. See bentarm's comment.

Edit: You might wonder where the bias to dismiss Eliezer comes from. I think it comes mostly from conformity, which is, sadly, very high even among very smart people.

Comment author: komponisto 16 August 2010 12:33:25PM *  4 points [-]

My point is that people who are closed minded should not be barred from consideration as potentially useful existential risk researchers

You may be right about this; perhaps Eliezer should in fact work on his PR skills. At the same time, we shouldn't underestimate the difficulty of "recruiting" folks who are inclined to be conformists; unless there's a major change in the general sanity level of the population, x-risk talk is inevitably going to sound "weird".

Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists - they recognize that Eliezer's intellectually sophisticated. They're still biased to dismiss him out of hand

This is a problem; no question about it.

Comment author: multifoliaterose 16 August 2010 12:39:14PM *  6 points [-]

At the same time we shouldn't underestimate the difficulty of "recruiting" folks who are inclined to be conformists; unless there's a major change in the general sanity level of the population, x-risk talk is inevitably going to sound "weird".

I agree with this. It's all a matter of degree. Maybe at present one has to be in the top 1% of the population in nonconformity to be interested in existential risk and with better PR one could reduce the level of nonconformity required to the top 5% level.

(I don't know whether these numbers are right, but this is the sort of thing that I have in mind - I find it very likely that there are people who are nonconformist enough to potentially be interested in existential risk but too conformist to take it seriously unless the people who are involved seem highly credible.)

Comment author: multifoliaterose 16 August 2010 12:23:40PM 5 points [-]

One more point - though I could immediately recognize that there's something important to some of what Eliezer says, the fact that he makes outlandish claims did make me take longer to get around to thinking seriously about existential risk. This is because of a factor that I mention in my post which I quote below.

There is also a social effect which compounds the issue I just mentioned: even people who are not directly put off by uncredible claims become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that people find uncredible.

I'm not proud that I'm so influenced, but I'm only human. I find it very plausible that there are others like me.

Comment author: Oligopsony 15 August 2010 11:20:09AM 20 points [-]

I'm new to all this singularity stuff - and as an anecdotal data point, I'll say a lot of it does make my kook bells go off - but with an existential threat like uFAI, what does the awareness of the layperson count for? With global warming, even if most of any real solution involves the redesign of cities and development of more efficient energy sources, individuals can take some responsibility for their personal energy consumption or how they vote. uFAI is a problem to be solved by a clique of computer and cognitive scientists. Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question - again, I'm new to this.)

There is, of course, the question of fundraising. ("This problem is too complicated for you to help with directly, but you can give us money..." sets off further alarm bells.) But from that perspective someone who thinks you're nuts is no worse than someone who hasn't heard of you. You can ramp up the variance of people's opinions and come out better financially.

Comment author: CarlShulman 15 August 2010 11:26:44AM 15 points [-]

Awareness on the part of government funding agencies (and the legislators and executive branch people with influence over them), technology companies and investors, and political and military decisionmakers (eventually) could all matter quite a lot. Not to mention bright young people deciding on their careers and research foci.

Comment author: wedrifid 15 August 2010 11:43:40AM 5 points [-]

Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question - again, I'm new to this.)

The people who do the real work. Ultimately it doesn't matter whether the people who do the AI research care about existential risk or not (if we make some rather absolute economic assumptions). But you've noticed this already, and you are right about the 'further alarm bells'.

Ultimately, the awareness of the layperson matters for the same reason that it matters for any other political issue. While with AI people can't get their idealistic warm fuzzies out of barely relevant things like 'turning off a light bulb', things like 'how they vote' do matter - even if it is at a lower level of 'voting', along the lines of 'which institutions do you consider more prestigious?'

You can ramp up the variance of people's opinions and come out better financially.

Good point!

Comment author: Eneasz 24 August 2010 05:46:22PM 30 points [-]

informing SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible

I believe you are completely ignoring the status-demolishing effects of hypocrisy and insincerity.

When I first started watching Blogging Heads discussions featuring Eliezer I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts, here comes the abhorrent back-pedaling and waffling". Instead he met it head on with complete honesty and did so in a way I've never seen other people able to pull off - without sounding nuts at all. In fact, sounding very reasonable. I've since updated enough that I no longer wince and hold my breath; I smile and await the triumph.

If, as most people (and nearly all politicians) do, he would have waffled and presented an argument that he doesn't honestly hold, but that is more publicly acceptable, I'd feel disappointed and a bit sickened and I'd tune out the rest of what he has to say.

Hypocrisy is transparent. People (including neurotypical people) very easily see when others are making claims they don't personally believe, and they universally despise such actions. Politicians and lawyers are among the most hated groups in modern societies, in large part because of this hypocrisy. They are only tolerated because they are seen as a necessary evil.

Right now, People Working To Reduce Existential Risk are not seen as necessary. So it's highly unlikely that hypocrisy among them would be tolerated. They would repel anyone currently inclined to help, and their hypocrisy wouldn't draw in any new support. The answer isn't to try to deceive others about your true beliefs, it is to help make those beliefs more credible among the incredulous.

I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).

Comment author: Eliezer_Yudkowsky 24 August 2010 06:30:57PM 17 points [-]

When I first started watching Blogging Heads discussions featuring Eliezer I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts, here comes the abhorrent back-pedaling and waffling". Instead he met it head on with complete honesty

I am so glad that someone notices and appreciates this.

I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).

Agreed.

Comment author: pnrjulius 12 June 2012 02:59:28AM 2 points [-]

On the other hand... people say they hate politicians and then vote for them anyway.

So hypocrisy does have upsides, and maybe we shouldn't dismiss it so easily.

Comment author: James_Miller 18 August 2010 03:30:54PM 7 points [-]

Given how superficially insane Eliezer's beliefs seem he has done a fantastic job of attracting support for his views.

Eliezer is popularizing his beliefs, not directly through his own writings, but by attracting people (such as conference speakers and this comment writer who is currently writing a general-audience book) who promote understanding of issues such as intelligence explosion, unfriendly AI and cryonics.

Eliezer is obviously not neurotypical. The non-neurotypical have a tough time making arguments that emotionally connect. Given that Eliezer has a massive non-comparative advantage in making such arguments we shouldn't expect him to spend his time trying to become slightly better at doing so.

Eliezer might not have won the backing of people such as super-rationalist self-made tech billionaire Peter Thiel had Eliezer devoted less effort to rational arguments.

Comment author: michaelkeenan 15 August 2010 09:30:50AM 7 points [-]

During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims.

Can you tell us more about how you've seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn't a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I'm not looking, so would like to know more about that.

Did that objectionable Yudkowsky-meteorite comment get widely disseminated? YouTube says the video has only 500 views, and I imagine most of those are from Yudkowsky-sympathizing Less Wrong readers.

Comment author: wedrifid 15 August 2010 10:50:18AM 18 points [-]

But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I'm not looking, so would like to know more about that.

Yudkowsky-hatred isn't the risk; Yudkowsky-mild-contempt is. People engage with things they hate; sometimes it brings respect and attention to both parties (by polarizing a crowd that would otherwise be indifferent). But you never want to be exposed to mild contempt.

I can think of some examples of conversations about Eliezer that would fit the category but it is hard to translate them to text. The important part of the reaction was non-verbal. Cryonics was one topic and the problem there wasn't that it was uncredible but that it was uncool. Another topic is the old "thinks he can know something about Friendly AIs when he hasn't even made an AI yet" theme. Again, I've seen that reaction evident through mannerisms that in no way translate to text. You can convey that people aren't socially relevant without anything so crude as saying stuff.

Comment author: Kaj_Sotala 15 August 2010 11:19:33AM 14 points [-]

Cryonics was one topic and the problem there wasn't that it was uncredible but that it was uncool.

[insert the obvious bad pun here]

Comment author: wedrifid 15 August 2010 11:33:39AM 6 points [-]

I know, I couldn't think of a worthy witticism to lampshade it, so I let it slide. :P

Comment author: multifoliaterose 15 August 2010 10:53:19AM 13 points [-]

Can you tell us more about how you've seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn't a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I'm not looking, so would like to know more about that.

I haven't seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson's view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer's status to compensate for what people perceive as inappropriate status grubbing on his part.

Most of the smart people who I know (including myself) perceive him as exhibiting a high degree of overconfidence in the validity of his views about the world.

This leads some of them to conceptualize him as a laughingstock - as somebody who's totally oblivious - and to feel that the idea that we should be thinking about artificial intelligence is equally worthy of ridicule. I personally am quite uncomfortable with these attitudes, agreeing with Holden Karnofsky's comment:

"I believe that there are enormous risks and upsides associated with artificial intelligence. Managing these deserves serious discussion, and it’s a shame that many laugh off such discussion."

I'm somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.

Did that objectionable Yudkowsky-meteorite comment get widely disseminated? YouTube says the video has only 500 views, and I imagine most of those are from Yudkowsky-sympathizing Less Wrong readers.

Yes, I think that you're right. I just picked it out as a very concrete example of a statement that could provoke a substantial negative reaction. There are other qualitatively similar (but more mild) things that Eliezer has said that have been more widely disseminated.

Comment author: bentarm 15 August 2010 11:45:24AM 10 points [-]

I haven't seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him

Ditto.

I know of a lot of very smart people (ok, less than 10, but still, more than 1) who essentially read Eliezer's AI writings as a form of entertainment, and don't take them even slightly seriously. This is partly because of the Absurdity Heuristic, but I think it's also because of Eliezer's writing style, and statements like the one in the initial post.

I personally fall somewhere between these people and, say, someone who has spent a summer at the SIAI on the 'taking Eliezer seriously' scale - I think he (and the others) probably have a point, and I at least know that they intend to be taken seriously, but I've never gotten round to doing anything about it.

Comment author: CarlShulman 15 August 2010 12:02:39PM 2 points [-]

who essentially read Eliezer's AI writings as a form of entertainment, and don't take them even slightly seriously.

Why do they find them entertaining?

Comment author: bentarm 15 August 2010 12:44:34PM 6 points [-]

As XiXiDu says - pretty much the same reason they find Isaac Asimov entertaining.

Comment author: XiXiDu 15 August 2010 12:38:13PM 6 points [-]

I said the same before. It's mainly good science fiction. I'm trying to find out if there's more to it though.

Just saying this as evidence that there is a lot of doubt even within the LW community.

Comment author: michaelkeenan 15 August 2010 06:23:24PM 7 points [-]

I'm somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.

Oh, definitely. I have no real-life friends who are interested enough in these topics to know who Yudkowsky is (except, possibly, for what little they hear from me, and I try to keep the proselytizing to acceptable levels). So it's just me and the internet.

I haven't seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson's view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer's status to compensate for what people perceive as inappropriate status grubbing on his part.

I have seen some ridicule of Yudkowsky (on the internet) but my impression had been that it wasn't a reaction to his tone, but rather that people were using the absurdity heuristic (cryonics and AGI are crazy talk) or reacting to surface-level status markers (Yudkowsky doesn't have a PhD). That is to say, it didn't seem the kind of ridicule that was avoidable by managing one's tone. I don't usually read ridicule in detail so it makes sense I'd be mistaken about that.

Comment author: timtyler 15 August 2010 11:25:51AM *  1 point [-]

Recalling Hanson's view that a lot of human behavior is really signaling and vying for status

Existential risk reduction too! Charities are mostly used for signalling purposes - and to display affiliations and interests. Those caught up in causes use them for social networking with like-minded individuals - to signal how much they care, to signal how much spare time and energy they have - and so on. The actual cause is usually not irrelevant - but it is not particularly central either. It doesn't make much sense to expect individuals to be actually attempting to SAVE THE WORLD! This is much more likely to be a signalling phenomenon, making use of a superstimulus for viral purposes.

Comment author: XiXiDu 15 August 2010 01:32:06PM *  16 points [-]

Negative reactions to Yudkowsky from various people (academics concerned with x-risk), just within the past few weeks:

I also have an extreme distaste for Eliezer Yudkowsky, and so I have a hard time forcing myself to cooperate with any organization that he is included in, but that is a personal matter.

You know, maybe I'm not all that interested in any sort of relationship with SIAI after all if this, and Yudkowsky, are the best you have to offer.

...

There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.

...

Wow, that's an incredibly arrogant put-down by Eliezer..SIAI won't win many friends if he puts things like that...

...

...he seems to have lost his mind and written out of strong feelings. I disagree with him on most of these matters.

...

Questions of priority - and the relative intensity of suffering between members of different species - need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it's unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Elizeer accords frogs.

I was told that the quotes above state some ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that indeed some person might not have been honest, or clueful. Otherwise I'll unnecessarily end up perpetuating possible ad hominem attacks.

Comment author: Eliezer_Yudkowsky 18 August 2010 03:06:41PM 15 points [-]

who has never written a single computer program

utterly false, wrote my first one at age 5 or 6, in BASIC on a ZX-81 with 4K of RAM

The fact that a lot of these reactions are based on false info is worth noting. It doesn't defeat any arguments directly, but it does show that the naive model, in which everything happens because of the direct perception of actions I control, is false.

Comment author: NancyLebovitz 15 August 2010 10:27:03PM 8 points [-]

Is it likely that someone who's doing interesting work that's publicly available wouldn't attract some hostility?

Comment author: Jonathan_Graehl 16 August 2010 10:10:25PM 3 points [-]

I guess that was what shocked me about Eliezer's bald assertion that frogs have no moral status.

This seems a rather minor objection.

Comment author: Emile 18 August 2010 03:26:20PM 6 points [-]

But frogs are CUTE!

And existential risks are boring, and only interest Sci-Fi nerds.

Comment author: Vladimir_Nesov 15 August 2010 01:43:23PM 6 points [-]

That N negative reactions about issue S exist only means that issue S is sufficiently popular.

Comment author: CarlShulman 15 August 2010 01:53:58PM 5 points [-]

Not if the polling is of folk in a position to have had contact with S, or is representative.

Comment author: Vladimir_Nesov 15 August 2010 02:03:34PM 3 points [-]

Sure, but XiXiDu's quotes bear no such framing.

Comment author: XiXiDu 15 August 2010 01:55:44PM 5 points [-]

I don't like to, but if necessary I can provide the identity of the people who stated the above. They all work directly to reduce x-risks. I won't do so in public, however.

Comment author: Vladimir_Nesov 15 August 2010 02:05:02PM *  4 points [-]

Identity of these people is not the issue. The percentage of people in a given category who have negative reactions for a given reason, negative reactions for other reasons, and positive reactions would be useful, but not a bunch of soldier-arguments filtered in some unknown way.

Comment author: XiXiDu 15 August 2010 02:12:21PM *  6 points [-]

I know. I just wanted to highlight, however, that there are negative reactions, including some not-so-negative critique. If you look further, you'll probably find more. I haven't saved everything I saw over the years; I just wanted to show that it's not as if nobody has a problem with EY. And on every such occasion I actually defended him, by the way.

The context is also difficult to provide, as some of it is from private e-mails. The first one, though, is from here, and after thinking about it I can also provide the name, since he was telling this to Michael Anissimov anyway. It is from Sean Hays:

Sean A. Hays, PhD; Postdoctoral Fellow, Center for Nanotechnology in Society at ASU; Research Associate, ASU-NAF-Slate Magazine "Future Tense" Initiative; Program Director, IEET Securing the Future Program

Comment author: Rain 18 August 2010 04:48:56PM *  4 points [-]

You have a 'nasty things people say about Eliezer' quotes file?

Comment author: Vladimir_M 15 August 2010 06:58:39PM *  46 points [-]

I am a relative newbie commenter here, and my interest in this site has so far been limited to using it as a fun forum where it's possible to discuss all kinds of sundry topics with exceptionally smart people. However, I have read a large part of the background sequences, and I'm familiar with the main issues of concern here, so even though it might sound impertinent coming from someone without any status in this community, I can't resist commenting on this article.

To put it bluntly, I think the main point of the article is, if anything, an understatement. Let me speak from personal experience. From the perspective of this community, I am the sort of person who should be exceptionally easy to get interested and won over to its cause, considering both my intellectual background and my extreme openness to contrarian viewpoints and skepticism towards official academic respectability as a criterion of truth and intellectual soundness. Yet, to be honest, even though I find a lot of the writing and discussion here extremely interesting, and the writings of Yudkowsky (in addition to others such as Bostrom, Hanson, etc.) have convinced me that technology-related existential risks should be taken much more seriously than they presently are, I still keep encountering things in this community that set off various red flags, which are undoubtedly taken by many people as a sign of weirdness and crackpottery, and thus alienate a huge portion of the potential quality audience.

Probably the worst such example I've seen was the recent disturbance in which Roko was subjected to abuse that made him leave. When I read the subsequent discussions, it surprised me that virtually nobody here appears to be aware what an extreme PR disaster it was. Honestly, for someone unfamiliar with this website who has read about that episode, it would be irrational not to conclude that there's some loony cult thing going on here, unless he's also presented with enormous amounts of evidence to the contrary in the form of a selection of the best stuff that this site has to offer. After these events, I myself wondered whether I want to be associated with an outlet where such things happen, even just as an occasional commenter. (And not to even mention that Roko's departure is an enormous PR loss in its own right, in that he was one of the few people here who know how to write in a way that's interesting and appealing to people who aren't hard-core insiders.)

Even besides this major PR fail, I see many statements and arguments here that may be true, or at least not outright unreasonable, but should definitely be worded more cautiously and diplomatically if they're given openly for the whole world to see. I'm not going to get into details of concrete examples -- in particular, I do not concur unconditionally with any of the specific complaints from the above article -- but I really can't help but conclude that lots of people here, including some of the most prominent individuals, seem oblivious as to how broader audiences, even all kinds of very smart, knowledgeable, and open-minded people, will perceive what they write and say. If you want to have a closed inner circle where specific background knowledge and attitudes can be presumed, that's fine -- but if you set up a large website attracting lots of visitors and participants to propagate your ideas, you have to follow sound PR principles, or otherwise its effect may well end up being counter-productive.

Comment author: prase 16 August 2010 04:01:47PM 21 points [-]

I agree completely. I still read LessWrong because I am a relatively long-time reader, and thus I know that most of the people here are sane. Otherwise, I would conclude that there is some cranky process going on here. Still, the Roko affair caused me to significantly lower my probabilities assigned to SIAI success and forced me to seriously consider the hypothesis that Eliezer Yudkowsky went crazy.

By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality, as the blog's header proudly states, while instead the posts often discuss a relatively narrow list of topics which are only tangentially related to rationality: e.g., cryonics, AI stuff, evolutionary psychology, Newcomb-like scenarios.

Comment author: Morendil 16 August 2010 04:26:50PM 4 points [-]

By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality

Part of that mission is to help people overcome the absurdity heuristic, and to help them think carefully about topics that normally trigger a knee-jerk reflex of dismissal on spurious grounds; it is in this sense that cryonics and the like are more than tangentially related to rationality.

I do agree with you that too much of the newer material keeps returning to those few habitual topics that are "superstimuli" for the heuristic. This perhaps prevents us from reaching out to newer people as effectively as we could. (Then again, as LW regulars we are biased in that we mostly look at what gets posted, when what may matter more for attracting and keeping new readers is what gets promoted.)

A site like YouAreNotSoSmart may be more effective at introducing these ideas to newcomers, to the extent that it mostly deals with run-of-the-mill topics. What makes LW valuable, and what YANSS lacks, is constructive advice for becoming less wrong.

Comment author: [deleted] 15 August 2010 07:19:48PM 11 points [-]

Agreed.

One good sign here is that LW, unlike most other non-mainstream organizations, doesn't really function like a cult. Once one person starts being critical, critics start coming out of the woodwork. I have my doubts about this place sometimes too, but it has a high density of knowledgeable and open-minded people, and I think it has a better chance than anyone of actually acknowledging and benefiting from criticism.

I've tended to overlook the weirder stuff around here, like the Roko feud -- it got filed under "That's confusing and doesn't make sense" rather than "That's an outrage." But maybe it would be more constructive to change that attitude.

Comment author: Kevin 16 August 2010 10:01:47AM *  9 points [-]

What are the scenarios where someone unfamiliar with this website would hear about Roko's deleted post?

I suppose it could be written about dramatically (because it was dramatic!) but I don't think anyone is going to publish such an account. It was bad from the perspective of most LWers -- a heuristic against censorship is a good heuristic.

This whole thing is ultimately a meta discussion about moderation policy. Why should this discussion about banned topics be that much more interesting than a post on Hacker News that is marked as dead? Hacker News generally doesn't allow discussion of why stories were marked dead. The moderators are anonymous and have unquestioned authority.

If Less Wrong had a mark as dead function (on HN unregistered users don't see dead stories, but registered users can opt-in to see them), I suspect Eliezer would have killed Roko's post instead of deleting it to avoid the concerns of censorship, but no one has written that LW feature yet.

As a solid example of how little of a PR disaster this was, I doubt that anyone at the Singularity Summit who isn't a regular Less Wrong reader (the majority of attendees) has heard that Eliezer deleted a post. It's just not the kind of thing that actually makes a PR disaster... honestly, if this were a PR issue it might be a net positive, because it would lead some people who would otherwise never have heard of Less Wrong to hear of it. Please don't take that as a reason to make this a PR issue.

Eliezer succeeded in the sense that it is very unlikely that people in the future on Less Wrong are going to make stupid emotionally abhorrent posts about weird decision theory torture scenarios. He failed in that he could have handled the situation better.

If anyone would like to continue talking about Less Wrong moderation policy, the place to talk about it is the Meta Thread (though you'd probably want to make a new one (good for +[20,50] karma!) instead of discussing it in an out of season thread)

Comment author: homunq 31 August 2010 03:37:26PM 6 points [-]

As someone who had over 20 points of karma obliterated for reasons I don't fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial. I still don't really know what happened. Of course I have vague theories, and I've received a terse and unhelpful response from EY (a link to a horror story about a "riddle" which kills - a good story which I simply don't accept as a useful parable of reality), but nothing clear. I do not think that I have anything of outstanding value to offer this community, but I suspect that Roko, little I, and the half-dozen others like us who probably exist are a net loss to the community if driven away, especially if not being seen as cultlike is valuable.

Comment author: Airedale 31 August 2010 05:49:37PM *  3 points [-]

As someone who had over 20 points of karma obliterated for reasons I don't fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial.

I believe you lost 20 karma because you had 2 net downvotes on your post at the time it was deleted (and those votes still affect your total karma, although the post cannot be further upvoted or downvoted). The loss of karma did not result directly from the deletion of the post, except for the fact that the deletion froze the post’s karma at the level it was at when it was deleted.

I only looked briefly at your post, don't remember very much about it, and am only one reader here, but from what I recall, your post did not seem so obviously good that it would have recovered from those two downvotes. Indeed, my impression is that it's more probable that if the post had been left up longer, it would have been even more severely downvoted than it was at the time of deletion, as is the case with many people's first posts. I'm not very confident about that, but there certainly would have been that risk.

All that being said, I can understand if you would rather have taken the risk of an even greater hit to karma if it would have meant that people were able to read and comment on your post. I can also sympathize with your desire for a clearer moderation policy, although unless EY chose to participate in the discussion, I don’t think clearer standards would emerge, because it’s ultimately EY’s call whether to delete a post or comment. (I think there are a couple others with moderation powers, but it’s my understanding that they would not independently delete a non-troll/spam post).

Comment author: homunq 01 September 2010 12:58:19PM 3 points [-]

I think it was 30 karma points (3 net downvotes), though I'm not sure. And I believe that it is entirely possible that some of those downvotes (more than 3, because I had at least 3 upvotes) were for alleged danger, not for lack of quality. Most importantly, if the post hadn't been deleted, I could have read the comments which presumably would have given me some indication of the reason for those downvotes.

Comment author: Will_Newsome 16 August 2010 09:48:34AM 2 points [-]

Looking at my own posts, I see a lot of this problem; that is, the problem of addressing far too small an audience. Thank you for pointing it out.

Comment author: Jordan 15 August 2010 06:33:45PM 6 points [-]

Dammit! My smug self-assurance that I could postpone thinking about these issues seriously because I'm an SIAI donor... GONE! How am I supposed to get any work done now?

Seriously though, I do wish the SIAI toned down its self importance and incredible claims, however true they are. I realize, of course, that dulling some claims to appear more credible is approaching a Dark Side type strategy, but... well, no buts. I'm just confused.

Comment author: Rain 15 August 2010 02:28:19PM *  14 points [-]

Have we seen any results (or even progress) come from the SIAI Challenge Grants, which included a Comprehensive Singularity FAQ and many academic papers dealing directly with the topics of concern? These should hopefully be less easy to ridicule and provide an authoritative foundation after the peer review process.

Edit: And if they fail to come to fruition, then we have some strong evidence to doubt SIAI's effectiveness.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:28:13PM 17 points [-]

I don't mean to dismiss the points of this post, but all of those points do need to be reinterpreted in light of the fact that I'd rather have a few really good rationalists as allies than a lot of mediocre rationalists who think "oh, cool" and don't do anything about it. Consider me as being systematically concerned with the top 5% rather than the average case. However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.

Comment author: XiXiDu 18 August 2010 02:58:32PM 9 points [-]

Somewhere you said that you are really happy to be finally able to concentrate directly on the matters you deem important and don't have to raise money anymore. This obviously worked, so you won't have to change anything. But if you ever need to raise more money for a certain project, my question is how much of the money you already get comes from people you would consider mediocre rationalists?

I'm not sure if you expect to ever need a lot of money for a SIAI project, but if you solely rely on those few really good rationalists then you might have a hard time in that case.

People like me will probably always stay on your side, whether or not you tell them they are idiots. But I'm not sure that will be enough in a scenario where donations are important.

Comment author: multifoliaterose 18 August 2010 04:17:02PM *  6 points [-]

Agree with the points of both ChristianKl and XiXiDu.

As for really good rationalists, I have the impression that you inadvertently alienate even them with higher-than-usual frequency, on account of saying things that sound quite strange.

I think (but am not sure) that you would benefit from spending more time understanding what goes on in neurotypical people's minds. This would carry not only social benefits (which you may no longer need very much at this point) but also epistemological benefits.

However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.

I'm encouraged by this remark.

Comment author: ChristianKl 18 August 2010 03:09:18PM 2 points [-]

If we think existential risk reduction is important, then we should care about whether politicians think that existential risk reduction is a good idea. I don't think that a substantial number of US congressmen are what you consider to be good rationalists.

Comment author: Eliezer_Yudkowsky 18 August 2010 04:28:20PM 12 points [-]

For Congress to implement good policy in this area would be performance vastly exceeding what we've previously seen from them. They called prediction markets terror markets. I expect more of the same, and expect to have little effect on them.

Comment author: Psy-Kosh 18 August 2010 08:58:00PM 9 points [-]

The flipside though is if we can frame the issue in a way that there's no obvious Democrat or Republican position, then we can, as Robin Hanson puts it, "pull the rope sideways".

The very fact that much of the existential risk stuff is "strange sounding" relative to what most people are used to really thinking about in the context of political arguments might thus act as a positive.

Comment author: Vladimir_Nesov 15 August 2010 10:07:24AM *  4 points [-]

I don't find persuasive your arguments that the following policy suggestion has high impact (or indeed is something to worry about, in comparison with other factors):

"requiring [SIAI] staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible"

(Note that both times I qualified the suggestion to contact SIAI (instead of starting a war) with "if you have a reasonable complaint/usable suggestion for improvement".)

Comment author: Perplexed 15 August 2010 06:22:57PM 10 points [-]

I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

I find it impossible to believe that the author of Harry Potter and the Methods of Rationality is oblivious to the first impression he creates. However, I can well believe that he imagines it to be a minor handicap which will fade in importance with continued exposure to his brilliance (as was the fictional case with HP). The unacknowledged problem in the non-fictional case, of course, is in maintaining that continued exposure.

I am personally currently skeptical that the singularity represents existential risk. But having watched Eliezer completely confuse and irritate Robert Wright, and having read half of the "debate" with Hanson, I am quite willing to hypothesize that the explanation of what the singularity is (and why we should be nervous about it) ought to come from anybody but Eliezer. He speaks and writes clearly on many subjects, but not that one.

Perhaps he would communicate more successfully on this topic if he tried a dialog format. But it would have to be one in which his constructed interlocutors are convincing opponents, rather than straw men.

Comment author: wedrifid 15 August 2010 09:05:06AM *  12 points [-]

I like your post. I wouldn't go quite so far as to ascribe outright negative utility to SIAI donations - I believe you underestimate just how much potential social influence money provides. I suspect my conclusion there would approximately mirror Vassar's.

It would not be at all surprising if the founders of Less Wrong have a similar unusual abundance of the associated with Aspergers Syndrome. I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

(Typo: I think you meant to include 'traits' or similar in there.)

While Eliezer occasionally takes actions that seem clearly detrimental to his cause, I do suggest that he is at least in principle aware of the dynamics you discuss. His alter ego "Harry Potter" has had similar discussions with his Draco in his fanfiction.

Also note that appearing too sophisticated would be extremely dangerous. If Eliezer or SIAI gains the sort of status and credibility you would like them to seek, they open themselves to threats from governments and paramilitary organisations. If you are trying to take over the world, it is far better to be seen as an idealistic do-gooder who writes fanfic than as a political power player. You don't want the <TLA of choice> to raid your basement, kill you, and run your near-complete FAI with the values of the TLA. Obviously there is some sort of balance to be reached here...

Comment author: XiXiDu 15 August 2010 01:09:37PM *  10 points [-]

I raised a similar point on the IEET existential risk mailing list in a reply to James J. Hughes:

Michael

For the record, I have no problem with probability estimates. I am less and less willing to offer them myself, however, since we have the collapse of the Soviet Union etc. as evidence of the chaotic and unpredictable nature of history, Ray's charts notwithstanding.

What I find a continuing source of amazement is that there is a subculture of people, half of whom believe that AI will lead to the solving of all mankind's problems (which we might call Kurzweilian S^) and the other half of which is more or less certain (75% certain) that it will lead to annihilation. Let's call the latter the SIAI S^.

Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.

And instead of waging desperate politico-military struggle to stop all this suicidal AI research you cheerlead for it, and focus your efforts on risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.

You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.

But as someone deeply concerned about these issues I find the irrationality of the S^ approach to a-life and AI threats deeply troubling.

James J. Hughes (existential.ieet.org mailing list, 2010-07-11)

I replied:

Keep your friends close...maybe they just want to keep the AI crowd as close together as possible. Making enemies wouldn't be a smart idea either, as the 'K-type S^' subgroup would likely retreat from further information disclosure. Making friends with them might be the best idea.

An explanation of the rather calm stance regarding a potential giga-death or living hell event would be to keep a low profile until acquiring more power.

Comment author: wedrifid 15 August 2010 05:09:50PM *  12 points [-]

Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations. ... You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.

This is a perfect example of where the 'outside view' can go wrong. Even the most basic 'inside view' of the topic would make it overwhelmingly obvious why the "75% certain of death by AI" folks could be allied with (or be the same people as!) the "solve all problems through AI" group. Splitting the two positions prematurely and trying to make a simple model of political adversity like that is just naive.

I personally guess >= 75% for AI death and also advocate FAI research. Preventing AI development indefinitely via desperate politico-military struggle would just not work in the long term. Trying would be utter folly. Never mind that the even longer term would probably result in undesirable outcomes even if humanity did manage to artificially stunt its own progress in such a manner.

(The guy also uses 'schizophrenic' incorrectly.)

Comment author: Aleksei_Riikonen 15 August 2010 07:35:32PM *  4 points [-]

I don't think James Hughes would present or believe in that particular low-quality analysis himself either, if he didn't feel that SIAI is an organization competing with his IEET for popularity within the transhumanist subculture.

So mostly that statement is probably just about using "divide and conquer" towards transhumanists/singularitarians who are currently more popular within the transhumanist subculture than he is.

Comment author: Rain 15 August 2010 04:50:24PM *  2 points [-]

Seeing you quote James Hughes makes me wonder whether I failed to realize where you were getting your ideas when I said the anti-Summit should be technically minded and avoid IEET-style politics.

Comment author: [deleted] 19 August 2010 02:35:43AM *  3 points [-]

It sounds to me like half of the perceived public image problem comes from apparently blurred lines between the SIAI and LessWrong, and between the SIAI and Eliezer himself. These could be real problems - I generally have difficulty explaining one of the three without mentioning the other two - but I'm not sure how significant it is.

The ideal situation would be that people would evaluate SIAI based on its publications, the justification of the research areas, and whether the current and proposed projects satisfy those goals best, are reasonably costed, and are making progress.

Whoever actually holds these as the points to be evaluated will find the list of achievements. Individual projects all have detailed proposals and a budget breakdown, since donors can choose to donate directly to one research project or another.

Finally, a large number of those projects are academic papers. If you dig a bit, you'll find that many of these papers are presented at academic and industry conferences. Hosting the Singularity Summit doesn't hurt either.

It doesn't make sense to downplay a researcher's strange viewpoints if those viewpoints seem valid. Eliezer believes his viewpoint to be valid. LessWrong, a project of his, has a lot of people who agree with his ideas. There are also people who disagree with some of his ideas, but the point is that it shouldn't matter. LessWrong is a project of SIAI, not the organization itself. Support on this website of his ideas should have little to do with SIAI's support of his ideas.

Your points seem to be that claims made by Eliezer and upheld by the SIAI don't appear credible due to insufficient argument, and due to one person's personality. You can argue all you want about how he is viewed. You can debate the published papers' worth. But the two shouldn't be equated. This despite the fact that he's written half of the publications.

Here are the questions (that tie to your post) which I think are worth discussing on public relations, if not the contents of the publications:

  • Do people equate "The views of Eliezer Yudkowsky" with "The views of SIAI"? Do people view the research program or organization as "his" project?
  • Which people, and to what extent?
  • Is this good or bad, and how important is it?

The optimal answer to those questions is the one that leads the most AI researchers to evaluate the most publications with the respect of serious scrutiny and consideration.

I'll repeat that other people have published papers with the SIAI, that their proposals are spelled out, that some papers are presented at academic and industry conferences, and that the SIAI's Singularity Summit hosts speakers who do not agree with all of Eliezer's opinions, who nonetheless associate with the organization by attendance.

Comment author: [deleted] 19 August 2010 02:46:19AM 6 points [-]

To top it off, the SIAI is responsible for getting James Randi's seal of approval on the Singularity being probable. That's not poisoning the meme, not one bit.

Comment author: nonhuman 21 August 2010 03:37:13AM 4 points [-]

I feel it's worth pointing out that just because something should be, doesn't mean it is. You state:

Your points seem to be that claims made by Eliezer and upheld by the SIAI don't appear credible due to insufficient argument, and due to one person's personality. You can argue all you want about how he is viewed. You can debate the published papers' worth. But the two shouldn't be equated.

I agree with the sentiment, but how practical is it? Just because it would be incorrect to equate Eliezer and the SIAI doesn't mean that people won't do it. Perhaps it would be reasonable to say that the people who fail to make the distinction are also the people on whom it's not worth expending the effort trying to explicate the situation, but I suspect it is still the case that the majority of people are going to have a hard time not making that equation, if they even try at all.

The point of this article, I would presume to say, is that public relations actually serves a valid and useful purpose. It is not a wasted effort to ensure that the ideas one considers true, or at least worthwhile, are presented in the sort of light that encourages people to take them seriously. This is something that I think many people of a more intellectual bent often fail to consider; though some of us might actually invest time and effort in determining for ourselves whether an idea is good or not, I would say the majority do not, and instead rely on trusted sources to guide them (with often disastrous results).

Again, it may just be that we don't care about those people (and it's certainly tempting to go that way), but there may be times when quantity of supporters, in addition to quality, could be useful.

Comment author: [deleted] 21 August 2010 06:32:48PM 2 points [-]

We don't disagree on any point that I can see. I was contrasting an ideal way of looking at things (part of what you quoted) from how people might actually see things (my three bullet-point questions).

As much as I enjoy Eliezer's thoughts and respect his work, I'm also of the opinion that one of the tasks the SIAI must work on (and almost certainly is working on) is keeping his research going while making the distinction between the two entities more obvious. But to whom? The research community should be the first and primary target.

Coming back from the Summit, I feel that they're taking decent measures toward this. The most important thing to do is for the other SIAI names to be known. Michael Vassar's is the easiest to get people to hold because of the name of his role, and he was acting as the SIAI face more than Eliezer was. At this point, a dispute would make the SIAI look unstable - they need positive promotion of leadership and idea diversity, more public awareness of their interactions with academia, and that's about it.

Housing a clearly promoted second research program would solve this problem. If only there was enough money, and a second goal which didn't obviously conflict with the first, and the program still fit under the mission statement. I don't know if that is possible. Money aside, I think that it is possible. Decision theoretic research with respect to FAI is just one area of FAI research. Utterly essential, but probably not all there is to do.

Comment author: thomblake 16 August 2010 02:17:13PM 3 points [-]

This post reminds me of the talk at this year's H+ summit by Robert Tercek. Amongst other things, he was pointing out how the PR battle over transhumanist issues was already lost in popular culture, and that the transhumanists were not helping matters by putting people with very freaky ideas in the spotlight.

I wonder if there are analogous concerns here.

Comment author: mranissimov 16 August 2010 07:24:11AM 3 points [-]

Just to check... have I said any "naughty" things analogous to the Eliezer quote above?

Comment author: Larks 17 August 2010 10:11:48PM *  6 points [-]

This page is now the 8th result for a Google search for 'existential risk' and the 4th result for 'singularity existential risk'.

Regardless of the effect SIAI may have had on the public image of existential risk reduction, it seems this is unlikely to be helpful.

Edit: it is now 7th and first, respectively. This is plusungood.

Comment author: jimrandomh 18 August 2010 03:32:02AM 8 points [-]

This is partially because Google gives a ranking boost to things it sees as recent, so it may not stay that well ranked.

Comment author: JoshuaZ 15 August 2010 04:28:32PM 7 points [-]

I disagree strongly with this post. In general, it is a bad idea to refrain from making claims that one believes are true simply because those claims will make people less likely to listen to other claims. Down that path lies the downward spiral of emotional manipulation, rhetoric, and other things not conducive to rational discourse.

Would one under this logic encourage the SIAI to make statements that are commonly accepted but wrong in order to make people more likely to listen to the SIAI? If not, what is the difference?

Comment author: multifoliaterose 15 August 2010 05:26:31PM *  6 points [-]

I believe that there are contexts in which the right thing to do is to speak what one believes to be true even if doing so damages public relations.

These things need to be decided on a case-by-case basis. There's no royal road to instrumental rationality.

As I say here, in the present context, a very relevant issue in my mind is that Eliezer & co. have not substantiated their most controversial claims with detailed evidence.

It's clichéd to say so, but extraordinary claims require extraordinary evidence. A claim of the type "I'm the most important person alive" is statistically many orders of magnitude more likely to be made by a poser than by somebody for whom the claim is true. Casual observers are rational to believe that Eliezer is a poser. The halo effect problem is irrational, yes, but human irrationality must be acknowledged; it's not the sort of thing that goes away if you pretend that it's not there.
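
To make the base-rate point concrete, here is a minimal sketch with entirely hypothetical numbers of my own choosing (none of them come from the post): even if a genuinely pivotal person is far more likely to make the claim than a random poser is, a casual observer's tiny prior keeps the posterior low.

    # Minimal Bayes sketch; all numbers are hypothetical illustrations.
    prior_true = 1e-9         # prior that a given claimant really is the most important person alive
    p_claim_if_true = 0.9     # chance such a person would actually say so
    p_claim_if_false = 1e-5   # base rate at which posers make the same claim

    posterior = (p_claim_if_true * prior_true) / (
        p_claim_if_true * prior_true + p_claim_if_false * (1 - prior_true)
    )
    print(posterior)  # roughly 9e-5: the claim alone leaves a casual observer far from convinced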

I don't believe that Eliezer's outlandish and unjustified claims contribute to rational discourse. I believe that Eliezer's outlandish and unjustified claims lower the sanity waterline.

To summarize, I believe that in this particular case the costs that you allude to are outweighed by the benefits.

Comment author: timtyler 15 August 2010 06:03:07PM *  3 points [-]

Come on - he never actually claimed that.

Besides, many people have inflated views of their own importance. Humans are built that way. For one thing, it helps them get hired if they claim that they can do the job. It is sometimes funny - but surely not a big deal.

Comment author: timtyler 15 August 2010 04:32:54PM 2 points [-]

It seems as though the latter strategy could backfire - if the false statements were exposed. Keeping your mouth shut about controversial issues seems safer.

Comment author: Mitchell_Porter 15 August 2010 11:51:14AM 10 points [-]

But what if you're increasing existential risk, because encouraging SIAI staff to censor themselves will make them neurotic and therefore less effective thinkers? We must all withhold karma from multifoliaterose until this undermining stops! :-)

Comment author: [deleted] 15 August 2010 02:46:09PM 13 points [-]

I am one of those who haven't been convinced by the SIAI line. I have two main objections.

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.

Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

I think multifoliaterose is right that there's a PR problem, but it's not just a PR problem. It seems, unfortunately, to be a problem with having enough justification for claims, and a problem with connecting to the world of professional science. I think the PR problems arise from being too disconnected from the demands placed on other scientific or science policy organizations. People who study other risks, say epidemic disease, have to get peer-reviewed, they have to get government funding -- their ideas need to pass a round of rigorous criticism. Their PR is better by necessity.

Comment author: nhamann 15 August 2010 06:22:22PM *  6 points [-]

Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

I'm not sure what you refer to by "actual AI." There is a sub-field of academic computer science which calls itself "Artificial Intelligence," but it's not clear that this is anything more than a label, or that this field does anything more than use clever machine learning techniques to make computer programs accomplish things that once seemed to require intelligence (like playing chess, driving a car, etc.)

I'm not sure why it is a requirement that an organization concerned with the behavior of hypothetical future engineered minds would need to be in contact with these researchers.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:31:02PM 3 points [-]

I'm not sure why it is a requirement that an organization concerned with the behavior of hypothetical future engineered minds would need to be in contact with these researchers.

You have to know some of their math (some of it is interesting, some not), but this does not require getting on the phone with them and asking them to explain their math; of course they would just tell you to RTFM instead of calling them.

Comment author: [deleted] 15 August 2010 06:59:28PM 3 points [-]

Yes, the subfield of computer science is what I'm referring to.

I'm not sure that the difference between "clever machine learning techniques" and "minds" is as hard and fast as you make it. A machine that drives a car is doing one of the things a human mind does; it may, in some cases, do it through a process that's structurally similar to the way the human mind does it. It seems to me that machines that can do these simple cognitive tasks are the best source of evidence we have today about hypothetical future thinking machines.

Comment author: nhamann 15 August 2010 08:10:18PM 5 points [-]

I'm not sure that the difference between "clever machine learning techniques" and "minds" is as hard and fast as you make it.

I gave the wrong impression here. I actually think that machine learning might be a good framework for thinking about how parts of the brain work, and I am very interested in studying machine learning. But I am skeptical that more than a small minority of projects where machine learning techniques have been applied to solve some concrete problem have shed any light on how (human) intelligence works.

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

Comment author: komponisto 15 August 2010 10:19:50PM 11 points [-]

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

Although one should be very, very careful not to confuse the opinions of someone like Goertzel with those of the people (currently) at SIAI, I think it's fair to say that most of them (including, in particular, Eliezer) hold a view similar to this. And this is the location -- pretty much the only important one -- of my disagreement with those folks. (Or, rather, I should say my differing impression from those folks -- to make an important distinction brought to my attention by one of the folks in question, Anna Salamon.) Most of Eliezer's claims about the importance of FAI research seem obviously true to me (to the point where I marvel at the fuss that is regularly made about them), but the one that I have not quite been able to swallow is the notion that AGI is only decades away, as opposed to a century or two. And the reason is essentially disagreement on the above point.

At first glance this may seem puzzling, since, given how much more attention is given to narrow AI by researchers, you might think that someone who believes AGI is "fundamentally different" from narrow AI might be more pessimistic about the prospect of AGI coming soon than someone (like me) who is inclined to suspect that the difference is essentially quantitative. The explanation, however, is that (from what I can tell) the former belief leads Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some "insight" which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).

This -- much more than all the business about fragility of value and recursive self-improvement leading to hard takeoff, which frankly always struck me as pretty obvious, though maybe there is hindsight involved here -- is the area of Eliezer's belief map that, in my opinion, could really use more public, explicit justification.

Comment author: Daniel_Burfoot 15 August 2010 11:32:57PM 5 points [-]

whereas it seems to me like a daunting engineering task analogous to colonizing Mars

I don't think this is a good analogy. The problem of colonizing Mars is concrete. You can make a TODO list; you can carve the larger problem up into subproblems like rockets, fuel supply, life support, and so on. Nobody knows how to do that for AI.

Comment author: nhamann 16 August 2010 03:48:56AM *  3 points [-]

I don't think AGI in a few decades is very farfetched at all. There's a heckuvalot of neuroscience being done right now (the Society for Neuroscience has 40,000 members), and while it's probably true that much of that research is concerned most directly with mere biological "implementation details" and not with "underlying algorithms" of intelligence, it is difficult for me to imagine that there will still be no significant insights into the AGI problem after 3 or 4 more decades of this amount of neuroscience research.

Comment author: komponisto 16 August 2010 04:53:11AM *  3 points [-]

Of course there will be significant insights into the AGI problem over the coming decades -- probably many of them. My point was that I don't see AGI as hard because of a lack of insights; I see it as hard because it will require vast amounts of "ordinary" intellectual labor.

Comment author: nhamann 16 August 2010 06:10:36AM 9 points [-]

I'm having trouble understanding how exactly you think the AGI problem is different from any really hard math problem. Take P != NP, for instance: the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion, you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor," largely consisting of mapping out various complexity classes and their properties and relations. There's probably been at least 30 years of complexity theory research required to make that proof attempt even possible.

I think you might be able to argue that even if we had an excellent theoretical model of an AGI, the engineering effort required to actually implement it might be substantial and require several decades of work (e.g., if the Von Neumann architecture isn't suitable for AGI implementation, then a great deal of computer engineering has to be done).

If this is your position, I think you might have a point, but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time. A century ago humans barely had powered flight.

Comment author: Daniel_Burfoot 18 August 2010 06:03:12PM 4 points [-]

but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time.

I think the following quote is illustrative of the problems facing the field:

After [David Marr] joined us, our team became the most famous vision group in the world, but the one with the fewest results. His idea was a disaster. The edge finders they have now using his theories, as far as I can see, are slightly worse than the ones we had just before taking him on. We've lost twenty years.

-Marvin Minsky, quoted in "AI" by Daniel Crevier.

Some notes and interpretation of this comment:

  • Most vision researchers, if asked who is the most important contributor to their field, would probably answer "David Marr". He set the direction for subsequent research in the field; students in introductory vision classes read his papers first.
  • Edge detection is a tiny part of vision, and vision is a tiny part of intelligence, but at least in Minsky's view, no progress (or reverse progress) was achieved in twenty years of research by the leading lights of the field.
  • There is no standard method for evaluating edge detector algorithms, so it is essentially impossible to measure progress in any rigorous way.

I think this kind of observation justifies AI-timeframes on the order of centuries.

Comment author: komponisto 16 August 2010 07:35:04AM 7 points [-]

Take P != NP, for instance: the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion, you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor,

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

The way I think about it is: think of all the intermediate levels of technological development that exist between what we have now and outright Singularity. I would only be half-joking if I said that we ought to have flying cars before we have AGI. There are of course more important examples of technologies that seem easier than AGI, but which themselves seem decades away. Repair of spinal cord injuries; artificial vision; useful quantum computers (or an understanding of their impossibility); cures for the numerous cancers; revival of cryonics patients; weather control. (Some of these, such as vision, are arguably sub-problems of AGI: problems that would have to be solved in the course of solving AGI.)

Actually, think of math problems if you like. Surely there are conjectures in existence now -- probably some of them already famous -- that will take mathematicians more than a century from now to prove (assuming no Singularity or intelligence enhancement before then). Is AGI significantly easier than the hardest math problems around now? This isn't my impression -- indeed, it looks to me more analogous to problems that are considered "hopeless", like the "problem" of classifying all groups, say.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:36:25PM 10 points [-]

By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.

I hate to go all existence proofy on you, but we have an existence proof of a general intelligence - accidentally sneezed out by natural selection, no less, which has severe trouble building freely rotating wheels - and no existence proof of a proof of P != NP. I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind. I wonder if Scott Aaronson would agree with me on that, even though neither of us understand the other's field? (I just wrote him an email and asked, actually; and this time remembered not to say my opinion before asking for his.)

Comment author: timtyler 16 August 2010 06:28:37AM 2 points [-]

...but you don't really know - right?

You can't say with much confidence that there's no AIXI-shaped magic bullet.

Comment author: komponisto 16 August 2010 07:38:22AM *  2 points [-]

That's right; I'm not an expert in AI. Hence I am describing my impressions, not my fully Aumannized Bayesian beliefs.

Comment author: JoshuaZ 15 August 2010 10:31:10PM *  4 points [-]

In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.

One obvious piece of evidence is that many forms of narrow learning are mathematically incapable of doing much. There are, for example, a whole host of theorems about what different classes of neural networks can actually recognize, and the results aren't very impressive. Similarly, support vector machines have a lot of trouble learning anything that isn't a very simple statistical model, and even then humans need to decide which statistics are relevant. Other linear classifiers run into similar problems.
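
As a concrete instance of the kind of limitation theorem being gestured at (a sketch of my own, not part of the original comment; it uses scikit-learn's Perceptron purely as a stand-in for "linear classifier"), no linear model can represent XOR, however much data it is given:

    # A linear classifier cannot learn XOR: the four points are not linearly
    # separable, so no weight vector classifies all of them correctly.
    from sklearn.linear_model import Perceptron

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 1, 1, 0]  # XOR labels

    clf = Perceptron(max_iter=1000, tol=None).fit(X, y)
    print(clf.score(X, y))  # at most 0.75, since no linear boundary gets all four right

(Multi-layer networks escape this particular limitation, which is part of why such theorems are stated per class of model.)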

Comment author: Simulation_Brain 18 August 2010 06:20:49AM 3 points [-]

I work in this field, and was under approximately the opposite impression: that voice and visual recognition are rapidly approaching human levels. If I'm wrong and there are sharp limits, I'd like to know. Thanks!

Comment author: timtyler 18 August 2010 06:31:35AM *  2 points [-]

Machine intelligence has surpassed "human level" in a number of narrow domains. Already, humans can't manipulate enough data to do anything remotely like a search engine or a stockbot can do.

The claim seems to be that in narrow domains there are often domain-specific "tricks" - that wind up not having much to do with general intelligence - e.g. see chess and go. This seems true - but narrow projects often broaden out. Search engines and stockbots really need to read and understand the web. The pressure to develop general intelligence in those domains seems pretty strong.

Those who make a big deal about the distinction between their projects and "mere" expert systems are probably mostly trying to market their projects before they are really experts at anything.

One of my videos discusses the issue of whether the path to superintelligent machines will be "broad" or "narrow":

http://alife.co.uk/essays/on_general_machine_intelligence_strategies/

Comment author: ciphergoth 16 August 2010 06:14:01PM *  5 points [-]

There needs to be an article on this point. In the absence of a really good way of deciding what technologies are likely to be developed, you are still making a decision. You haven't signed up yet; whether you like it or not, that is a decision. And it's a decision that only makes sense if you think technology X is unlikely to be developed, so I'd like to see your prediction mechanism and whether it's worked in the past. In the absence of really good information, we sometimes have to decide on the information we have.

EDIT: I was thinking about cryonics when I wrote this, though the argument generalizes.

Comment author: John_Maxwell_IV 16 August 2010 12:24:34AM *  5 points [-]

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.

Let's keep in mind that your estimated probabilities of various technological advancements occurring and your level of confidence in those estimates are completely distinct... In particular, here you seem to express low estimated probabilities of various advancements occurring, and you justify this by saying "we really have no idea". This seems like a complete non sequitur. Maybe you have a correct argument in your mind, but you're not giving us all the pieces.

Comment author: [deleted] 16 August 2010 12:37:09AM 6 points [-]
  1. Technology X is likely to be developed in a few decades.
  2. Technology X is risky.
  3. We must take steps to mitigate the risk.

If you haven't demonstrated 1 -- if it's still unknown -- you can't expect me to believe 3. The burden of proof is on whoever's asking for money for a new risk-mitigating venture, to give strong evidence that the risk is real.

Comment author: Aleksei_Riikonen 16 August 2010 01:35:58AM *  4 points [-]

So you think a danger needs to be likely to arrive within a few decades for it to merit attention?

I think that is quite irresponsible. No law of physics states that all problems can certainly be solved very well in a few decades (the solutions for some problems might even necessarily involve political components, btw), so starting preparations earlier can be necessary.

Comment author: John_Maxwell_IV 16 August 2010 01:03:42AM *  2 points [-]

I see "burden of proof" as a misconcept in the same way that someone "deserving" something is. A better way of thinking about this: "You seem to be making a strong claim. Mind sharing the evidence for your claim for me? ...I disagree that the evidence you present justifies your claim."

For what it's worth, I also see "must _" as a misconcept--although "must _ to _" is not. It's an understandable usage if the "to _" clause is implicit, but that doesn't seem true in this case. So to fix up SIAI's argument, you could say that these are the statements whose probabilities are being contested:

  1. If SarahC takes action Y before the development of Technology X and Technology X is developed, the expected value of her action will exceed its cost.
  2. Technology X will be developed.

And depending on their probabilities, the following may or may not be true:

  • SarahC wants to take action Y.

Pretty much anything you say that's not relevant to one of statements 1 or 2 (including statements that certain people haven't been "responsible" enough in supporting their claims) is completely irrelevant to the question of whether you want to take action Y. You already have (or ought to be able to construct) probability estimates for each of 1 and 2.

Comment author: NancyLebovitz 15 August 2010 11:17:33PM 5 points [-]

Prediction is hard, especially about the future.

One thing that intrigues me is snags. Did anyone predict how hard it would be to improve batteries, especially batteries big enough for cars?

Comment author: xamdam 15 August 2010 04:36:33PM *  4 points [-]

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.

I think there are ways to make these predictions. On the most layman level I would point out that IBM built a robot that beats people at Jeopardy. Yes, I am aware that this is a complete machine-learning hack (this is what I could gather from the NYT coverage) and is not true cognition, but it surprised even me (I do know something about ML). I think this is useful to defeat the intuition of "machines cannot do that". If you are truly interested, I think you can (I know you're capable) read Norvig's AI book, and then follow up on the parts of it that most resemble human cognition; I think serious progress is being made in those areas. BTW, Norvig does take FAI issues seriously, including a reference to an EY paper in the book.

Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.

I think they should, I have no idea if this is being done; but if I would do it I would not do it publicly, as it may have very counterproductive consequences. So until you or I become SIAI fellows we will not know, and I cannot hold such lack of knowledge against them.

Comment author: [deleted] 15 August 2010 07:07:04PM 1 point [-]

First, I'm not really claiming "machines cannot do that." I can see advances in machine learning and I can imagine the next round of advances being pretty exciting. But I'm thinking in terms of maybe someday a machine being able to distinguish foreground from background, or understand a sentence in English, not being a superintelligence that controls Earth's destiny. The scales are completely different. One scale is reasonable; one strains credibility, I'm afraid.

Thanks for the book recommendation; I'll be sure to check it out.

Comment author: Apprentice 16 August 2010 11:01:58PM 16 points [-]

I think controlling Earth's destiny is only modestly harder than understanding a sentence in English - in the same sense that I think Einstein was only modestly smarter than George W. Bush. EY makes a similar point.

You sound to me like someone saying, sixty years ago: "Maybe some day a computer will be able to play a legal game of chess - but simultaneously defeating multiple grandmasters, that strains credibility, I'm afraid." But it only took a few decades to get from point A to point B. I doubt that going from "understanding English" to "controlling the Earth" will take that long.

Comment author: Eliezer_Yudkowsky 18 August 2010 02:43:56PM 4 points [-]

I think controlling Earth's destiny is only modestly harder than understanding a sentence in English.

Well said. I shall have to try to remember that tagline.

Comment author: cousin_it 21 September 2010 11:28:34PM *  4 points [-]

There's a problem with it, though. Some decades ago you'd have just as eagerly subscribed to this statement: "Controlling Earth's destiny is only modestly harder than playing a good game of chess", which we now know to be almost certainly false.

Comment author: SilasBarta 22 September 2010 07:17:28PM 2 points [-]

I agree with Rain. Understanding implies a much deeper model than playing. To make the comparison to chess, you would have to change it to something like, "Controlling Earth's destiny is only modestly harder than making something that can learn chess, or any other board game, without that game's mechanics (or any mapping from the computer's output to game moves) being hard-coded, and then play it at an expert level."

Not obviously false, I think.

Comment author: Rain 22 September 2010 06:49:14PM *  2 points [-]

It's the word "understanding" in the quote which makes it presume general intelligence and/or consciousness without directly stating it. The word "playing" does not have such a connotation, at least to me. I don't know if I would think differently back when chess required intelligence.

Comment author: Will_Newsome 21 September 2010 11:08:20PM 2 points [-]

Hey, remember this tagline: "I think controlling Earth's destiny is only modestly harder than understanding a sentence in English."

Comment author: Will_Newsome 17 July 2011 11:10:42PM 3 points [-]

(Again:) Hey, remember this tagline: "I think controlling Earth's destiny is only modestly harder than understanding a sentence in English."

Comment author: multifoliaterose 15 August 2010 04:16:54PM 4 points [-]

I think multifoliaterose is right that there's a PR problem, but it's not just a PR problem. It seems, unfortunately, to be a problem with having enough justification for claims, and a problem with connecting to the world of professional science. I think the PR problems arise from being too disconnected from the demands placed on other scientific or science policy organizations. People who study other risks, say epidemic disease, have to get peer-reviewed, they have to get government funding -- their ideas need to pass a round of rigorous criticism. Their PR is better by necessity.

I agree completely. The reason why I framed my top level post in the way that I did was so that it would be relevant to readers of a variety of levels of confidence in SIAI's claims.

As I indicate here, I personally wouldn't be interested in funding SIAI as presently constituted even if there was no PR problem.

Comment author: orthonormal 15 August 2010 03:46:57PM 10 points [-]

First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies.

As was mentioned in other threads, SIAI's main arguments rely on disjunctions and antipredictions more than conjunctions and predictions. That is, if several technology scenarios lead to the same broad outcome, that's a much stronger claim than one very detailed scenario.

For instance, the claim that AI presents a special category of existential risk is supported by such a disjunction. There are several technologies today which we know would be very dangerous with the right clever 'recipe' – we can make simple molecular nanotech machines, we can engineer custom viruses, we can hack into some very sensitive or essential computer systems, etc. What these all imply is that a much smarter agent with a lot of computing power is a severe existential threat if it chooses to be.
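
The probabilistic point behind this disjunction-versus-conjunction distinction can be illustrated with toy numbers. These are invented here purely for illustration and are not SIAI's or the commenter's estimates.

    # Toy numbers only: three independent routes to the same broad outcome,
    # versus one scenario that needs three independent steps to all come true.
    p = [0.2, 0.2, 0.2]

    p_any_route = 1 - (1 - p[0]) * (1 - p[1]) * (1 - p[2])  # disjunction: any route suffices
    p_all_steps = p[0] * p[1] * p[2]                        # conjunction: every step required

    print(f"at least one route works: {p_any_route:.3f}")   # 0.488
    print(f"all steps come true:      {p_all_steps:.3f}")   # 0.008

With the same per-item probabilities, the disjunctive claim comes out far more probable than the conjunctive one, which is the sense in which antipredictions are "stronger" than detailed scenarios.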

Comment author: JRMayne 15 August 2010 04:25:58PM 6 points [-]

Solid, bold post.

Eliezer's comments on his personal importance to humanity remind me of the Total Perspective Device from Hitchhiker's. Everyone who gets perspective from the TPD goes mad; Zaphod Beeblebrox goes in and finds out he's the most important person in human history.

Eliezer's saying he's Zaphod Beeblebrox. Maybe he is, but I'm betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don't believe that they and only they can accomplish the task.

I also don't care if the statements are social naivete or not; I think the statements that indicate that he is the most important person in human history - and that seems to me to be what he's saying - are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.

And that's a good thing. Throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting out the word about existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it's not going to solve it. I won't give to SIAI even if Eliezer stops saying these things, because it appears he'll still be thinking those things.

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I could be wrong.

--JRM

Comment author: nhamann 15 August 2010 05:33:46PM 6 points [-]

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I'd like to point out that it's not either/or: it's possible (likely?) that it will take decades of hard work and incremental progress by lots of really smart people to advance AI science to a point where an AI could FOOM.

Comment author: CarlShulman 15 August 2010 06:05:22PM 3 points [-]

I would say likely, conditional on eventual FOOM. The alternative means both a concentration of probability mass in the next ten years and that the relevant theory and tools are almost wholly complete.

Comment author: Eliezer_Yudkowsky 18 August 2010 03:03:07PM 0 points [-]

And saddened once again at how people seem unable to distinguish between "multi claims that something Eliezer said could be construed as claim X" and "Eliezer claimed X!"

Please note that for the next time you're worried about damaging an important cause's PR, multi.

Comment author: multifoliaterose 18 August 2010 04:08:02PM 9 points [-]

My understanding of JRMayne's remark is that he himself construes your statements in the way that I mentioned in my post.

If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.

Note that you have not made a disclaimer which rules out the possibility that you claim that you're the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.

Comment author: XiXiDu 18 August 2010 03:23:21PM *  8 points [-]

I have to disagree based on the following evidence:

Q: The only two legitimate occupations for an intelligent person in our current world? (Answer)

and

"At present I do not know of any other person who could do that." (Reference)

This makes it reasonable to state that you think you might be the most important person in the world.

Comment author: Eliezer_Yudkowsky 18 August 2010 03:26:54PM 0 points [-]

I love that "makes it reasonable" part. Especially in a discussion on what you shouldn't say in public.

Now we're to avoid stating any premises from which any absurd conclusions seem reasonable to infer?

This would be a reductio of the original post if the average audience member consistently applied this sort of reasoning; but of course it is motivated on XiXiDu's part, not necessarily something the average audience member would do.

Note that saying "But you must therefore argue X..." where the said person has not actually uttered X, but it would be a soldier against them if they did say X, is a sign of political argument gone wrong.

Comment author: JRMayne 18 August 2010 04:59:12PM 8 points [-]

Gosh, I find this all quite cryptic.

Suppose I, as Lord Chief Prosecutor of the Heathens say:

  1. All heathens should be jailed.

  2. Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.

One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that's an unpopular position. This is a mark of a political argument gone wrong?

I'm trying to find another logical conclusion to XiXiDu's cited statements (or a raft of others in the same vein.) Is there one I don't see? Is it just that you're probably the most important entity in history, but, you know, maybe not? Is it that there's only a 5% chance that you're the most important person in human history?

I have not argued that you should not say these things, BTW. I have argued that you probably should not think them, because they are very unlikely to be true.

Comment author: JRMayne 18 August 2010 04:52:19PM 11 points [-]

Um, I wasn't basing my conclusion on multifoliaterose's statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.

Which of these statements do you reject?:

  1. FAI is the most important project on earth, right now, and probably ever.

  2. FAI may be the difference between doom and survival for a multiverse of [very large number] of sentient beings. No project in human history is of greater importance.

  3. You are the most likely person - and SIAI the most likely agency, because of you - to accomplish saving the multiverse.

Number 4 is unnecessary for your being the most important person on earth, but:

  4. People who disagree with you are either stupid or ignorant. If only they had read the sequences, then they would agree with you. Unless they were stupid.

And then you've blamed multi for this. He is trying to help an important cause; both multifoliaterose and XiXiDu are, in my opinion, acting in a manner they believe will help the existential risk cause.

And your final statement, that multifoliaterose is damaging an important cause's PR appears entirely deaf to multi's post. He's trying to help the cause - he and XiXiDu are orders of magnitude more sympathetic to the cause of non-war existential risk than just about anyone. You appear to have conflated "Eliezer Yudkowsky," with "AI existential risk."

Again.

I might be wrong about my interpretation - but I don't think I am. If I am wrong, other very smart people who want to view you favorably have done similar things. Maybe the flaw isn't in the collective ignorance and stupidity in other people. Just a thought.

--JRM

Comment author: JGWeissman 18 August 2010 06:39:40PM 7 points [-]

Which of those statements do you reject?

Comment author: MaoShan 15 August 2010 08:26:59PM 3 points [-]

Aside from the body of the article, which is just "common" sense, given the author's opinion against the current policies of SIAI, I found the final paragraph interesting because I also exhibit "an unusually high abundance of the traits associated with Aspergers Syndrome." Perhaps possessing that group of traits gives one a predilection to seriously consider existential risk reduction by being socially detached enough to see the bigger picture. Perhaps LW is somewhat homogeneously populated with this "certain kind" of people. So, how do we gain credibility with normal people?

Comment author: timtyler 15 August 2010 07:35:45AM *  3 points [-]

The one "uncredible" claim mentioned - about Eliezer being "hit by a meteorite" - sounds as though it is the kind of thing he might plausibly think. Not too much of a big deal, IMO.

As with many charities, it is easy to think the SIAI might be having a negative effect - simply because it occupies the niche of another organisation that could be doing a better job - but what to do? Things could be worse as well - probably much worse.

Comment author: multifoliaterose 15 August 2010 08:27:35AM 4 points [-]

The point of my post is not that there's a problem of SIAI staff making claims that you find uncredible, the point of my post is that there's a problem of SIAI making claims that people who are not already sold on taking existential risk seriously find uncredible.

Comment author: Wei_Dai 15 August 2010 09:23:35AM 6 points [-]

Can you give a few more examples of claims made by SIAI staff that people find uncredible? Because it's probably not entirely clear to them (or to others interested in existential risk advocacy) what kind of things a typical smart person would find uncredible.

Looking at your previous comments, I see that another example you gave was that AGI will be developed within the next century. Any other examples?

Comment author: whpearson 15 August 2010 12:34:18PM *  8 points [-]

Things that stretch my credibility.

  • AI will be developed by a small team (at this time) in secret
  • That formal theory involving infinite/near-infinite computing power has anything to do with AI and computing in the real world. It might be vaguely useful for looking at computing in the limit (e.g. galaxy-sized computers), but otherwise it is credibility-stretching.

Comment author: Wei_Dai 16 August 2010 11:52:18PM *  6 points [-]

AI will be developed by a small team (at this time) in secret

I find this very unlikely as well, but Anna Salamon once put it as something like "9 Fields-Medalist types plus (an eventual) methodological revolution", which made me raise my probability estimate from "negligible" to "very small", which, given the potential payoffs, I think is enough for someone to be exploring the possibility seriously.

I have a suspicion that Eliezer isn't privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.

That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.

Turing's theories involving infinite computing power contributed to building actual computers, right? I don't see why such theories wouldn't be useful stepping stones for building AIs as well. There's a lot of work on making AIXI practical, for example (which could be disastrous if it succeeded, since AIXI wasn't designed to be Friendly).

If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.

Comment author: JanetK 15 August 2010 10:48:01AM 8 points [-]

Is accepting multi-universes important to the SIAI argument? There are a very, very large number of smart people who know very little about physics. They give lip service to quantum theory and relativity because of authority - but they do not understand them. Mentioning multi-universes just slams a door in their minds. If it is important then you will have to continue referring to it, but if it is not, it would be better not to sound like you have science-fiction-type ideas.

Comment author: wedrifid 15 August 2010 11:04:58AM 4 points [-]

Is accepting multi-universes important to the SIAI argument?

Definitely not, for the purposes of public relations at least. It may make some difference when actually doing AI work.

If it is important then you will have to continue referring to it but if it is not then it would be better not to sound like you have science fiction type ideas.

Good point. Cryonics probably comes with a worse Sci. Fi. vibe but is unfortunately less avoidable.

Comment author: multifoliaterose 15 August 2010 11:13:21AM 5 points [-]

Cryonics probably comes with a worse Sci. Fi. vibe

This is a large part of what I implicitly had in mind making my cryonics post (which I guess really rubbed you the wrong way). You might be interested in taking a look at the updated version if you haven't already done so - I hope it's more clear than it was before.

Comment author: multifoliaterose 15 August 2010 10:27:00AM 4 points [-]

Good question. I'll get back to you on this when I get a chance, I should do a little bit of research on the topic first. The two examples that you've seen are the main ones that I have in mind that have been stated in public, but there may be others that I'm forgetting.

There are some other examples that I have in mind from my private correspondence with Michael Vassar. He's made some claims which I personally do not find at all credible. (I don't want to repeat these without his explicit permission.) I'm sold on the cause of existential risk reduction, so the issue in my top level post does not apply here. But in the course of the correspondence I got the impression that he may say similar things in private to other people who are not sold on the cause of existential risk.

Comment deleted 15 August 2010 02:35:51PM *  [-]
Comment author: Jonathan_Graehl 16 August 2010 10:29:49PM *  4 points [-]

The trauma caused by imagining torture blackmail is hard for most people (including me) to relate to, because it's so easy not to take an idea like infinite torture blackmail seriously, on the grounds that the likelihood of ever actually encountering such a scenario seems vanishingly small.

I guess those who are disturbed by the idea have excellent imaginations, or more likely, emotional systems that can be fooled into trying to evaluate the idea of infinite torture ("hell").

Therefore, I agree that it's possible to make fun of people on this basis. I myself lean more toward accommodation. Sure, I think those hurt by it should have just avoided the discussion, but perhaps having EY speak for them and officially ban something gave them some catharsis. I feel like I'm beginning to make fun now, so I'll stop.

Comment author: ciphergoth 16 August 2010 06:30:03PM 5 points [-]

The form of blanking out you use isn't secure. Better to use pure black rectangles.

Comment author: RobinZ 16 August 2010 06:48:13PM *  4 points [-]
Comment author: SilasBarta 16 August 2010 06:55:01PM *  20 points [-]

Amusing anecdote: There was a story about this issue on Slashdot one time, where someone possessing kiddy porn had obscured the faces by doing a swirl distortion, but investigators were able to sufficiently reverse this by doing an opposite swirl and so were able to identify the victims.

Then someone posted a comment to say that if you ever want to avoid this problem, you need to do something like a Gaussian blur, which deletes the information contained in that portion of the image.

Somebody replied to that comment and said, "Yeah. Or, you know, you could just not molest children."

Comment author: wedrifid 16 August 2010 07:20:35PM 3 points [-]

Somebody replied to that comment and said, "Yeah. Or, you know, you could just not molest children."

Brilliant.

Comment author: wedrifid 16 August 2010 07:22:37PM 2 points [-]

Nice link. (It's always good to read articles where 'NLP' doesn't refer, approximately, to Jedi mind tricks.)

Comment author: timtyler 16 August 2010 06:38:41PM 3 points [-]

That document was knocking around on a public website for several days.

Using very much security would probably be pretty pointless.

Comment author: timtyler 15 August 2010 02:44:35PM *  5 points [-]

Perhaps that was a marketing effort.

After all, everyone likes to tell the tale of the forbidden topic and the apprentice being insulted. You are spreading the story around now - increasing the mystery and intrigue of these mythical events about which (almost!) all records have been deleted. The material was left in public for a long time - creating plenty of opportunities for it to "accidentally" leak out.

By allowing partly obfuscated forbidden materials to emerge, you may be contributing to the community folklore, spreading and perpetuating the intrigue.

Comment author: jimrandomh 15 August 2010 02:55:20PM 4 points [-]

Please stop doing this. You are adding spaced repetition to something that I, and others, positively do not want to think about. That is a real harm and you do not appear to have taken it seriously.

Comment author: XiXiDu 15 August 2010 03:03:14PM 4 points [-]

I'm sorry, but people like Wei force me to do this, as they make this whole movement look completely down-to-earth, when in fact most people, if they knew about the full complexity of beliefs within this community, would laugh out loud.

Comment author: wedrifid 16 August 2010 03:56:47AM *  11 points [-]

You have a good point. It would be completely unreasonable to ban topics in such a manner while simultaneously expecting to maintain an image of being down to earth or particularly credible to intelligent external observers. It also doesn't reflect well on the SIAI if their authorities claim they cannot consider relevant risks due to psychological or psychiatric difficulties. That is incredibly bad PR. It is exactly the kind of problem this post discusses.

Comment author: HughRistik 16 August 2010 11:57:41PM 4 points [-]

That is incredibly bad PR.

Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR. Any organization soliciting donations should keep this principle in mind.

Comment author: rhollerith_dot_com 17 August 2010 03:52:51PM *  7 points [-]

Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR.

So let me see if I understand: if an organization uses its income to make a major scientific breakthrough or to prevent a million people from starving, but does not pay enough attention to avoiding bad PR with the result that the organization ends (but the productive employees take the skills they have accumulated there to other organizations), that is a bad organization, but if an organization in the manner of most non-profits focuses on staying in existence as long as possible to provide a secure personal income for its leaders, which entails paying close attention to PR, that is a good organization?

Well, let us take a concrete example: Doug Engelbart's lab at SRI International. Doug wasted too much time mentoring the young researchers in his lab with the result that he did not pay enough attention to PR and his lab was forced to close. Most of the young researchers got jobs at Xerox PARC and continued to develop Engelbart's vision of networked personal computers with graphical user interfaces, work that directly and incontrovertibly inspired the Macintosh computer. But let's not focus on that. Let's focus on the fact that Engelbart is a failure because he no longer runs an organization because the organization failed because Engelbart did not pay enough attention to PR and to the other factors needed to ensure the perpetuation of the organization.

Comment author: timtyler 16 August 2010 04:58:08PM *  3 points [-]

I still have a hard time believing it actually happened. I have heard that there's no such thing as bad publicity - but surely nobody would pull this kind of stunt deliberately. It just seems to be such an obviously bad thing to do.

Comment author: katydee 16 August 2010 01:02:34AM 5 points [-]

The "laugh test" is not rational. I think that, if the majority of people fully understood the context of such statements, they would not consider them funny.

Comment author: wedrifid 16 August 2010 03:45:28AM 8 points [-]

The context asked 'what kind of things a typical smart person would find uncredible'. This is a perfect example of such a thing.

Comment author: JoshuaZ 16 August 2010 01:07:42AM 1 point [-]

You don't seem to realize that claims like the ones in the post in question are the sort of claim that commonly causes people vulnerable to neuroses to develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm. Please stop.

Comment author: Vladimir_M 16 August 2010 04:27:11AM *  20 points [-]

JoshuaZ:

You don't seem to realize that claims like the ones in the post in question are the sort of claim that commonly causes people vulnerable to neuroses to develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm.

However, it seems that in general, the mere fact that certain statements may cause psychological harm to some people is not considered a sufficient ground for banning or even just discouraging such statements here. For example, I am sure that many religious people would find certain views often expressed here shocking and deeply disturbing, and I have no doubt that many of them could be driven into serious psychological crises by exposure to such arguments, especially if they're stated so clearly and poignantly that they're difficult to brush off or rationalize away. Or, to take another example, it's very hard to scare me with hypotheticals, but the post "The Strangest Thing An AI Could Tell You" and the subsequent thread came pretty close; I'm sure that at least a few readers of this blog didn't sleep well if they happened to read that right before bedtime.

So, what exact sorts of potential psychological harm constitute sufficient grounds for proclaiming a topic undesirable? Is there some official policy about this that I've failed to acquaint myself with?

Comment author: JoshuaZ 16 August 2010 03:10:15PM 6 points [-]

That's a very valid set of points and I don't have a satisfactory response.

Comment author: multifoliaterose 15 August 2010 08:06:45AM *  6 points [-]

I suggested what to do about this problem in my post: withhold funding from SIAI, and make it clear to them why you're withholding funding from them, and promise to fund them if the issue is satisfactorily resolved to incentivize them to improve.

Comment author: CarlShulman 15 August 2010 10:25:10AM 2 points [-]

Will you do this?

Comment author: multifoliaterose 15 August 2010 10:35:36AM 7 points [-]

I'm definitely interested in funding an existential risk organization. SIAI would have to be a lot more transparent than it is right now for me to be interested in funding it. For me personally, it wouldn't be enough for SIAI to just take measures to avoid poisoning the meme; I would need to see a lot more evidence that SIAI is systematically working to maximize its impact on existential risk reduction.

As things stand, I prefer to hold out for a better organization. But if SIAI exhibited transparency and accountability at levels similar to those of GiveWell (welcoming and publicly responding to criticism regularly, regularly posting detailed plans of action, seeking out feedback from subject-matter specialists and making this public when possible, etc.), I would definitely fund SIAI and advocate that others do so as well.

Comment author: Wei_Dai 15 August 2010 10:48:30AM 11 points [-]

"transparency"? I thought the point of your post was that SIAI members should refrain from making some of their beliefs easily available to the public?

Comment author: multifoliaterose 15 August 2010 11:00:18AM *  7 points [-]

I see, maybe I should have been more clear. The point of my post is that SIAI members should not express controversial views without substantiating them with abundant evidence. If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.

As things stand SIAI has not provided such evidence. Eliezer himself may have such evidence, but if so he's either unwilling or unable to share it.

Comment author: CarlShulman 15 August 2010 12:41:20PM 8 points [-]

higher expected value to humanity than what virtually everybody else is doing,

For what definitions of "value to humanity" and "virtually everybody else"?

If "value to humanity" is assessed as in Bostrom's Astronomical Waste paper, that hugely favors effects on existential risk vs alleviating current suffering or increasing present welfare (as such, those also have existential risk effects). Most people don't agree with that view, so asserting that as a privileged frame can be seen as a hostile move (attacking the value systems of others in favor of a value system according to which one's area of focus is especially important). Think of the anger directed at vegetarians, or those who guilt-trip others about not saving African lives. And of course, it's easier to do well on a metric that others are mostly not focused on optimizing.

Dispute about what best reduces existential risk, and annoyance at overly confident statements there, are further issues, but I think that asserting uncommon moral principles (which happen to rank one's activities as much more valuable than most people would rank them) is a big factor on its own.

Comment author: Wei_Dai 17 August 2010 12:57:54AM *  9 points [-]

There are a lot of second and higher order effects in PR. You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important. If Eliezer had shied away from stating some of the more "uncredible" ideas because there wasn't enough evidence to convince a typical smart person, it would surely prompt questions of "what do you really think about this?" or fail to attract people who are currently interested in SIAI because of those ideas.

If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.

Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"

I think you make a good point that it's important to think about PR, but I'm not at all convinced that the specific pieces of advice you give are the right ones.

Comment author: multifoliaterose 17 August 2010 05:27:28AM 5 points [-]

Thanks for your feedback. Several remarks:

You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important.

This is of course true. I myself am fairly certain that SIAI's public statements are driving away the people who it's most important to interest in existential risk.

Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"

•It's standard public relations practice to reveal certain information only if asked.

•An organization that has the strongest case for room for more funding need not be an organization that's doing something of higher expected value to humanity than what everybody else is doing. In particular, I simultaneously believe that there are politicians who have higher expected value to humanity than all existential risk researchers alive and that the cause of existential risk has the greatest room for more funding.

•One need not be confident in one's belief that funding one's organization has the highest expected value to humanity in order to believe that funding one's organization has the highest expected value to humanity. A major issue that I have with Eliezer's rhetoric is that he projects what I perceive to be an unreasonably high degree of confidence in his beliefs.

•Another major issue with Eliezer's rhetoric that I have is that even putting issues of PR aside, I personally believe that funding SIAI does not have anywhere near the highest expected value to humanity out of all possible uses of money. So from my point of view, I see no upside to Eliezer making extreme claims of the sort that he has - it looks to me as though Eliezer is making false claims and damaging public relations for existential risk as a result.

I will be detailing my reasons for thinking that SIAI's research does not have high expected value in a future post.

Comment author: Vladimir_Nesov 17 August 2010 03:09:03PM 3 points [-]

One need not be confident in one's belief [...]

The level of certainty is not up for grabs. You are as confident as you happen to be; this can't be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.

Comment author: Emile 17 August 2010 03:26:37PM 2 points [-]

But it isn't perceived that way by the general public - it seems to me that the usual perception of "confidence" has more to do with status than with probability estimates.

The non-technical people I work with often say that I use "maybe" and "probably" too much (I'm a programmer - "it'll probably work" is a good description of how often it does work in practice) - as if having confidence in one's statements was a sign of moral fibre, and not a sign of miscalibration.

Actually, making statements with high confidence is a positive trait, but most people address this by increasing the confidence they express, not by increasing their knowledge until they can honestly make high-confidence statements. And our culture doesn't correct for that, because errors of calibration are not immediately obvious (as they would be if, say, we had a widespread habit of betting on various things).
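
As a sketch of how a betting-like habit would expose miscalibration, here is a minimal Python example scoring an invented track record of confidence statements against outcomes. The data is made up for illustration; the scoring rule is the standard Brier score.

    # Minimal calibration sketch with invented data: each entry is
    # (stated confidence that the claim is true, whether it turned out true).
    track_record = [
        (0.9, True), (0.9, True), (0.9, False), (0.9, False),  # "90%" claims, only 50% right
        (0.6, True), (0.6, False), (0.6, True),
    ]

    brier = sum((p - (1.0 if outcome else 0.0)) ** 2
                for p, outcome in track_record) / len(track_record)
    print(f"Brier score: {brier:.3f}  (lower is better; 0 is perfect)")

    # Grouping by stated confidence makes overconfidence visible directly:
    for level in sorted({p for p, _ in track_record}):
        hits = [outcome for p, outcome in track_record if p == level]
        print(f"said {level:.0%}, was right {sum(hits) / len(hits):.0%} of the time")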

Comment author: timtyler 15 August 2010 11:21:13AM 5 points [-]

If there really was "abundant evidence" there probably wouldn't be much of a controversy.

Comment author: rhollerith_dot_com 15 August 2010 12:16:41PM *  5 points [-]

Eliezer himself may have such evidence [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], but if so he's either unwilling or unable to share it.

Now that is unfair.

Since 1997, Eliezer has published (mostly on mailing lists and blogs but also in monographs) an enormous amount (at least ten novels' worth, unless I am very mistaken) of writings supporting exactly that point. Of course most of this material is technical, but unlike the vast majority of technical prose, it is accessible to non-specialists and non-initiates with enough intelligence, a solid undergraduate education as a "scientific generalist" and a lot of free time on their hands, because in his writings Eliezer is constantly "watching out for" the reader who does not yet know what he knows. (In other words, it is uncommonly good technical exposition.)

Comment author: multifoliaterose 15 August 2010 04:29:02PM *  10 points [-]

So my impression has been that the situation is that

(i) Eliezer's writings contain a great deal of insightful material.

(ii) These writings do not substantiate the idea that [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing].

I say this having read perhaps around a thousand pages of what Eliezer has written. I consider the amount of reading that I've done to be a good "probabilistic proof" that the points (i) and (ii) apply to the portion of his writings that I haven't read.

That being said, if there are any particular documents that you would point me to which you feel do provide a satisfactory evidence for the idea [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], I would be happy to examine them.

I'm unwilling to read the whole of his opus given how much of it I've already read without being convinced. I feel that the time that I put into reducing existential risk can be used to better effect in other ways.

Comment author: JamesAndrix 15 August 2010 11:48:23PM *  4 points [-]

It would help to know what steps in the probabilistic proof don't have high probability for you.

For example, you might think that the singularity has a good probability of being relatively smooth and some kind of friendly, even without FAI. or you might think that other existential risks may still be a bigger threat, or you may think that Eliezer isn't putting a dent in the FAI problem.

Or some combination of these and others.

Comment author: Perplexed 16 August 2010 04:21:55AM *  7 points [-]

This might be a convenient place to collect a variety of reasons why people are FOOM denialists. From my POV:

  1. I am skeptical that safeguards against UFAI (unFAI) will not work. In part because:
  2. I doubt that the "takeoff" will be "hard". Because:
  3. I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.
  4. And hence an effective safeguard would be to simply not give the machine its own credit card!
  5. And in any case, the Moore's law curve for electronics does not arise from delays in thinking up clever ideas; it arises from delays in building machines to incredibly high tolerances.
  6. Furthermore, even after the machine has more hardware, it doesn't yet have higher intelligence until it reads lots more encyclopedias and proves for itself many more theorems. These things take time.
  7. And finally, I have yet to see the argument that an FAI protects us from a future UFAI. That is, how does the SIAI help us?
  8. Oh, and I do think that the other existential risks, particularly war and economic collapse, put the UFAI risk pretty far down the priority list. Sure, those other risks may not be quite so existential, but if they don't kill us, they will at least prevent an early singularity.

Edit added two days later: Since writing this, I thought about it some more, shut up for a moment, and did the math. I still think that it is unlikely that the first takeoff will be a hard one; so hard that it gets out of control. But I now estimate something like a 10% chance that the first takeoff will be hard, and I estimate something like a 30% chance that at least one of the first couple dozen takeoffs will be hard. Multiply that by an estimated 10% chance that a hard takeoff will take place without adequate safeguards in place, and another 10% chance that a safeguardless hard takeoff will go rogue, and you get something like a 0.3% chance of a disaster of Forbin Project magnitude. Completely unacceptable.

Originally, I had discounted the chance that a simple software change could cause the takeoff; I assumed you would need to double and redouble the hardware capability. What I failed to notice was that a simple "tuning" change to the (soft) network connectivity parameters - changing the maximum number of inputs per "neuron" from 8 to 7, say, could have an (unexpected) effect on performance of several orders of magnitude simply by suppressing wasteful thrashing or some such thing.
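
For readers who want to check the 0.3% figure in the edit above, the arithmetic is spelled out below. The inputs are the commenter's own rough estimates from the parent comment, not established numbers.

    # Reproducing the arithmetic in the parent comment; the inputs are the
    # commenter's own rough estimates, not established figures.
    p_some_early_takeoff_is_hard = 0.30   # at least one of the first couple dozen takeoffs is hard
    p_no_adequate_safeguards = 0.10       # a hard takeoff happens without adequate safeguards
    p_unsafeguarded_hard_takeoff_goes_rogue = 0.10

    p_disaster = (p_some_early_takeoff_is_hard
                  * p_no_adequate_safeguards
                  * p_unsafeguarded_hard_takeoff_goes_rogue)
    print(f"{p_disaster:.1%}")  # 0.3%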

Comment author: multifoliaterose 16 August 2010 03:25:14AM 6 points [-]

Yes, I agree with you. I plan on making my detailed thoughts on these points explicit. I expect to be able to do so within a month.

But for a short answer, I would say that the situation is mostly that I think that:

Eliezer isn't putting a dent in the FAI problem.

Comment author: XiXiDu 15 August 2010 12:31:35PM 4 points [-]

Can you be more specific than "it's somewhere beneath an enormous amount of material from 13 years of writing by the very same person whose arguments are being scrutinized for evidence"?

This is not sufficient grounds to scare people to the point of having nightmares and to ask them for most of their money.

Comment author: whpearson 15 August 2010 01:05:49PM *  3 points [-]

I'm planning to fund FHI rather than SIAI when I have a stable income (although my preference is for a different organisation that doesn't exist).

My position is roughly this.

  • The nature of intelligence (and its capability for FOOMing) is poorly understood

  • The correct actions to take depend upon the nature of intelligence.

As such I would prefer to fund an institute that questioned the nature of intelligence, rather than one that has made up its mind that a singularity is the way forward. And it is not just the name that makes me think that SIAI has settled upon this view.

And because the nature of intelligence is the largest wild card in the future of humanity, I would prefer FHI to concentrate on that. Rather than longevity etc.

Comment author: timtyler 15 August 2010 11:36:55AM *  2 points [-]

I suggested what to do about this problem in my post: withhold funding from SIAI.

Right - but that's only advice for those who are already donating. Others would presumably seek reform or replacement. The decision there seems non-trivial.

Comment author: Thomas 15 August 2010 09:49:00AM 1 point [-]

Just take the best of anybody and discard the rest. Yudkowsky has some very good points (about 80% of his writings, in my view) - take them and say thank you.

When he or the SIAI misses the point, to put it mildly, you know better anyway, don't you?

Comment author: multifoliaterose 15 August 2010 10:36:49AM 7 points [-]

I agree that Yudkowsky has some very good points.

My purpose in making the top level post is as stated: to work against poisoning the meme.