I am a relative newbie commenter here, and my interest in this site has so far been limited to using it as a fun forum where it's possible to discuss all kinds of sundry topics with exceptionally smart people. However, I have read a large part of the background sequences, and I'm familiar with the main issues of concern here, so even though it might sound impertinent coming from someone without any status in this community, I can't resist commenting on this article.
To put it bluntly, I think the main point of the article is, if anything, an understatement. Let me speak from personal experience. From the perspective of this community, I am the sort of person who should be exceptionally easy to interest and win over to its cause, considering both my intellectual background and my extreme openness to contrarian viewpoints and skepticism towards official academic respectability as a criterion of truth and intellectual soundness. Yet, to be honest, even though I find a lot of the writing and discussion here extremely interesting, and the writings of Yudkowsky (in addition to others such as Bostrom, Hanson, etc.) have convinced me that technology-related existential risks should ...
I agree completely. I still read LessWrong because I am a relatively long-time reader, and thus I know that most of the people here are sane. Otherwise, I would conclude that there is some cranky process going on here. Still, the Roko affair caused me to significantly lower the probability I assign to SIAI's success and forced me to seriously consider the hypothesis that Eliezer Yudkowsky has gone crazy.
By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality, as the blog's header proudly states; instead the posts often discuss a relatively narrow list of topics which are only tangentially related to rationality, e.g. cryonics, AI stuff, evolutionary psychology, Newcomb-like scenarios.
Agreed.
One good sign here is that LW, unlike most other non-mainstream organizations, doesn't really function like a cult. Once one person starts being critical, critics start coming out of the woodwork. I have my doubts about this place sometimes too, but it has a high density of knowledgeable and open-minded people, and I think it has a better chance than anyone of actually acknowledging and benefiting from criticism.
I've tended to overlook the weirder stuff around here, like the Roko feud -- it got filed under "That's confusing and doesn't make sense" rather than "That's an outrage." But maybe it would be more constructive to change that attitude.
I disagree with your assessment. Let's just look at LW for starters.
On Eileen Barker:
Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it's a good idea. And that disagreement has been well received by the "cult", judging by the karma scores involved.
Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
Theism has been discussed. It is wrong. But Robert Aumann's work is still considered very important; theists are hardly dismissed as "satanic," to use Barker's word.
Of Barker's criteria, 2-4 of 6 apply to the LessWrong community, and only one ("Leaders and movements who are unequivocally focused on achieving a certain goal") applies strongly.
On Shirley Harrison:
...I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified...
I'm definitely not trying to attack anyone (and you're right, my comment could be read that way). But I'm also not just curious. I figured this was the answer. Lots of time spent thinking, writing and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it, and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn't expect any one person to solve a Hard problem. Insight isn't that predictable, especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory, but a) he never did and b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.
No one looks at open problems in other fields this way.
From Ten Years to a Positive Singularity:
And computer scientists haven’t understood the self – because it isn’t about computer science. It’s about the emergent dynamics that happen when you put a whole bunch of general and specialized pattern recognition agents together – a bunch of agents created in a way that they can really cooperate – and when you include in the mix agents oriented toward recognizing patterns in the society as a whole.
and
The goal systems of humans are pretty unpredictable, but a software mind like Novamente is different – the goal system is better-defined. So one reasonable approach is to make the first Novamente a kind of Oracle. Give it a goal system with one top-level goal: To answer peoples’ questions, in a way that’s designed to give them maximum understanding.
From The Singularity Institute's Scary Idea (And Why I Don't Buy It):
It's possible that with sufficient real-world intelligence tends to come a sense of connectedness with the universe that militates against squashing other sentiences.
From Chance and Consciousness:
...At the core of this theory are two very simple ideas:
1) that consciousness is absolute freedom, pure spontaneity and lawl
That was... surprisingly surprising. Thank you.
For reasons like those you listed, and also out of some unverbalized frustration, in the last week I've been thinking pretty seriously about whether I should leave LW and start hanging out somewhere else online. I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.
What other places on the Net are there for someone like me? Hacker News and Reddit look like dumbed-down versions of LW, so let's not talk about those. I solved a good bit of Project Euler once; the place is tremendously enjoyable but quite narrowly focused. The n-Category Cafe is, sadly, coming to a halt. Math Overflow looks wonderful, and this question by Scott Aaronson nearly convinced me to drop everything and move there permanently. The Polymath blog is another fascinating place that is so high above LW that I feel completely underqualified to join. Unfortunately,...
My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, it's possible you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.
(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)
What are the scenarios where someone unfamiliar with this website would hear about Roko's deleted post?
I suppose it could be written about dramatically (because it was dramatic!) but I don't think anyone is going to publish such an account. It was bad from the perspective of most LWers -- a heuristic against censorship is a good heuristic.
This whole thing is ultimately a meta discussion about moderation policy. Why should this discussion about banned topics be that much more interesting than a post on Hacker News that is marked as dead? Hacker News generally doesn't allow discussion of why stories were marked dead. The moderators are anonymous and have unquestioned authority.
If Less Wrong had a mark-as-dead function (on HN unregistered users don't see dead stories, but registered users can opt in to see them), I suspect Eliezer would have killed Roko's post instead of deleting it, avoiding the concerns about censorship, but no one has written that LW feature yet.
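For readers who haven't used HN, here is a minimal sketch of the visibility rule being described; the Post/User fields are made up for illustration, not HN's or LW's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    body: str
    dead: bool = False      # a moderator has marked the post dead (not deleted)

@dataclass
class User:
    showdead: bool = False  # registered users can opt in to seeing dead posts

def visible(post: Post, viewer: Optional[User]) -> bool:
    """A dead post stays in the database but is hidden from everyone
    except logged-in users who have opted in via 'showdead'."""
    if not post.dead:
        return True
    return viewer is not None and viewer.showdead
```

The point of such a scheme is that the content is suppressed from the default view without being destroyed, which softens the "censorship" reading of the action.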
As a solid example of what a non-disaster this was for PR, I doubt that anyone at the Singularity Summit who isn't a regular Less Wrong reader (the majority of attendees) has heard that Eliezer deleted a post. It's just not the kind of t...
informing SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible
I believe you are completely ignoring the status-demolishing effects of hypocrisy and insincerity.
When I first started watching Blogging Heads discussions featuring Eliezer I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts, here comes the abhorrent back-pedaling and waffling". Instead he met it head on with complete honesty and did so in a way I've never seen other people able to pull off - without sounding nuts at all. In fact, sounding very reasonable. I've since updated enough that I no longer wince and hold my breath; I smile and await the triumph.
If, as most people (and nearly all politicians) do, he had waffled and presented an argument that he doesn't honestly hold, but that is more publicly acceptable, I'd feel disappointed and a bit sickened, and I'd tune out the rest of what he has to say.
Hypocrisy is transparent. People (including neurotypical people) very easily...
When I first started watching Blogging Heads discussions featuring Eliezer I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts, here comes the abhorrent back-pedaling and waffling". Instead he met it head on with complete honesty
I am so glad that someone notices and appreciates this.
I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).
Agreed.
I'll state my own experience and perception, since it seems to be different from that of others, as evidenced in both the post and the comments. Take it for what it's worth; maybe it's rare enough to be disregarded.
The first time I heard about SIAI -- which was possibly the first time I had heard the word "singularity" in the technological sense -- was whenever I first looked at the "About" page on Overcoming Bias, sometime in late 2006 or early 2007, where it was listed as Eliezer Yudkowsky's employer. To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.
Now, when someone has made that kind of demonstration of rationality, I just don't have much...
I STRONGLY suspect that there is an enormous gulf between finding out things on your own and being directed to them by a peer.
When you find something on your own (existential risk, cryonics, whatever), you get to bask in your own fortuitousness, and congratulate yourself on being smart enough to understand its value. You get a boost in (perceived) status, because not only do you know more than you did before, you know things other people don't know.
But when someone else has to direct you to it, it's much less positive. When you tell someone about existential risk or cryonics or whatever, the subtext is "look, you weren't able to figure this out by yourself, let me help you". No matter how nicely you phrase it, there's going to be resistance because it comes with a drop in status - which they can avoid by not accepting whatever you're selling. It actually might be WORSE with smart people who believe that they have most things "figured out".
Thanks for your thoughtful comment.
To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.
I know some people who have had this sort of experience. My claim is not that Eliezer has uniformly repelled people from thinking about existential risk. My claim is that on average Eliezer's outlandish claims repel people from thinking about existential risk.
Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims?
My guess would be that this is it. I'm the same way.
(But why wouldn't they as well, if they're "smart"?)
It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality...
For my part, I keep wondering how long it's going to be before someone throws his "If you don't sign up your kids for cryonics then you are a lousy parent" remark at me, to which I will only be able to say that even he says stupid things sometimes.
(Yes, I'd encourage anyone to sign their kids up for cryonics; but not doing so is an extremely poor predictor of whether or not you treat your kids well in other ways, which is what the term should mean by any reasonable standard).
Also, keep in mind that reading the sequences requires nontrivial effort-- effort which even moderately skeptical people might be unwilling to expend. Hopefully Eliezer's upcoming rationality book will solve some of that problem, though. After all, even if it contains largely the same content, people are generally much more willing to read one book rather than hundreds of articles.
And I would indeed expect IQ to correlate positively with what you might call openness.
My own experience is that the correlation is not very high. Most of the people who I've met who are as smart as me (e.g. in the sense of having high IQ) are not nearly as open as I am.
I didn't realize at all that by "smart" you meant "instrumentally rational";
I did not intend to equate intelligence with instrumental rationality. The reason why I mentioned instrumental rationality is that ultimately what matters is to get people with high instrumental rationality (whether they're open minded or not) interested in existential risk.
My point is that people who are closed-minded should not be barred from consideration as potentially useful existential risk researchers, and that although people are being irrational to dismiss Eliezer as fast as they do, that doesn't mean that they're holistically irrational. My own experience has been that my openness has both benefits and drawbacks.
...The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an I
I don't mean to dismiss the points of this post, but all of those points do need to be reinterpreted in light of the fact that I'd rather have a few really good rationalists as allies than a lot of mediocre rationalists who think "oh, cool" and don't do anything about it. Consider me as being systematically concerned with the top 5% rather than the average case. However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.
For Congress to implement good policy in this area would be performance vastly exceeding what we've previously seen from them. They called prediction markets terror markets. I expect more of the same, and expect to have little effect on them.
The flipside though is if we can frame the issue in a way that there's no obvious Democrat or Republican position, then we can, as Robin Hanson puts it, "pull the rope sideways".
The very fact that much of the existential risk stuff is "strange sounding" relative to what most people are used to really thinking about in the context of political arguments might thus act as a positive.
It must be said that the reason no one from SingInst has commented here is that they're all busy running the Singularity Summit, a well-run conference full of AGI researchers, the one group that SingInst cares about impressing more than any other. Furthermore, Eliezer's speech was well received by those present.
I'm not sure whether attacking SingInst for poor public relations during the one week when everyone is busy with a massive public relations effort is very ironic or very Machiavellian.
I'm new to all this singularity stuff - and as an anecdotal data point, I'll say a lot of it does make my kook bells go off - but with an existential threat like uFAI, what does the awareness of the layperson count for? With global warming, even if most of any real solution involves the redesign of cities and development of more efficient energy sources, individuals can take some responsibility for their personal energy consumption or how they vote. uFAI is a problem to be solved by a clique of computer and cognitive scientists. Who needs to put thought into the possibility of misbuilding an AI other than people who will themselves engage in AI research? (This is not a rhetorical question - again, I'm new to this.)
There is, of course, the question of fundraising. ("This problem is too complicated for you to help with directly, but you can give us money..." sets off further alarm bells.) But from that perspective someone who thinks you're nuts is no worse than someone who hasn't heard of you. You can ramp up the variance of people's opinions and come out better financially.
Awareness on the part of government funding agencies (and the legislators and executive branch people with influence over them), technology companies and investors, and political and military decisionmakers (eventually) could all matter quite a lot. Not to mention bright young people deciding on their careers and research foci.
While we're in a thread with "Public Relations" in its title, I'd like to point out that calling an AGI a "god", even metaphorically or by (some) definition, is probably a very bad idea. Calling anything a god will (obviously) tend to evoke religious feelings (an acute mind-killer), not to mention that sort of writing isn't going to help much in combating the singularity-as-religion pattern completion.
I am one of those who haven't been convinced by the SIAI line. I have two main objections.
First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.
Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.
I think multifoliaterose is right that there's a PR problem, but it's not just a PR problem. It seems, unfortunately, to be a problem with having enough justification for claims, and a problem with connecting to the world of professional science. I think the PR problems arise from being too disconnected from the demands placed on other scientific or science policy organizations. People who study other risks, say epidemic disease, have to get peer-reviewed, they have to get government funding -- their ideas need to pass a round of rigorous criticism. Their PR is better by necessity.
First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies.
As was mentioned in other threads, SIAI's main arguments rely on disjunctions and antipredictions more than conjunctions and predictions. That is, if several technology scenarios lead to the same broad outcome, that's a much stronger claim than one very detailed scenario.
For instance, the claim that AI presents a special category of existential risk is supported by such a disjunction. There are several technologies today which we know would be very dangerous with the right clever 'recipe' - we can make simple molecular nanotech machines, we can engineer custom viruses, we can hack into some very sensitive or essential computer systems, etc. What these all imply is that a much smarter agent with a lot of computing power is a severe existential threat if it chooses to be.
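To make the disjunction/conjunction contrast concrete, here is a toy calculation; the 30% figures are purely illustrative and not anyone's actual estimates:

```python
# Three independent routes to the same broad outcome, each judged only 30% likely.
p = [0.30, 0.30, 0.30]

# A detailed scenario that needs all three steps to happen (a conjunction):
p_conjunction = 1.0
for x in p:
    p_conjunction *= x          # 0.027 -- every added detail shrinks the probability

# A broad claim that holds if *any* one route works out (a disjunction):
p_none = 1.0
for x in p:
    p_none *= (1 - x)
p_disjunction = 1 - p_none      # ~0.657 -- every added route strengthens the claim

print(p_conjunction, p_disjunction)
```

The asymmetry is the whole point of arguing by antiprediction: the broad claim survives even if most of the individual scenarios turn out to be wrong.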
In other words, I largely agree with Ben Goertzel's assertion that there is a fundamental difference between "narrow AI" and AI research that might eventually lead to machines capable of cognition, but I'm not sure I have good evidence for this argument.
Although one should be very, very careful not to confuse the opinions of someone like Goertzel with those of the people (currently) at SIAI, I think it's fair to say that most of them (including, in particular, Eliezer) hold a view similar to this. And this is the location -- pretty much the only important one -- of my disagreement with those folks. (Or, rather, I should say my differing impression from those folks -- to make an important distinction brought to my attention by one of the folks in question, Anna Salamon.) Most of Eliezer's claims about the importance of FAI research seem obviously true to me (to the point where I marvel at the fuss that is regularly made about them), but the one that I have not quite been able to swallow is the notion that AGI is only decades away, as opposed to a century or two. And the reason is essentially disagreement on the above point.
At first glance this may seem puzzling, since...
Consider me updated. Thank you for taking my brief and relatively unhelpful comments seriously, and for explaining your intended point. While I disagree that the swiftest route to AGI will involve lots of small modules, it's a complicated topic with many areas of high uncertainty; I suspect you are at least as informed about the topic as I am, and will be assigning your opinions more credence in the future.
The proper reason to request clarification is in order to not make the mistake again
I reject out of hand any proposed rule of propriety that stipulates people must pretend to be naive supplicants.
When people ask me for an explanation of a downvote I most certainly do not take it for granted that by so doing they are entering into my moral reality and willing to accept my interpretation of what is right and what is a 'mistake'. If I choose to explain reasons for a downvote I also don't expect them to henceforth conform to my will. They can choose to keep doing whatever annoying thing they were doing (there are plenty more downvotes where that one came from.)
There is more than one reason to ask for clarification for a downvote - even "I'm just kinda curious" is a valid reason. Sometimes votes just seem bizarre and not even Machiavellian reasoning helps explain the pattern. I don't feel obliged to answer any such request but I do so if convenient. I certainly never begrudge others the opportunity to ask if they do so politely.
Yes, social status is part of the reason for the karma system -- but it is not something you have an inherent right to. Otherwise there would be no point to it!
Not what Kompo was saying.
I'm having trouble understanding how exactly you think the AGI problem is different from any really hard math problem. Take P != NP, for instance, and the attempted proof that's been making the rounds on various blogs. If you've skimmed any of the discussion you can see that even this attempted proof piggybacks on "vast amounts of 'ordinary' intellectual labor," largely consisting of mapping out various complexity classes and their properties and relations. There's probably been at least 30 years of complexity theory research required to make that proof attempt even possible.
I think you might be able to argue that even if we had an excellent theoretical model of an AGI, that the engineering effort required to actually implement it might be substantial and require several decades of work (e.g. Von Neumann architecture isn't suitable for AGI implementation, so a great deal of computer engineering has to be done).
If this is your position, I think you might have a point, but I still don't see how the effort is going to take 1 or 2 centuries. A century is a loooong time. A century ago humans barely had powered flight.
By no means do I want to downplay the difficulty of P vs NP; all the same, I think we have different meanings of "vast" in mind.
I hate to go all existence proofy on you, but we have an existence proof of a general intelligence - accidentally sneezed out by natural selection, no less, which has severe trouble building freely rotating wheels - and no existence proof of a proof of P != NP. I don't know much about the field, but from what I've heard, I wouldn't be too surprised if proving P != NP is harder than building FAI for the unaided human mind. I wonder if Scott Aaronson would agree with me on that, even though neither of us understand the other's field? (I just wrote him an email and asked, actually; and this time remembered not to say my opinion before asking for his.)
I think controlling Earth's destiny is only modestly harder than understanding a sentence in English - in the same sense that I think Einstein was only modestly smarter than George W. Bush. EY makes a similar point.
You sound to me like someone saying, sixty years ago: "Maybe some day a computer will be able to play a legal game of chess - but simultaneously defeating multiple grandmasters, that strains credibility, I'm afraid." But it only took a few decades to get from point A to point B. I doubt that going from "understanding English" to "controlling the Earth" will take that long.
Have we seen any results (or even progress) come from the SIAI Challenge Grants, which included a Comprehensive Singularity FAQ and many academic papers dealing directly with the topics of concern? These should hopefully be less easy to ridicule and provide an authoritative foundation after the peer review process.
Edit: And if they fail to come to fruition, then we have some strong evidence to doubt SIAI's effectiveness.
whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.
Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction, and indeed they're focusing on persuading more people of this particular claim. As you say, by focusing on something specific, radical and absurd, they run more of a risk of being dismissed entirely than does FHI, but their strategy is still correct given the premise.
Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction
This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating risk of humans going extinct. I would trade increased chance of extinction for a commensurate change in the probable outcomes if we survive. Frankly I would consider it insane not to be willing make such a trade.
Warning: Shameless Self Promotion ahead
Perhaps part of the difficulty here is the attempt to spur a wide rationalist community on the same site frequented by rationalists with strong obscure positions on obscure topics.
Early in LessWrong's history, discussion of FAI was discouraged so that it didn't just become a site about FAI and the singularity, but remained a forum about human rationality more generally.
I can't track down the article(s) from EY about how thinking about AI can be too absorbing, and how in order to properly create a community, you have to truly put aside the ulterior motive of advancing FAI research.
It might be wise for us to again deliberately shift our focus away from FAI and onto human rationality and how it can be applied more widely (say, to science in general).
Enter the SSP: For months now I've been brainstorming a community to educate people on the creation and use of 3D printers, with the eventual goal of making much better 3D printers. So this is a different big complicated problem with a potential high payoff, and it ties into many fields, provides tangible previews of the singularity, can benefit from the involvement of people with almost any skill set, and seems to be much ...
I like your post. I wouldn't go quite so far as to ascribe outright negative utility to SIAI donations - I believe you underestimate just how much potential social influence money provides. I suspect my conclusion there would approximately mirror Vassar's.
It would not be at all surprising if the founders of Less Wrong have a similar unusual abundance of the associated with Aspergers Syndrome. I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.
(Typo: I think you meant to include 'traits' or similar in there.)
While Eliezer occasionally takes actions that seem clearly detrimental to his cause, I do suggest that Eliezer is at least in principle aware of the dynamics you discuss. His alter ego "Harry Potter" has had similar discussions with his Draco in his fanfiction.
Also note that appearing too sophisticated would be extremely dangerous. If Eliezer or SIAI gains the sort of status and credibility you would like them to seek they open themselves to threats from governments and paramilitary organisations. If you are trying to take over the world it is fa...
I raised a similar point on the IEET existential risk mailing list in a reply to James J. Hughes:
...Michael
For the record, I have no problem with probability estimates. I am less and less willing to offer them myself, however, since we have the collapse of the Soviet Union etc. as evidence of the chaotic and unpredictable nature of history, Ray's charts notwithstanding.
What I find a continuing source of amazement is that there is a subculture of people half of whom believe that AI will lead to the solving of all mankind's problems (whom we might call Kurzweilian S^) and the other half of which is more or less certain (75% certain) that it will lead to annihilation. Let's call the latter the SIAI S^.
Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.
And instead of waging desperate politico-military struggle to stop all this suicidal AI research you cheerlead for it, and focus your efforts on risk mitigation on discussions of how a friendly god-like AI could save us from annihilation.
You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.
But as someone deeply concerned...
Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations. ... You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.
This is a perfect example of where the 'outside view' can go wrong. Even the most basic 'inside view' of the topic would make it overwhelmingly obvious why the "75% certain of death by AI" folks could be allied with (or even be the same people as!) the "solve all problems through AI" group. Splitting the two positions prematurely and trying to make a simple model of political adversity like that is just naive.
I personally guess >= 75% for AI death and also advocate FAI research. Preventing AI development indefinitely via desperate politico-military struggle would just not work in the long term. Trying would be utter folly. Never mind the even longer term, which would probably result in undesirable outcomes even if humanity did manage to artificially stunt its own progress in such a manner.
(The guy also uses 'schizophrenic' incorrectly.)
But what if you're increasing existential risk, because encouraging SIAI staff to censor themselves will make them neurotic and therefore less effective thinkers? We must all withhold karma from multifoliaterose until this undermining stops! :-)
When I'm talking to someone I respect (and want to admire me), I definitely feel an urge to distance myself from EY. I feel like I'm biting a social bullet in order to advocate for SIAI-like beliefs or action.
What's more, this casts a shadow over my actual beliefs.
This is in spite of the fact that I love EY's writing, and actually enjoy his fearless geeky humor ("hit by a meteorite" is indeed more fun than the conventional "hit by a bus").
The fear of being represented by EY is mostly due to what he's saying, not how he's saying it. That is, even if he were always dignified and measured, he'd catch nearly as much flak. If he'd avoided certain topics entirely, that would have made a significant difference, but on the other hand, he's effectively counter-signaled that he's fully honest and uncensored in public (of course he is probably not, exactly), which I think is also valuable.
I think EY can win by saying enough true things, convincingly, that smart people will be persuaded that he's credible. It's perhaps true that better PR will speed the process - by enough for it to be worth it? That's up to him.
The comments in this diavlog with Scott Aaronson - while...
I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.
I find it impossible to believe that the author of Harry Potter and the Methods of Rationality is oblivious to the first impression he creates. However, I can well believe that he imagines it to be a minor handicap which will fade in importance with continued exposure to his brilliance (as was the fictional case with HP). The unacknowledged problem in the non-fictional case, of course, is in maintaining that continued exposure.
I am personally currently skeptical that the singularity represents existential risk. But having watched Eliezer completely confuse and irritate Robert Wright, and having read half of the "debate" with Hanson, I am quite willing to hypothesize that the explanation of what the singularity is (and why we should be nervous about it) ought to come from anybody but Eliezer. He speaks and writes clearly on many subjects, but not that one.
Perhaps he would communicate more successfully on this topic if he tried a dialog format. But it would have to be one in which his constructed interlocutors are convincing opponents, rather than straw men.
I think a largish fraction of the population has worries about human extinction / the end of the world. Very few associate this with the phrase "existential risk" -- I for one had never heard the term until after I had started reading about the technological singularity and related ideas. Perhaps rebranding of a sort would help you further the cause. Ditto for FAI - I think 'Ethical Artificial Intelligence' would get the idea across well enough and might sound less flakey to certain audiences.
"Ethical Artificial Intelligence" sounds great and makes sense without having to know the background of the technological singularity as "Friendly Artificial Intelligence" does. Every time I try to mention FAI to someone without any background on the topic I always have to take two steps back in the conversation and it becomes quickly confusing. I think I could mention Ethical AI and then continue on with whatever point I was making without any kind of background and it would still make the right connections.
I also expect it would appeal to a demographic likely to support the concept as well. People who worry about ethical food, business, healthcare, etc. would be likely to worry about existential risk on many levels.
In fact I think I'll just go ahead and start using Ethical AI from now on. I'm sure people in the FAI community would understand what I'm talking about.
Given how superficially insane Eliezer's beliefs seem he has done a fantastic job of attracting support for his views.
Eliezer is popularizing his beliefs, not directly through his own writings, but by attracting people (such as conference speakers and this comment writer who is currently writing a general-audience book) who promote understanding of issues such as intelligence explosion, unfriendly AI and cryonics.
Eliezer is obviously not neurotypical. The non-neurotypical have a tough time making arguments that emotionally connect. Given that Eliezer has a massive comparative disadvantage in making such arguments, we shouldn't expect him to spend his time trying to become slightly better at doing so.
Eliezer might not have won the backing of people such as super-rationalist self-made tech billionaire Peter Thiel had Eliezer devoted less effort to rational arguments.
I disagree strongly with this post. In general, it is a bad idea to refrain from making claims that one believes are true simply because those claims will make people less likely to listen to other claims. That direction lies the downwards spiral of emotional manipulation, rhetoric, and other things not conducive to rational discourse.
Would one under this logic encourage the SIAI to make statements that are commonly accepted but wrong in order to make people more likely to listen to the SIAI? If not, what is the difference?
This page is now the 8th result for a Google search for 'existential risk' and the 4th result for 'singularity existential risk'.
Regardless of the effect SIAI may have had on the public image of existential risk reduction, it seems this is unlikely to be helpful.
Edit: it is now 7th and first, respectively. This is plusungood.
This is partially because Google gives a ranking boost to things it sees as recent, so it may not stay that well ranked.
Solid, bold post.
Eliezer's comments on his personal importance to humanity remind me of the Total Perspective Vortex from Hitchhiker's. Everyone who gets perspective from the Vortex goes mad; Zaphod Beeblebrox goes in and finds out he's the most important person in human history.
Eliezer's saying he's Zaphod Beeblebrox. Maybe he is, but I'm betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don't believe that they a...
Um, I wasn't basing my conclusion on multifoliaterose's statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.
Which of these statements do you reject?:
FAI is the most important project on earth, right now, and probably ever.
FAI may be the difference between a doomed multiverse of [very large number] of sentient beings. No project in human history is of greater importance.
You are the most likely person - and SIAI the most likely agency, because of you - to accomplish saving the multiverse.
Number 4 is unnecessary for your being the most important person on earth, but:
And then you've blamed multi for this. He is trying to help an important cause; both multifoliaterose and XiXiDu are, in my opinion, acting in a manner they believe will help the existential risk cause.
And your final statement, that multifoliaterose is damaging an important cause's PR appears entirely deaf to multi's post. He's trying to help the...
My understanding of JRMayne's remark is that he himself construes your statements in the way that I mentioned in my post.
If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.
Note that you have not made a disclaimer which rules out the possibility that you claim that you're the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.
Gosh, I find this all quite cryptic.
Suppose I, as Lord Chief Prosecutor of the Heathens say:
All heathens should be jailed.
Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.
One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that's an unpopular position. This is a mark of a political argument gone wrong?
I'm trying to find another logical conclusion to XiXiDu's cited statements (or a raft of others in the same vein.) Is there one I don't see? Is it just that you're probably the most important entity in history, but, you know, maybe not? Is it that there's only a 5% chance that you're the most important person in human history?
I have not argued that you should not say these things, BTW. I have argued that you probably should not think them, because they are very unlikely to be true.
Damnit! My smug self assurance that I could postpone thinking about these issues seriously because I'm an SIAI donor .... GONE! How am I supposed to get any work done now?
Seriously though, I do wish the SIAI toned down its self importance and incredible claims, however true they are. I realize, of course, that dulling some claims to appear more credible is approaching a Dark Side type strategy, but... well, no buts. I'm just confused.
During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims.
Can you tell us more about how you've seen people react to Yudkowsky? That these negative reactions are significant is crucial...
Negative reactions to Yudkowsky from various people (academics concerned with x-risk), just within the past few weeks:
I also have an extreme distaste for Eliezer Yudkowsky, and so I have a hard time forcing myself to cooperate with any organization that he is included in, but that is a personal matter.
You know, maybe I'm not all that interested in any sort of relationship with SIAI after all if this, and Yudkowsky, are the best you have to offer.
...
There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.
...
Wow, that's an incredibly arrogant put-down by Eliezer..SIAI won't win many friends if he puts things like that...
...
...he seems to have lost his mind and written out of strong feelings. I disagree with him on most of these matters.
...
...Questions of priority - and the relative intensity of suffering between members of different species...
who has never written a single computer program
utterly false, wrote my first one at age 5 or 6, in BASIC on a ZX-81 with 4K of RAM
The fact that a lot of these reactions are based on false info is worth noting. It doesn't defeat any arguments directly, but it says that the naive model where everything happens because of the direct perception of actions I directly control is false.
Is it likely that someone who's doing interesting work that's publicly available wouldn't attract some hostility?
But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I'm not looking, so would like to know more about that.
Yudkowsky-hatred isn't the risk; Yudkowsky-mild-contempt is. People engage with things they hate; sometimes it brings respect and attention to both parties (by polarizing a crowd that would otherwise be indifferent). But you never want to be exposed to mild contempt.
I can think of some examples of conversations about Eliezer that would fit the category but it is hard to translate them to text. The important part of the reaction was non-verbal. Cryonics was one topic and the problem there wasn't that it was uncredible but that it was uncool. Another topic is the old "thinks he can know something about Friendly AIs when he hasn't even made an AI yet" theme. Again, I've seen that reaction evident through mannerisms that in no way translate to text. You can convey that people aren't socially relevant without anything so crude as saying stuff.
Cryonics was one topic and the problem there wasn't that it was uncredible but that it was uncool.
[insert the obvious bad pun here]
Can you tell us more about how you've seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn't a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I'm not looking, so would like to know more about that.
I haven't seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson's view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer's status to compensate for what people perceive as inappropriate status grubbing on his part.
Most of the smart people who I know (including myself) perceive him as exhibiting a high degree of overconfidence in the validity of his views about the world.
This leads some of them to conceptualize him as a laughingstock, as somebody who's totally oblivious, and to feel that the idea that we should be thinking about artificial intelligence is equally worthy of ridicule. I personally am quite...
I haven't seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him
Ditto.
I know of a lot of very smart people (ok, less than 10, but still, more than 1) who essentially read Eliezer's AI writings as a form of entertainment, and don't take them even slightly seriously. This is partly because of the Absurdity Heuristic, but I think it's also because of Eliezer's writing style, and statements like the one in the initial post.
I personally fall somewhere between these people and, say, someone who has spent a summer at the SIAI on the 'taking Eliezer seriously' scale - I think he (and the others) probably have a point, and I at least know that they intend to be taken seriously, but I've never gotten round to doing anything about it.
It sounds to me like half of the perceived public image problem comes from apparently blurred lines between the SIAI and LessWrong, and between the SIAI and Eliezer himself. These could be real problems - I generally have difficulty explaining one of the three without mentioning the other two - but I'm not sure how significant it is.
The ideal situation would be that people would evaluate SIAI based on its publications, the justification of the research areas, and whether the current and proposed projects satisfy those goals best, are reasonably costed, and...
I don't find persuasive your arguments that the following policy suggestion has high impact (or indeed is something to worry about, in comparison with other factors):
"requiring [SIAI] staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible"
(Note that both times I qualified the suggestion to contact SIAI (instead of starting a war) with "if you have a reasonable complaint/usable suggestion for improvement".)
The one "uncredible" claim mentioned - about Eliezer being "hit by a meteorite" - sounds as though it is the kind of thing he might plausibly think. Not too much of a big deal, IMO.
As with many charities, it is easy to think the SIAI might be having a negative effect - simply because it occupies the niche of another organisation that could be doing a better job - but what to do? Things could be worse as well - probably much worse.
I'm definitely interested in funding an existential risk organization. SIAI would have to be a lot more transparent than it is right now for me to be interested in funding SIAI. For me personally, it wouldn't be enough for SIAI to just take measures to avoid poisoning the meme; I would need to see a lot more evidence that SIAI is systematically working to maximize its impact on existential risk reduction.
As things stand I prefer to hold out for a better organization. But if SIAI exhibited transparency and accountability of levels similar to those of GiveWell (welcoming and publicly responding to criticism regularly, regularly posting detailed plans of action, seeking out feedback from subject matter specialists and making this public when possible, etc.) I would definitely fund SIAI and advocate that others do so as well.
"transparency"? I thought the point of your post was that SIAI members should refrain from making some of their beliefs easily available to the public?
There are a lot of second and higher order effects in PR. You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that's more important. If Eliezer had shied away from stating some of the more "uncredible" ideas because there wasn't enough evidence to convince a typical smart person, it would surely prompt questions of "what do you really think about this?" or fail to attract people who are currently interested in SIAI because of those ideas.
If SIAI provided compelling evidence that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer's comment appropriate.
Suppose Eliezer hadn't made that claim, and somebody asks him, "do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?", which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? "I can't give you the answer because I don't have enough evidence to convince a typical smart person?"
I think you make a good point that it's important to think about PR, but I'm not at all convinced that the specific pieces of advice you give are the right ones.
higher expected value to humanity than what virtually everybody else is doing,
For what definitions of "value to humanity" and "virtually everybody else"?
If "value to humanity" is assessed as in Bostrom's Astronomical Waste paper, that hugely favors effects on existential risk vs alleviating current suffering or increasing present welfare (as such, those also have existential risk effects). Most people don't agree with that view, so asserting that as a privileged frame can be seen as a hostile move (attacking the value systems of others in favor of a value system according to which one's area of focus is especially important). Think of the anger directed at vegetarians, or those who guilt-trip others about not saving African lives. And of course, it's easier to do well on a metric that others are mostly not focused on optimizing.
Dispute about what best reduces existential risk, and annoyance at overly confident statements there, is a further issue, but I think that asserting uncommon moral principles (which happen to rank one's activities as much more valuable than most people would rank them) is a big factor on its own.
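As a toy illustration of why the Astronomical Waste framing dominates once it is accepted, with deliberately made-up placeholder numbers rather than Bostrom's actual estimates:

```python
# Illustrative-only inputs: a vast but finite potential future population,
# and a very small reduction in extinction probability.
potential_future_lives = 1e30   # placeholder for "astronomical" future value
risk_reduction = 1e-9           # shaving a billionth off extinction probability

lives_saved_today = 1e6         # a huge conventional humanitarian success

expected_future_lives_gained = potential_future_lives * risk_reduction
print(expected_future_lives_gained / lives_saved_today)  # ~1e15 under these assumptions
```

Under that frame even a microscopic change in extinction risk swamps present-day welfare, which is exactly why treating the frame as obvious can read as an attack on other people's values.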
So my impression has been that the situation is that
(i) Eliezer's writings contain a great deal of insightful material.
(ii) These writings do not substantiate the idea [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing].
I say this having read perhaps around a thousand pages of what Eliezer has written. I consider the amount of reading that I've done to be a good "probabilistic proof" that the points (i) and (ii) apply to the portion of his writings that I haven't read.
That being said, if there are any particular documents that you would point me to which you feel do provide satisfactory evidence for the idea [that Eliezer's work has higher expected value to humanity than what virtually everybody else is doing], I would be happy to examine them.
I'm unwilling to read the whole of his opus given how much of it I've already read without being convinced. I feel that the time that I put into reducing existential risk can be used to better effect in other ways.
Yes, I agree with you. I plan on making my detailed thoughts on these points explicit. I expect to be able to do so within a month.
But for a short answer, I would say that the situation is mostly that I think that:
Eliezer isn't putting a dent in the FAI problem.
You didn't succeed in communicating your problem, otherwise someone else would have explained earlier. I had been reading your posts on the issue and didn't have even the tiniest hint of an idea that the piece you were missing was an explanation of Bayesian reasoning until just before writing that comment, and even then I was less optimistic about the comment doing anything for you than I had been for earlier comments. I'm still puzzled and unsure whether it actually was Bayesian reasoning or something else in the comment that apparently helped you. If it was, you should read http://yudkowsky.net/rational/bayes and some of the posts here tagged "bayesian".
AI will be developed by a small team (at this time) in secret
I find this very unlikely as well, but Anna Salamon once put it as something like "9 Fields-Medalist types plus (an eventual) methodological revolution" which made me raise my probability estimate from "negligible" to "very small", which I think given the potential payoffs, is enough for someone to be exploring the possibility seriously.
I have a suspicion that Eliezer isn't privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.
Turing's theories involving infinite computing power contributed to building actual computers, right? I don't see why such theories wouldn't be useful stepping stones for building AIs as well. There's a lot of work on making AIXI practical, for example (which may be disastrous if they succeeded since AIXI wasn't designed to be Friendly).
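For reference, the idealized agent in question (as I recall Hutter's AIXI definition; treat the details here as approximate) chooses actions by an expectimax over all programs for a universal Turing machine, which is exactly where the assumption of unbounded computing power enters:

```latex
a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \big( r_k + \cdots + r_m \big)
  \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

Here U is a universal Turing machine, q ranges over all programs consistent with the interaction history, and \ell(q) is program length. The inner sum over all programs is incomputable, which is why the "practical AIXI" work necessarily consists of approximations.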
If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.
Amusing anecdote: There was a story about this issue on Slashdot one time, where someone possessing kiddy porn had obscured the faces by doing a swirl distortion, but investigators were able to sufficiently reverse this by doing an opposite swirl and so were able to identify the victims.
Then someone posted a comment to say that if you ever want to avoid this problem, you need to do something like a Gaussian blur, which deletes the information contained in that portion of the image.
Somebody replied to that comment and said, "Yeah. Or, you know, you could just not molest children."
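For what it's worth, the technical claim in that anecdote checks out: a swirl just moves pixels around (an invertible warp), whereas a wide Gaussian blur is a convolution whose frequency response falls to essentially zero at high frequencies, so fine detail is destroyed rather than relocated. A rough sketch of that point, with illustrative numbers of my own:

```python
# Illustrative only: why deconvolving a wide Gaussian blur fails in
# practice, while undoing a swirl (a pixel-rearranging warp) does not.
import numpy as np

n = 256
sigma = 8.0  # a fairly aggressive blur width, in pixels

# 1-D Gaussian blur kernel, normalized to sum to 1, and its frequency response.
x = np.arange(n) - n // 2
kernel = np.exp(-x**2 / (2.0 * sigma**2))
kernel /= kernel.sum()
response = np.abs(np.fft.fft(np.fft.ifftshift(kernel)))

print(response.max())  # about 1 at zero frequency: overall brightness survives
print(response.min())  # many orders of magnitude smaller at high frequencies

# "Inverse blurring" means dividing the blurred image's spectrum by
# `response`; where that response is effectively zero, the original
# high-frequency detail (edges, faces) has been multiplied away and
# only amplified noise comes back.
```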
You have a good point. It would be completely unreasonable to ban topics in such a manner while simultaneously expecting to maintain an image of being down to earth or particularly credible to intelligent external observers. It also doesn't reflect well on the SIAI if its authorities claim they cannot consider relevant risks due to psychological or psychiatric difficulties. That is incredibly bad PR. It is exactly the kind of problem this post discusses.
JoshuaZ:
You don't seem to realize that claims like the ones in the post in question are a common sort of claim that makes people vulnerable to neuroses develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm.
However, it seems that in general, the mere fact that certain statements may cause psychological harm to some people is not considered a sufficient ground for banning or even just discouraging such statements here. For example, I am sure that many religious people would find certain views often expressed here shocking and deeply disturbing, and I have no doubt that many of them could be driven into serious psychological crises by exposure to such arguments, especially if they're stated so clearly and poignantly that they're difficult to brush off or rationalize away. Or, to take another example, it's very hard to scare me with hypotheticals, but the post "The Strangest Thing An AI Could Tell You" and the subsequent thread came pretty close; I'm sure that at least a few readers of this blog didn't sleep well if they happened to read that right before bedtime.
So, what exact sorts of potential psychological harm constitute sufficient grounds for proclaiming a topic undesirable? Is there some official policy about this that I've failed to acquaint myself with?
This post reminds me of the talk at this year's H+ summit by Robert Tercek. Amongst other things, he was pointing out how the PR battle over transhumanist issues was already lost in popular culture, and that the transhumanists were not helping matters by putting people with very freaky ideas in the spotlight.
I wonder if there are analogous concerns here.
Aside from the body of the article, which is just "common" sense given the author's opinion against the current policies of SIAI, I found the final paragraph interesting because I also exhibit "an unusually high abundance of the traits associated with Aspergers Syndrome." Perhaps possessing that group of traits gives one a predilection to seriously consider existential risk reduction by being socially detached enough to see the bigger picture. Perhaps LW is somewhat homogeneously populated with this "certain kind" of people. So, how do we gain credibility with normal people?
Basically, we need a PR campaign. It needs to be tightly focused: Just existential risk, don't try to sell the whole worldview at once (keep inferential distance in mind). Maybe it shouldn't even be through SIAI; maybe we should create a separate foundation called The Foundation to Reduce Existential Risk (or something). ("What do you do?" "We try to make sure the human race is still here in 1000 years. Can we interest you in our monthly donation plan?")
And if our PR campaign even slightly reduces the chances of a nuclear war or an unfriendly AI, it could be one of the most important things anyone has ever done.
Who do we know who has the resources to make such a campaign?
Huh, interesting. I wrote something very similar on my blog a while ago. (That was on cryonics, not existential risk reduction, and it goes on about cryonics specifically. But the point about rhetoric is much the same.)
Anyways, I agree. At the very least, some statements made by smart people (including Yudkowsky) have had the effect of increasing my blanket skepticism in some areas. On the other hand, such statements have me thinking more about the topics in question than I might have otherwise, so maybe that balances out. Then again, I'm more willin...
Come on... Who does not love being a social outcast? I made a decision when I was about 12 that rather than trying to conform to other people's expectations of me, I was going to do / express support for exactly what I thought made sense, even if something I supported was related to something I could not support, and then get to know people who seemed to be making similar decisions. It's arrogant and has numerous flaws, but it has generally worked for me. Social status and popularity are overrated, compared to the benefits of meeting a large number of people you can interact with freely.
The discussion reassures me that EY is not, for anyone here, a cult leader.
I haven't evaluated SIAI carefully yet, but they do open themselves up to these sorts of attacks when they advocate concentrating charitable giving to the marginally most efficient utility generator (up to $1M).
I haven't evaluated SIAI carefully yet, but they do open themselves up to these sorts of attacks when they advocate concentrating charitable giving to the marginally most efficient utility generator (up to $1M).
To not advocate that would seem to set them up for attacks on their understanding of economics.
I suggest "and the SIAI is the marginally most efficient utility generator" is the one that opens them up to attacks. (I'm not saying that they shouldn't make that claim.)
Just take the best of anybody and discard the rest. Yudkowsky has some very good points (about 80% of his writings, in my view) - take them and say thank you.
When he or the SIAI misses the point, to put it mildly, you know better anyway, don't you?
But there are (somewhat) wise individuals who have not yet thought carefully about existential risk. They're forced to heuristically decide whether or not thinking about it more makes sense. Given what they know at present, it may be rational for them to dismiss Eliezer as being like a very smart version of the UFO conspiracy theorists or something like that. Because of the halo effect issue that Yvain talks about, this may lower their willingness to consider existential risk at all.
Most people do not systematically go through every statement that some particular person has made. If someone has heard primarily negative things about somebody else, then that reduces the chance of them even bothering to look at the person's other writings. This is quite rational behavior, since there are a lot of people out there and one's time is limited.
My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk.
Seems like a reasonable position to me.
An important part of existential risk reduction is making sure that people who are likely to work on AI, or fund it, have read the sequences, and are at least aware of how most possible minds are not minds we would want, and of how dangerous recursive self-improvement could be.
My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk.
Seems like a reasonable position to me.
Really? I don't understand this position at all. The vast majority of the planet isn't very rational, and the people with lots of resources are often not rational. If one can get some of those people to direct their resources in the right directions, then that's still a net win for preventing existential risk even if they aren't very rational. If, say, a hundred million dollars more gets directed to existential risk, then even if much of that goes to the less likely existential risks, that's still an overall reduction in existential risk and a general increase in the sanity waterline.
[Added 02/24/14: Some time after writing this post, I discovered that it was based on a somewhat unrepresentative picture of SIAI. I still think that the concerns therein were legitimate, but they had less relative significance than I had thought at the time. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]
A common trope on Less Wrong is the idea that governments and the academic establishment have neglected to consider, study and work against existential risk on account of their shortsightedness. This idea is undoubtedly true in large measure. In my opinion and in the opinion of many Less Wrong posters, it would be very desirable to get more people thinking seriously about existential risk. The question then arises: is it possible to get more people thinking seriously about existential risk? A first approximation to an answer to this question is "yes, by talking about it." But this answer requires substantial qualification: if the speaker or the speaker's claims have low credibility in the eyes of the audience then the speaker will be almost entirely unsuccessful in persuading his or her audience to think seriously about existential risk. Speakers who have low credibility in the eyes of an audience member decrease the audience member's receptiveness to thinking about existential risk. Rather perversely, speakers who have low credibility in the eyes of a sufficiently large fraction of their audience systematically raise existential risk by decreasing people's inclination to think about existential risk. This is true whether or not the speakers' claims are valid.
As Yvain has discussed in his excellent article titled The Trouble with "Good":
When Person X makes a claim which an audience member finds uncredible, the audience member's brain (semiconsciously) makes a mental note of the form "Boo for Person X's claims!" If the audience member also knows that Person X is an advocate of existential risk reduction, the audience member's brain may (semiconsciously) make a mental note of the type "Boo for existential risk reduction!"
The negative reaction to Person X's claims is especially strong if the audience member perceives Person X's claims as arising from a (possibly subconscious) attempt on Person X's part to attract attention and gain higher status, or even simply to feel as though he or she has high status. As Yvain says in his excellent article titled That other kind of status:
I'm presently a graduate student in pure mathematics. During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims. Since Eliezer supports existential risk reduction, I believe that this has made them less inclined to think about existential risk than they were before they heard of Eliezer.
There is also a social effect which compounds the issue that I just mentioned: even people who are not directly influenced by it become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that others find uncredible.
I'm very disappointed that Eliezer has made statements such as:
which are easily construed as claims that his work has higher expected value to humanity than the work of virtually all humans in existence. Even if such claims are true, people do not have the information that they need to verify that they are true, and so virtually everybody who could be helping to assuage existential risk finds such claims uncredible. Many such people have an especially negative reaction to such claims because the claims can be viewed as arising from a tendency toward status grubbing, and humans are very strongly wired to be suspicious of those who they suspect to be vying for inappropriately high status. I believe that such people who come into contact with Eliezer's statements like the one I have quoted above are statistically less likely to work to reduce existential risk than they were before coming into contact with such statements. I therefore believe that by making such claims, Eliezer has increased existential risk.
I would go further than that and say that I presently believe that donating to SIAI has negative expected impact on existential risk reduction, on account of the fact that SIAI staff are making uncredible claims which are poisoning the existential risk reduction meme. This is a matter on which reasonable people can disagree. In a recent comment, Carl Shulman expressed the view that though SIAI has had some negative impact on the existential risk reduction meme, the net impact of SIAI on the existential risk meme is positive. In any case, there's definitely room for improvement on this point.
Last July I made a comment raising this issue and Vladimir_Nesov suggested that I contact SIAI. Since then I have corresponded with Michael Vassar about this matter. My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk. I may have misunderstood Michael's position and encourage him to make a public statement clarifying his position on this matter. If I have correctly understood his position, I do not find Michael Vassar's position on this matter credible.
I believe that if Carl Shulman is right, then donating to SIAI has positive expected impact on existential risk reduction. I believe that even if this is the case, a higher expected value strategy is to withhold donations from SIAI and to inform SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible. I suggest that those who share my concerns adopt the latter policy until their concerns have been resolved.
Before I close, I should emphasize that my post should not be construed as an attack on Eliezer. I view Eliezer as an admirable person and don't think that he would ever knowingly do something that raises existential risk. Roko's Aspergers Poll suggests a strong possibility that the Less Wrong community exhibits an unusually high abundance of the traits associated with Aspergers Syndrome. It would not be at all surprising if the founders of Less Wrong have a similar unusual abundance of the traits associated with Aspergers Syndrome. I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.