Existential Risk and Public Relations

36 Post author: multifoliaterose 15 August 2010 07:16AM

[Added 02/24/14: Some time after writing this post, I discovered that it was based on a somewhat unrepresentative picture of SIAI. I still think that the concerns therein were legitimate, but they had less relative significance than I had thought at the time. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

A common trope on Less Wrong is the idea that governments and the academic establishment have neglected to consider, study and work against existential risk on account of their shortsightedness. This idea is undoubtedly true in large measure. In my opinion and in the opinion of many Less Wrong posters, it would be very desirable to get more people thinking seriously about existential risk. The question then arises: is it possible to get more people thinking seriously about existential risk? A first approximation to an answer to this question is "yes, by talking about it." But this answer requires substantial qualification: if the speaker or the speaker's claims have low credibility in the eyes of the audience then the speaker will be almost entirely unsuccessful in persuading his or her audience to think seriously about existential risk. Speakers who have low credibility in the eyes of an audience member decrease the audience member's receptiveness to thinking about existential risk. Rather perversely, speakers who have low credibility in the eyes of a sufficiently large fraction of their audience systematically raise existential risk by decreasing people's inclination to think about existential risk. This is true whether or not the speakers' claims are valid.

As Yvain has discussed in his excellent article titled The Trouble with "Good":

To make an outrageous metaphor: our brains run a system rather like Less Wrong's karma. You're allergic to cats, so you down-vote "cats" a couple of points. You hear about a Palestinian committing a terrorist attack, so you down-vote "Palestinians" a few points. Richard Dawkins just said something especially witty, so you up-vote "atheism". High karma score means seek it, use it, acquire it, or endorse it. Low karma score means avoid it, ignore it, discard it, or condemn it.

When Person X makes a claim which an audience member finds uncredible, the audience member's brain (semiconsciously) makes a mental note of the form "Boo for Person X's claims!"  If the audience member also knows that Person X is an advocate of existential risk reduction, the audience member's brain may (semiconsciously) make a mental note of the type "Boo for existential risk reduction!"

The negative reaction to Person X's claims is especially strong if the audience member perceives Person X's claims as arising from a (possibly subconscious) attempt on Person X's part to attract attention and gain higher status, or even simply to feel as though he or she has high status. As Yvain says in his excellent article titled That other kind of status:

But many, maybe most human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They're a quick and easy way to have most of society think you're stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?

[...]

a person trying to estimate zir social status must balance two conflicting goals. First, ze must try to get as accurate an assessment of status as possible in order to plan a social life and predict others' reactions. Second, ze must construct a narrative that allows them to present zir social status as as high as possible, in order to reap the benefits of appearing high status.

[...]

In this model, people aren't just seeking status, they're (also? instead?) seeking a state of affairs that allows them to believe they have status. Genuinely having high status lets them assign themselves high status, but so do lots of other things. Being a 9-11 Truther works for exactly the reason mentioned in the original quote: they've figured out a deep and important secret that the rest of the world is too complacent to realize.

I'm presently a graduate student in pure mathematics. During graduate school I've met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer's claims. Since Eliezer supports existential risk reduction, I believe that this has made them less inclined to think about existential risk than they were before they heard of Eliezer.

There is also a social effect which compounds the issue just described: even people who are not directly influenced by it become less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that people find uncredible.

I'm very disappointed that Eliezer has made statements such as:

If I got hit by a meteorite now, what would happen is that Michael Vassar would take over sort of taking responsibility for seeing the planet through to safety...Marcello Herreshoff would be the one tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don't know of any other person who could do that...

which are easily construed as claims that his work has higher expected value to humanity than the work of virtually all humans in existence. Even if such claims are true, people do not have the information that they need to verify that they are true, and so virtually everybody who could be helping to assuage existential risk finds such claims uncredible. Many such people have an especially negative reaction to such claims because the claims can be viewed as arising from a tendency toward status grubbing, and humans are very strongly wired to be suspicious of those who they suspect to be vying for inappropriately high status. I believe that people who come into contact with statements of Eliezer's like the one quoted above are statistically less likely to work to reduce existential risk than they were before coming into contact with such statements. I therefore believe that by making such claims, Eliezer has increased existential risk.

I would go further than that and say that I presently believe that donating to SIAI has negative expected impact on existential risk reduction, because SIAI staff are making uncredible claims which are poisoning the existential risk reduction meme. This is a matter on which reasonable people can disagree. In a recent comment, Carl Shulman expressed the view that though SIAI has had some negative impact on the existential risk reduction meme, the net impact of SIAI on the existential risk meme is positive. In any case, there's definitely room for improvement on this point.

Last July I made a comment raising this issue and Vladimir_Nesov suggested that I contact SIAI. Since then I have corresponded with Michael Vassar about this matter. My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk. I may have misunderstood Michael's position and encourage him to make a public statement clarifying his position on this matter. If I have correctly understood his position, I do not find Michael Vassar's position on this matter credible.

I believe that if Carl Shulman is right, then donating to SIAI has positive expected impact on existential risk reduction. I believe that even if this is the case, a higher expected value strategy is to withhold donations from SIAI and to inform SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible. I suggest that those who share my concerns adopt the latter policy until their concerns have been resolved.

Before I close, I should emphasize that my post should not be construed as an attack on Eliezer. I view Eliezer as an admirable person and don't think that he would ever knowingly do something that raises existential risk. Roko's Asperger's Poll suggests a strong possibility that the Less Wrong community exhibits an unusually high abundance of the traits associated with Asperger's Syndrome. It would not be at all surprising if the founders of Less Wrong have a similarly unusual abundance of these traits. I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

Comments (613)

Comment author: pnrjulius 12 June 2012 02:58:57AM 0 points [-]

Basically, we need a PR campaign. It needs to be tightly focused: Just existential risk, don't try to sell the whole worldview at once (keep inferential distance in mind). Maybe it shouldn't even be through SIAI; maybe we should create a separate foundation called The Foundation to Reduce Existential Risk (or something). ("What do you do?" "We try to make sure the human race is still here in 1000 years. Can we interest you in our monthly donation plan?")

And if our PR campaign even slightly reduces the chances of a nuclear war or an unfriendly AI, it could be one of the most important things anyone has ever done.

Who do we know who has the resources to make such a campaign?

Comment author: Eneasz 24 August 2010 05:46:22PM 30 points [-]

informing SIAI that you will fund them if and only if they require their staff to exhibit a high degree of vigilance about the possibility of poisoning the existential risk meme by making claims that people find uncredible

I believe you are completely ignoring the status-demolishing effects of hypocrisy and insincerity.

When I first started watching Blogging Heads discussions featuring Eliezer I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts, here comes the abhorrent back-pedaling and waffling". Instead he met it head on with complete honesty and did so in a way I've never seen other people able to pull off - without sounding nuts at all. In fact, sounding very reasonable. I've since updated enough that I no longer wince and hold my breath; I smile and await the triumph.

If, as most people (and nearly all politicians) do, he had waffled and presented an argument that he doesn't honestly hold but that is more publicly acceptable, I'd have felt disappointed and a bit sickened, and I'd have tuned out the rest of what he had to say.

Hypocrisy is transparent. People (including neurotypical people) very easily see when others are making claims they don't personally believe, and they universally despise such actions. Politicians and lawyers are among the most hated groups in modern societies, in large part because of this hypocrisy. They are only tolerated because they are seen as a necessary evil.

Right now, People Working To Reduce Existential Risk are not seen as necessary. So it's highly unlikely that hypocrisy among them would be tolerated. They would repel anyone currently inclined to help, and their hypocrisy wouldn't draw in any new support. The answer isn't to try to deceive others about your true beliefs, it is to help make those beliefs more credible among the incredulous.

I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).

Comment author: pnrjulius 12 June 2012 02:59:28AM 2 points [-]

On the other hand... people say they hate politicians and then vote for them anyway.

So hypocrisy does have upsides, and maybe we shouldn't dismiss it so easily.

Comment author: CuSithBell 12 June 2012 03:35:00AM 0 points [-]

On the other hand... people say they hate politicians and then vote for them anyway.

Who are they going to vote for instead?

Comment author: pnrjulius 12 June 2012 03:40:03AM 2 points [-]

Well yes, exactly. If it takes a certain degree of hypocrisy to get campaign contributions, advertising, etc., and it takes these things to get elected... then you're going to have to have a little hypocrisy in order to win.

And we do want to win, right? We want to actually reduce existential risk, and not just feel like we are?

If you can find a way to persuade people (and win elections, never forget that making policy in a democracy means winning elections) that doesn't involve hypocrisy, I'm all ears.

Comment author: Carinthium 23 November 2010 09:19:06AM *  0 points [-]

The above is a good comment, but 26 karma? How did it deserve that?

Comment author: wnoise 24 November 2010 02:06:58AM 1 point [-]

Karma (despite the name) has very little to do with "deserve". All it really means is that 26 (now 25) more people desire more content like this than desire less content like this.

Comment author: Carinthium 24 November 2010 02:26:46AM -1 points [-]

On the other hand, it is a good thing to shift the Karma system to better resemble a system based on merit - i.e., they should vote down the comment up to a point, because although it is a good one it doesn't deserve its very high score.

Comment author: wnoise 24 November 2010 05:19:53PM *  6 points [-]

Why should something that is mildly liked by many not have a higher score than something that is highly liked by fewer?

In any case, it's rather hard to do. How do you propose to make your standards for a good comment the one other people use? Each individual sets their own level at which they will up- or down-vote a comment or post. They can indeed take into account the current score of a post, but that does rather poorly as others come by and change it. Should the first guy who up-voted that check back and see if it is now too highly rated? That seems hardly worth his time. And pretty much by definition, the guy who voted it from 25 to 26 was happier with the score at 26 than at 25, so at least one person does think it was worth 26.

And what happens as norms change as to what a "good score" is as more comments have more eyeballs and voters looking at them?

Or we could all just take karma beyond "net positive" and "net negative" a whole lot less seriously.

Complaining about a given score and the choices of others certainly isn't likely to go much of anywhere.

Comment author: Eliezer_Yudkowsky 24 August 2010 06:30:57PM 17 points [-]

When I first started watching Blogging Heads discussions featuring Eliezer I would often have moments where I held my breath thinking "Oh god, he can't address that directly without sounding nuts, here comes the abhorrent back-pedaling and waffling". Instead he met it head on with complete honesty

I am so glad that someone notices and appreciates this.

I feel that anyone advocating for public hypocrisy among the SIAI staff is working to disintegrate the organization (even if unintentionally).

Agreed.

Comment author: complexmeme 19 August 2010 02:43:21AM *  1 point [-]

Huh, interesting. I wrote something very similar on my blog a while ago. (That was on cryonics, not existential risk reduction, and it goes on about cryonics specifically. But the point about rhetoric is much the same.)

Anyways, I agree. At the very least, some statements made by smart people (including Yudkowsky) have had the effect of increasing my blanket skepticism in some areas. On the other hand, such statements have me thinking more about the topics in question than I might have otherwise, so maybe that balances out. Then again, I'm more willing to wrestle with my skepticism than most, and I'm still probably a "mediocre rationalist" (to put it in Eliezer's terms).

Comment author: [deleted] 19 August 2010 02:35:43AM *  3 points [-]

It sounds to me like half of the perceived public image problem comes from apparently blurred lines between the SIAI and LessWrong, and between the SIAI and Eliezer himself. These could be real problems - I generally have difficulty explaining one of the three without mentioning the other two - but I'm not sure how significant it is.

The ideal situation would be that people would evaluate SIAI based on its publications, the justification of the research areas, and whether the current and proposed projects satisfy those goals best, are reasonably costed, and are making progress.

Whoever actually evaluates SIAI on those points will find the list of achievements. Individual projects all have detailed proposals and a budget breakdown, since donors can choose to donate directly to one research project or another.

Finally, a large number of those projects are academic papers. If you dig a bit, you'll find that many of these papers are submitted at academic and industry conferences. Hosting the Singularity Summit doesn't hurt either.

It doesn't make sense to downplay a researcher's strange viewpoints if those viewpoints seem valid. Eliezer believes his viewpoint to be valid. LessWrong, a project of his, has a lot of people who agree with his ideas. There are also people who disagree with some of his ideas, but the point is that it shouldn't matter. LessWrong is a project of SIAI, not the organization itself. Support on this website of his ideas should have little to do with SIAI's support of his ideas.

Your points seem to be that claims made by Eliezer and upheld by the SIAI don't appear credible due to insufficient argument, and due to one person's personality. You can argue all you want about how he is viewed. You can debate the published papers' worth. But the two shouldn't be equated. This despite the fact that he's written half of the publications.

Here are the questions (that tie to your post) which I think are worth discussing on public relations, if not the contents of the publications:

  • Do people equate "The views of Eliezer Yudkowsky" with "The views of SIAI"? Do people view the research program or organization as "his" project?
  • Which people, and to what extent?
  • Is this good or bad, and how important is it?

The optimal answer to those questions is the one that leads the most AI researchers to evaluate the most publications with the respect of serious scrutiny and consideration.

I'll repeat that other people have published papers with the SIAI, that their proposals are spelled out, that some papers are presented at academic and industry conferences, and that the SIAI's Singularity Summit hosts speakers who do not agree with all of Eliezer's opinions, who nonetheless associate with the organization by attendance.

Comment author: nonhuman 21 August 2010 03:37:13AM 4 points [-]

I feel it's worth pointing out that just because something should be, doesn't mean it is. You state:

Your points seem to be that claims made by Eliezer and upheld by the SIAI don't appear credible due to insufficient argument, and due to one person's personality. You can argue all you want about how he is viewed. You can debate the published papers' worth. But the two shouldn't be equated.

I agree with the sentiment, but how practical is it? Just because it would be incorrect to equate Eliezer and the SIAI doesn't mean that people won't do it. Perhaps it would be reasonable to say that the people who fail to make the distinction are also the people on whom it's not worth expending the effort trying to explicate the situation, but I suspect that the majority of people will still have a hard time not making that equation, if they even try at all.

The purpose of this article, I would presume to say, is that public relations actually does serve a valid and useful purpose. It is not a wasted effort to ensure that the ideas that one considers true, or at least worthwhile, are presented in the sort of light that encourages people to take them seriously. This is something that I think many people of a more intellectual bent often fail to consider; though some of us might actually invest time and effort into determining for ourselves whether an idea is good or not, I would say the majority do not and instead rely on trusted sources to guide them (with often disastrous results).

Again, it may just be that we don't care about those people (and it's certainly tempting to go that way), but there may be times when quantity of supporters, in addition to quality, could be useful.

Comment author: [deleted] 21 August 2010 06:32:48PM 2 points [-]

We don't disagree on any point that I can see. I was contrasting an ideal way of looking at things (part of what you quoted) from how people might actually see things (my three bullet-point questions).

As much as I enjoy Eliezer's thoughts and respect his work, I'm also of the opinion that one of the tasks the SIAI must work on (and almost certainly is working on) is keeping his research going while making the distinction between the two entities more obvious. But to whom? The research community should be the first and primary target.

Coming back from the Summit, I feel that they're taking decent measures toward this. The most important thing to do is for the other SIAI names to be known. Michael Vassar's is the easiest to get people to hold because of the name of his role, and he was acting as the SIAI face more than Eliezer was. At this point, a dispute would make the SIAI look unstable - they need positive promotion of leadership and idea diversity, more public awareness of their interactions with academia, and that's about it.

Housing a clearly promoted second research program would solve this problem. If only there was enough money, and a second goal which didn't obviously conflict with the first, and the program still fit under the mission statement. I don't know if that is possible. Money aside, I think that it is possible. Decision theoretic research with respect to FAI is just one area of FAI research. Utterly essential, but probably not all there is to do.

Comment author: [deleted] 19 August 2010 02:46:19AM 6 points [-]

To top it off, the SIAI is responsible for getting James Randi's seal of approval on the Singularity being probable. That's not poisoning the meme, not one bit.

Comment author: [deleted] 18 August 2010 03:46:37PM *  8 points [-]

I think a largish fraction of the population has worries about human extinction / the end of the world. Very few associate this with the phrase "existential risk" -- I for one had never heard the term until after I had started reading about the technological singularity and related ideas. Perhaps rebranding of a sort would help you further the cause. Ditto for FAI - I think 'Ethical Artificial Intelligence' would get the idea across well enough and might sound less flakey to certain audiences.

Comment author: josh0 21 August 2010 03:53:41AM 6 points [-]

It may be true that many are worried about 'the end of the world'; however, consider how many of them think that it was predicted by the Mayan calendar to occur on Dec. 21, 2012, and how many actively want it to happen because they believe it will herald the coming of God's Kingdom on Earth, Olam Haba, or whatever.

We could rebrand 'existential risk' as 'end time' and gain vast numbers of followers. But I doubt that would actually be desirable.

I do think that Ethical Artificial Intelligence would strike a better chord with most than Friendly, though. 'Friendly' does sound a bit unserious.

Comment author: zemaj 19 August 2010 10:09:06AM 9 points [-]

"Ethical Artificial Intelligence" sounds great and makes sense without having to know the background of the technological singularity as "Friendly Artificial Intelligence" does. Every time I try to mention FAI to someone without any background on the topic I always have to take two steps back in the conversation and it becomes quickly confusing. I think I could mention Ethical AI and then continue on with whatever point I was making without any kind of background and it would still make the right connections.

I also expect it would appeal to a demographic likely to support the concept as well. People who worry about ethical food, business, healthcare etc... would be likely to worry about existential risk on many levels.

In fact I think I'll just go ahead and start using Ethical AI from now on. I'm sure people in the FAI community would understand what I'm talking about.

Comment author: James_Miller 18 August 2010 03:30:54PM 7 points [-]

Given how superficially insane Eliezer's beliefs seem, he has done a fantastic job of attracting support for his views.

Eliezer is popularizing his beliefs, not directly through his own writings, but by attracting people (such as conference speakers and this comment writer who is currently writing a general-audience book) who promote understanding of issues such as intelligence explosion, unfriendly AI and cryonics.

Eliezer is obviously not neurotypical. The non-neurotypical have a tough time making arguments that emotionally connect. Given that Eliezer has a massive non-comparative advantage in making such arguments we shouldn't expect him to spend his time trying to become slightly better at doing so.

Eliezer might not have won the backing of people such as super-rationalist self-made tech billionaire Peter Thiel had Eliezer devoted less effort to rational arguments.

Comment author: FAWS 18 August 2010 03:37:27PM 1 point [-]

Given that Eliezer has a massive non-comparative advantage in making such arguments we shouldn't expect him to spend his time trying to become slightly better at doing so.

Do you mean comparative disadvantage? Otherwise I can't make sense of what you are trying to say. Not that I'd agree with that anyway, Eliezer is very good rhetorically, and I'm suspicious of psychological diagnoses performed over the internet.

Comment author: James_Miller 18 August 2010 03:44:55PM *  2 points [-]

By "massive non-comparative advantage" I meant he doesn't have a comparative advantage.

I have twice talked with Eliezer in person, seen in person a few of his talks, watched several videos of him talking and for family reasons I have read a huge amount about the non-neurotypical.

Comment author: FAWS 18 August 2010 04:07:54PM 2 points [-]

By "massive non-comparative advantage" I meant he doesn't have a comparative advantage.

??? So you mean he has a massive absolute advantage, but is also so hugely better at other things compared to normal people it's still not worth his time??? Or does that actually mean that he has an absolute advantage of unspecified size, that happens to be very much non-comparative? What someone only vaguely familiar with economic terminology like me might call a "massively non-comparative advantage"?

Comment author: Eliezer_Yudkowsky 18 August 2010 02:28:13PM 17 points [-]

I don't mean to dismiss the points of this post, but all of those points do need to be reinterpreted in light of the fact that I'd rather have a few really good rationalists as allies than a lot of mediocre rationalists who think "oh, cool" and don't do anything about it. Consider me as being systematically concerned with the top 5% rather than the average case. However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.

Comment author: pnrjulius 12 June 2012 03:05:19AM -1 points [-]

We live in a democracy! How can you not be concerned with 95% of the population? They rule you.

If we lived in some sort of meritocratic aristocracy, perhaps then we could focus our efforts on only the smartest 5%.

As it is, it's the 95% who decide what happens in our elections, and it's our elections that decide what rules get made and what projects get funded. The President of the United States could unleash nuclear war at any time. He's not likely to---but he could. And if he did push that button, it's over, for all of us. So we need to be very concerned about who is in charge of that button, and that means we need to be very concerned about the people who elect him.

Right now, 46% of them think the Earth is 6000 years old. This worldview comes with a lot of other anti-rationalist baggage like faith and the Rapture. And it runs our country. Is it just me, or does this seem like a serious problem, one that we should probably be working to fix?

Comment author: multifoliaterose 18 August 2010 04:17:02PM *  6 points [-]

Agree with the points of both ChristianKl and XiXiDu.

As for really good rationalists, I have the impression that even when it comes to them you inadvertently alienate them with higher than usual frequency on account of saying things that sound quite strange.

I think (but am not sure) that you would benefit from spending more time understanding what goes on in neurotypical people's minds. This would carry not only social benefits (which you may no longer need very much at this point) but also epistemological benefits.

However, I do still care about things like propagation velocities because that affects what population size the top 5% is 5% of, for example.

I'm encouraged by this remark.

Comment author: ChristianKl 18 August 2010 03:09:18PM 2 points [-]

If we think existential risk reduction is important, then we should care about whether politicians think that existential risk reduction is a good idea. I don't think that a substantial number of US congressmen are what you consider to be good rationalists.

Comment author: Eliezer_Yudkowsky 18 August 2010 04:28:20PM 12 points [-]

For Congress to implement good policy in this area would be performance vastly exceeding what we've previously seen from them. They called prediction markets terror markets. I expect more of the same, and expect to have little effect on them.

Comment author: Psy-Kosh 18 August 2010 08:58:00PM 9 points [-]

The flipside though is if we can frame the issue in a way that there's no obvious Democrat or Republican position, then we can, as Robin Hanson puts it, "pull the rope sideways".

The very fact that much of the existential risk stuff is "strange sounding" relative to what most people are used to really thinking about in the context of political arguments might thus act as a positive.

Comment author: XiXiDu 18 August 2010 02:58:32PM 9 points [-]

Somewhere you said that you are really happy to be finally able to concentrate directly on the matters you deem important and don't have to raise money anymore. This obviously worked, so you won't have to change anything. But if you ever need to raise more money for a certain project, my question is how much of the money you already get comes from people you would consider mediocre rationalists?

I'm not sure if you expect to ever need a lot of money for a SIAI project, but if you solely rely on those few really good rationalists then you might have a hard time in that case.

People like me will probably always stay on your side, whether or not you tell them they are idiots. But I'm not sure that would be enough in a scenario where donations are important.

Comment author: Larks 17 August 2010 10:11:48PM *  6 points [-]

This page is now the 8th result in a Google search for 'existential risk' and the 4th result for 'singularity existential risk'.

Regardless of the effect SIAI may have had on the public image of existential risk reduction, it seems this is unlikely to be helpful.

Edit: it is now 7th and first, respectively. This is plusungood.

Comment author: jimrandomh 18 August 2010 03:32:02AM 8 points [-]

This is partially because Google gives a ranking boost to things it sees as recent, so it may not stay that well ranked.

Comment author: Larks 18 August 2010 05:00:28AM 0 points [-]

Right & upvoted.

Comment author: multifoliaterose 18 August 2010 04:09:40AM 0 points [-]

Yes, good point.

Comment author: multifoliaterose 17 August 2010 10:15:46PM 1 point [-]

I disagree. I think that my post does a good job of highlighting the fact that public aversion to thinking about existential risk reduction is irrational.

Comment author: Larks 17 August 2010 10:25:23PM *  5 points [-]

The post (as I parse it) has two points:

  • The public are irrational with respect to existential risk
  • Donating to SIAI has negative expected impact on existential risk reduction

The former is fine, but the latter seems more likely to damage SIAI and existential risk reduction. It's not desirable that when someone does their initial Google search, one of the first things they find is infighting and attacks on SIAI as essentially untrustworthy. Rather, they should find actual articles about the singularity, the dangers it poses, and the work being done.

As you so accurately quote Yvain, for the average reader this is not an intelligent critique of the public relations of SingInst. This is 'boo Eliezer!'

Comment author: multifoliaterose 17 August 2010 10:36:21PM 1 point [-]

The former is fine, but the latter seems more likely to damage SIAI and existential risk reduction. It's not desirable that when someone does their initial Google search, one of the first things they find is infighting and attacks on SIAI as essentially untrustworthy. Rather, they should find actual articles about the singularity, the dangers it poses, and the work being done.

I agree that this article is not one of the first that should appear when people Google the singularity or existential risk. I'm somewhat perplexed as to how this happened.

Despite this issue, I think that the benefits of my posting on this topic outweigh the costs. I believe that whether or not humans ultimately avoid global catastrophic risk depends much more on people's willingness to think about the topic than it does on SIAI's reputation. I don't believe that my post will lower readers' interest in thinking about existential risk.

Comment author: Larks 17 August 2010 06:40:47AM 17 points [-]

It must be said that the reason no-one from SingInst has commented here is that they're all busy running the Singularity Summit, a well-run conference full of AGI researchers, the one group that SingInst cares about impressing more than any other. Furthermore, Eliezer's speech was well received by those present.

I'm not sure whether attacking SingInst for poor public relations during the one week when everyone is busy with a massive public relations effort is very ironic or very Machiavellian.

Comment author: rabidchicken 17 August 2010 06:17:31AM *  0 points [-]

Come on... Who does not love being a social outcast? I made a decision when I was about 12 that rather than trying to conform to other people's expectations of me, I was going to do / express support for exactly what I thought made sense, even if something I supported was related to something I could not, and then get to know people who seemed to be making similar decisions. It's arrogant and has numerous flaws, but it has generally worked for me. Social status and popularity are overrated, compared to the benefits of meeting a large number of people you can interact with freely.

Comment author: KrisC 17 August 2010 06:30:07AM 1 point [-]

This works fine as long as you don't find yourself operating within a hierarchy.

Comment author: Jonathan_Graehl 16 August 2010 11:12:05PM 0 points [-]

The discussion reassures me that EY is not, for anyone here, a cult leader.

I haven't evaluated SIAI carefully yet, but they do open themselves up to this sort of attack when they advocate concentrating charitable giving on the marginally most efficient utility generator (up to $1M).

Comment author: rabidchicken 17 August 2010 06:22:31AM 0 points [-]

EY is not a cult leader, he is a Lolcat herder.

Comment author: wedrifid 17 August 2010 06:37:50AM 0 points [-]

You have not behaved like a troll thus far, some of your contributions have been useful. Please don't go down that path now.

Comment author: jsalvatier 20 August 2010 08:15:26PM 1 point [-]

I am confused: his comment reads like a joke; how is that trollish? I smiled.

Comment author: rabidchicken 17 August 2010 09:41:28PM 1 point [-]

That was a useless and stupid thing to say even if I am a troll, my apologies.

Comment author: wedrifid 17 August 2010 05:11:22AM 7 points [-]

I haven't evaluated SIAI carefully yet, but they do open themselves up to this sort of attack when they advocate concentrating charitable giving on the marginally most efficient utility generator (up to $1M).

To not advocate that would seem to set them up for attacks on their understanding of economics.

I suggest "and the SIAI is the marginally most efficient utility generator" is the one that opens them up to attacks. (I'm not saying that they shouldn't make that claim.)

Comment author: ciphergoth 17 August 2010 07:28:17AM 7 points [-]

In a saner world every charity would claim this. Running a charity that you think generates utility less efficiently than some existing charity would be madness.

Comment author: Eliezer_Yudkowsky 18 August 2010 03:16:39PM 4 points [-]

In a sane world where everyone had the same altruistic component of their values, the marginal EU of all utilities would roughly balance up to the cost of discriminating them more closely. I'd have to think about what would happen if everyone had different altruistic components of their values; but if large groups of people had the same values, then there would exist some class of charities that was marginally balanced with respect to those values, and people from that group would expend the cost to pick out a member of that class but then not look too much harder. If everyone who works for a charity is optimistic and claims that their charity alone is the most marginally efficient in the group, that raises the cost of discriminating among them and they will become more marginally unbalanced.

Comment author: ciphergoth 18 August 2010 03:40:41PM 3 points [-]

This more detailed analysis doesn't, I think, detract from my main point: in broad terms, it's not weird that SIAI claims to be the most efficient way to spend altruistically; it's weird that all charities don't claim this.

Comment author: Eliezer_Yudkowsky 18 August 2010 04:26:34PM 1 point [-]

I agree with your main point and was refining it.

Comment author: Vladimir_Nesov 17 August 2010 07:59:32PM *  5 points [-]

Running a charity that you think generates utility less efficiently than some existing charity would be madness.

Many charities could have close marginal worth, and rational allocation of resources would keep them that way. A charity that is less efficient could still perform a useful function, merely needing a decrease in funding, and not disbanding.

And you can't have statically super-efficient charities either, because marginal worth decreases with more funding. For example, a baseline SIAI yearly budget of a hundred million dollars might drive the marginal efficiency of a dollar donation lower than that of other causes.

Comment author: Larks 17 August 2010 06:22:19AM *  3 points [-]

If they/we didn't think SIAI was the most efficient utility generator and didn't disband & work for Givewell or whatever, they'd be guilty of failing to act as utility maximisers.

The belief that SIAI is the best utility generator may be incorrect, but you can't criticise someone from SIAI for holding it, beyond criticising them for being at SIAI, a criticism that no-one seems to make.

Comment author: wedrifid 17 August 2010 06:32:06AM 3 points [-]

If they/we didn't think SIAI was the most efficient utility generator and didn't disband & work for Givewell or whatever, they'd be guilty of failing to act as utility maximisers.

Technically not true. SIAI could actually be the optimal way for them specifically to generate utility while at the same time not being the optimal place for people to donate. For example, they could use their efforts to divert charitable donations from even worse causes to themselves and then pass the money on to Givewell.

Comment author: Larks 17 August 2010 06:43:40AM 1 point [-]

I think that would be illegal, though I'm not as familiar with US rules in this regard as with UK ones. More importantly, that argument seems to rely on an unfairly expansive interpretation of what it is to work for SIAI: diverting money away from SIAI doesn't count.

Comment author: multifoliaterose 17 August 2010 06:26:55AM 0 points [-]

Comment author: Jonathan_Graehl 17 August 2010 05:33:52AM *  1 point [-]

Sure; that's more or less what I meant. Even calling these bids by SIAI competitors to in fact offer better marginal-utility efficiency "attacks" was a little over-dramatic on my part.

I have only one objection to the economic argument: "assume there is already sufficient diversification in improving or maintaining human progress; then you should only give to SIAI" is a simplification that only works if the majority aren't convinced by that argument. I guess there's practically speaking no danger of that happening.

In other words, SIAI's claim can only be plausible if they promise to adjust their allocation of effort to ensure some diversity, in the unlikely event that they end up receiving humongous amounts of money (and I'm sure they'll say that they will).

By the way, I don't mean to say that an individual diversifying their charitable spending, or global diversity in charitable spending, is an end in itself. I just feel comforted that some of it is the kind that reduces overall risk (in case the perceived-most-efficient group turns out in retrospect to have a blind spot due to politics, group-think, laziness, or any number of human weaknesses).

Comment author: Jonathan_Graehl 16 August 2010 08:58:49PM 10 points [-]

When I'm talking to someone I respect (and want to admire me), I definitely feel an urge to distance myself from EY. I feel like I'm biting a social bullet in order to advocate for SIAI-like beliefs or action.

What's more, this casts a shadow over my actual beliefs.

This is in spite of the fact that I love EY's writing, and actually enjoy his fearless geeky humor ("hit by a meteorite" is indeed more fun than the conventional "hit by a bus").

The fear of being represented by EY is mostly due to what he's saying, not how he's saying it. That is, even if he were always dignified and measured, he'd catch nearly as much flak. If he'd avoided certain topics entirely, that would have made a significant difference, but on the other hand, he's effectively counter-signaled that he's fully honest and uncensored in public (of course he is probably not, exactly), which I think is also valuable.

I think EY can win by saying enough true things, convincingly, that smart people will be persuaded that he's credible. It's perhaps true that better PR will speed the process - by enough for it to be worth it? That's up to him.

The comments in this diavlog with Scott Aaronson - while some are by obvious axe-grinders - are critical of EY's manner. People appear to hate nothing more than (what they see as) undeserved confidence. Who knows how prevalent this adverse reaction to EY is, since the set of commenters is self-selecting.

People who are floundering in a debate with EY (e.g. Jaron Lanier) seem to think they can bank on a "you crazy low-status sci-fi nerd" rebuttal to EY. This can score huge with lazy or unintellectual people if it's allowed to succeed.

Comment author: Eneasz 24 August 2010 05:52:00PM 1 point [-]

This can score huge with lazy or unintellectual people if it's allowed to succeed.

What is the likelihood that lazy or unintellectual people would have ever done anything to reduce existential risk regardless of any particular advocate for/against?

Comment author: JoshuaZ 24 August 2010 06:00:52PM *  3 points [-]

They might give money to the people who will actually do the work of reducing existential risk. I'd also note that even among people who are generally intellectuals, or at least think of themselves as intellectuals, this sort of argument can, if phrased in the right way, still have an impact; sci-fi is still a very low-status association for many of those people.

Comment author: Jonathan_Graehl 24 August 2010 06:51:39PM 1 point [-]

I think Eneasz is right, but I agree with you that we should care about the support of ordinary people and those who choose to specialize elsewhere.

I was thinking also of the motivational effect of average people's (dis)approval on the gifted. Sure, many intellectual milestones were first reached by those who either needed less to be accepted, or drew their in-group/out-group boundary more tightly around themselves, but social pressure matters.

Comment author: thomblake 16 August 2010 02:17:13PM 3 points [-]

This post reminds me of the talk at this year's H+ summit by Robert Tercek. Amongst other things, he was pointing out how the PR battle over transhumanist issues was already lost in popular culture, and that the transhumanists were not helping matters by putting people with very freaky ideas in the spotlight.

I wonder if there are analogous concerns here.

Comment author: mranissimov 16 August 2010 07:24:11AM 3 points [-]

Just to check... have I said any "naughty" things analogous to the Eliezer quote above?

Comment author: wedrifid 16 August 2010 07:57:52AM 1 point [-]

Not to my knowledge... but Eliezer makes his words far more prominent than you do.

Comment author: mranissimov 04 September 2010 03:24:10AM 1 point [-]

Only on LessWrong. In the wider world, more people actually read my words!

Comment author: JamesAndrix 16 August 2010 03:02:28AM 12 points [-]

Warning: Shameless Self Promotion ahead

Perhaps part of the difficulty here is the attempt to spur a wide rationalist community on the same site frequented by rationalists with strong obscure positions on obscure topics.

Early in Less Wrong's history, discussion of FAI was discouraged so that it didn't just become a site about FAI and the singularity, but a forum about human rationality more generally.

I can't track down an article (or articles) from EY about how thinking about AI can be too absorbing, and how in order to properly create a community, you have to truly put aside the ulterior motive of advancing FAI research.

It might be wise for us to again deliberately shift our focus away from FAI and onto human rationality and how it can be applied more widely (say to science in general.)

Enter the SSP: For months now I've been brainstorming a community to educate people on the creation and use of 3D printers, with the eventual goal of making much better 3D printers. So this is a different big complicated problem with a potential high payoff, and it ties into many fields, provides tangible previews of the singularity, can benefit from the involvement of people with almost any skill set, and seems to be much safer than advancing AI, nanotech, or genetic engineering.

I had already intended to introduce rationality concepts where applicable and link a lot to Less Wrong. But if a few LWers were willing to help, it could become a standalone community of people committed to thinking clearly about complex technical and social problems, with a latent obsession with 3D printers.

Comment author: komponisto 16 August 2010 12:38:38AM *  32 points [-]

I'll state my own experience and perception, since it seems to be different from that of others, as evidenced in both the post and the comments. Take it for what it's worth; maybe it's rare enough to be disregarded.

The first time I heard about SIAI -- which was possibly the first time I had heard the word "singularity" in the technological sense -- was whenever I first looked at the "About" page on Overcoming Bias, sometime in late 2006 or early 2007, where it was listed as Eliezer Yudkowsky's employer. To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.

Now, when someone has made that kind of demonstration of rationality, I just don't have much problem listening to whatever they have to say, regardless of how "outlandish" it may seem in the context of most human discourse. Maybe I'm exceptional in this respect, but I've never been under the impression that only "normal-sounding" things can be true or important. At any rate, I've certainly never been under that impression to such an extent that I would be willing to dismiss claims made by the author of The Simple Truth and A Technical Explanation of a Technical Explanation, someone who understands things like the gene-centered view of evolution and why MWI exemplifies rather than violates Occam's Razor, in the context of his own professional vocation!

I really don't understand what the difference is between me and the "smart people" that you (and XiXiDu) know. In fact maybe they should be more inclined to listen to EY and SIAI; after all, they probably grew up reading science fiction, in households where mild existential risks like global warming were taken seriously. Are they just not as smart as me? Am I unusually susceptible to following leaders and joining cults? (Don't think so.) Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims? (But why wouldn't they as well, if they're "smart"?)

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

Comment author: hegemonicon 17 August 2010 08:34:31PM *  30 points [-]

I STRONGLY suspect that there is an enormous gulf between finding out things on your own and being directed to them by a peer.

When you find something on your own (existential risk, cryonics, whatever), you get to bask in your own fortuitousness, and congratulate yourself on being smart enough to understand its value. You get a boost in (perceived) status, because not only do you know more than you did before, you know things other people don't know.

But when someone else has to direct you to it, it's much less positive. When you tell someone about existential risk or cryonics or whatever, the subtext is "look, you weren't able to figure this out by yourself, let me help you". No matter how nicely you phrase it, there's going to be resistance, because it comes with a drop in status - which they can avoid by not accepting whatever you're selling. It might actually be WORSE with smart people who believe that they have most things "figured out".

Comment author: multifoliaterose 16 August 2010 09:18:21AM *  9 points [-]

Thanks for your thoughtful comment.

To make this story short, the whole reason I became interested in this topic in the first place was because I was impressed by EY -- specifically his writings on rationality on OB (now known as the Sequences here on LW). Now of course most of those ideas were hardly original with him (indeed many times I had the feeling he was stating the obvious, albeit in a refreshing, enjoyable way) but the fact that he was able to write them down in such a clear, systematic, and readable fashion showed that he understood them thoroughly. This was clearly somebody who knew how to think.

I know some people who have had this sort of experience. My claim is not that Eliezer has uniformly repelled people from thinking about existential risk. My claim is that on average Eliezer's outlandish claims repel people from thinking about existential risk.

Do I simply have an unusual personality that makes me willing to listen to strange-sounding claims?

My guess would be that this is it. I'm the same way.

(But why wouldn't they as well, if they're "smart"?)

It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality, or what the sign of that correlation is. People who are willing to listen to strange-sounding claims statistically end up hanging out with UFO conspiracy theorists, New Age people, etc. more often than usual. Statistically, people who make strange-sounding claims are not worth listening to. Too much willingness to listen to strange-sounding claims can easily result in one wasting large portions of one's life.

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

See my remarks above.

Comment author: komponisto 16 August 2010 11:10:39AM 7 points [-]

Thank you for your thoughtful reply; although, as will be evident, I'm not quite sure I actually got the point across.

(But why wouldn't they as well, if they're "smart"?)

It's not clear that willingness to listen to strange-sounding claims exhibits correlation with instrumental rationality,

I didn't realize at all that by "smart" you meant "instrumentally rational"; I was thinking rather more literally in terms of IQ. And I would indeed expect IQ to correlate positively with what you might call openness. More precisely, although I would expect openness to be only weak evidence of high IQ, I would expect high IQ to be more significant evidence of openness.

People who are willing to listen to strange-sounding claims statistically end up hanging out with UFO conspiracy theorists, New Age people, etc...

Why can't they just read the darn sequences and pick up on the fact that these people are worth listening to?

See my remarks above.

The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics. Yes, of course, if all you know about a person is that they make strange claims, then you should by default assume they're a UFO/New Age type. But I submit that the fact that Eliezer has written things like these decisively entitles him to a pass on that particular inference, and anyone who doesn't grant it to him just isn't very discriminating.

Comment author: multifoliaterose 16 August 2010 12:23:40PM 5 points [-]

One more point - though I could immediately recognize that there's something important to some of what Eliezer says, the fact that he makes outlandish claims did make me take longer to get around to thinking seriously about existential risk. This is because of a factor that I mention in my post which I quote below.

There is also a social effect which compounds the issue I just mentioned: it makes even people who are not directly influenced by that issue less likely to think seriously about existential risk, on account of their desire to avoid being perceived as associated with claims that people find uncredible.

I'm not proud that I'm so influenced, but I'm only human. I find it very plausible that there are others like me.

Comment author: multifoliaterose 16 August 2010 12:04:13PM *  11 points [-]

And I would indeed expect IQ to correlate positively with what you might call openness.

My own experience is that the correlation is not very high. Most of the people who I've met who are as smart as me (e.g. in the sense of having high IQ) are not nearly as open as I am.

I didn't realize at all that by "smart" you meant "instrumentally rational";

I did not intend to equate intelligence with instrumental rationality. The reason why I mentioned instrumental rationality is that ultimately what matters is to get people with high instrumental rationality (whether they're open minded or not) interested in existential risk.

My point is that people who are closed-minded should not be barred from consideration as potentially useful existential risk researchers; although people are being irrational to dismiss Eliezer as fast as they do, that doesn't mean they're holistically irrational. My own experience has been that my openness has both benefits and drawbacks.

The point of my comment was that reading his writings reveals a huge difference between Eliezer and UFO conspiracy theorists, a difference that should be more than noticeable to anyone with an IQ high enough to be in graduate school in mathematics.

Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists - they recognize that Eliezer is intellectually sophisticated. They're still biased to dismiss him out of hand. See bentram's comment.

Edit: You might wonder where the bias to dismiss Eliezer comes from. I think it comes mostly from conformity, which is, sadly, very high even among very smart people.

Comment author: wedrifid 16 August 2010 12:52:27PM 0 points [-]

Edit: You might wonder where the bias to dismiss Eliezer comes from. I think it comes mostly from conformity, which is, sadly, very high even among very smart people.

I would perhaps expand 'conformity' to include neighbouring social factors - in-group/outgroup, personal affiliation/alliances, territorialism, etc.

Comment author: komponisto 16 August 2010 12:33:25PM *  4 points [-]

My point is that people who are closed minded should not be barred from consideration as potentially useful existential risk researchers

You may be right about this; perhaps Eliezer should in fact work on his PR skills. At the same time, we shouldn't underestimate the difficulty of "recruiting" folks who are inclined to be conformists; unless there's a major change in the general sanity level of the population, x-risk talk is inevitably going to sound "weird".

Math grad students can see a huge difference between Eliezer and UFO conspiracy theorists - they recognize that Eliezer's intellectually sophisticated. They're still biased to dismiss him out of hand

This is a problem; no question about it.

Comment author: multifoliaterose 16 August 2010 12:39:14PM *  6 points [-]

At the same time we shouldn't underestimate the difficulty of "recruiting" folks who are inclined to be conformists; unless there's a major change in the general sanity level of the population, x-risk talk is inevitably going to sound "weird".

I agree with this. It's all a matter of degree. Maybe at present one has to be in the top 1% of the population in nonconformity to be interested in existential risk and with better PR one could reduce the level of nonconformity required to the top 5% level.

(I don't know whether these numbers are right, but this is the sort of thing that I have in mind - I find it very likely that there are people who are nonconformist enough to potentially be interested in existential risk but too conformist to take it seriously unless the people who are involved seem highly credible.)

Comment author: ciphergoth 16 August 2010 10:48:59AM 12 points [-]

For my part, I keep wondering how long it's going to be before someone throws his "If you don't sign up your kids for cryonics then you are a lousy parent" remark at me, to which I will only be able to say that even he says stupid things sometimes.

(Yes, I'd encourage anyone to sign their kids up for cryonics; but not doing so is an extremely poor predictor of whether or not you treat your kids well in other ways, which is what the term should mean by any reasonable standard).

Comment author: James_Miller 18 August 2010 02:55:30PM *  8 points [-]

Given Eliezer's belief about the probability of cryonics working and belief that others should understand that cryonics has a high probability of working, his statement that "If you don't sign up your kids for cryonics then you are a lousy parent" is not just correct but trivial.

One of the reasons I so enjoy reading Less Wrong is Eliezer's willingness to accept and announce the logical consequences of his beliefs.

Comment author: pcm 21 August 2010 08:09:32PM 0 points [-]

If "you" refers to a typical parent in the US, then it's sensible (but hardly trivial). But it could easily be interpreted as referring to parents who are poor enough that they should give higher priority to buying a safer car, moving to a neighborhood with a lower crime rate, etc.

Eliezer's writings about cryonics may help him attract more highly rational people to work with him, but will probably reduce his effectiveness at warning people working on other AGI projects of the risks. I think he has more potential to reduce existential risk via the latter approach.

Comment author: ciphergoth 18 August 2010 03:00:15PM *  4 points [-]

There is a huge gap between "you are doing your kids a great disservice" and "you are a lousy parent": "X is an act of a lousy parent" to me implies that it is a good predictor of other lousy parent acts.

EDIT: BTW I should make clear that I plan to try to persuade some of my friends to sign up themselves and both their kids for cryonics, so I do have skin in the game...

Comment author: FAWS 18 August 2010 03:04:41PM *  7 points [-]

I'm not completely sure I disagree with that, but do you have the same attitude towards parents who try to heal treatable cancer with prayer and nothing else, but are otherwise great parents?

Comment author: ciphergoth 18 August 2010 03:31:11PM 4 points [-]

I think that would be a more effective predictor of other forms of lousiness: it means you're happy to ignore the advice of scientific authority in favour of what your preacher or your own mad beliefs tell you, which can get you into trouble in lots of other ways.

That said, this is a good counter, and it does make me wonder if I'm drawing the right line. For one thing, what do you count as a single act? If you don't get cryonics for your first child, it's a good predictor that you won't for your second either, so does that count? So I think another aspect of it is that to count, something has to be unusually bad. If you don't get your kids vaccinated in the UK in 2010, that's lousy parenting, but if absolutely everyone you ever meet thinks that vaccines are the work of the devil, then "lousy" seems too strong a term for going along with it.

Comment author: shokwave 23 November 2010 12:56:56AM 0 points [-]

If you don't get your kids vaccinated in the UK in 2010, that's lousy parenting, but if absolutely everyone you ever meet thinks that vaccines are the work of the devil, then "lousy" seems too strong a term for going along with it.

True. However, if absolutely everyone you ever meet thinks vaccines are evil except for one doctor and that doctor has science on his side, and you choose not to get your kids vaccinated because of "going along with" social pressures, then "lousy parent" is exactly the right strength of term. And that's really the case here. Not absolutely everyone thinks cryonics is wrong or misguided. And if you can't sort the bullshit and wishful thinking from the science, then you're doing your child a disservice.

Comment author: multifoliaterose 16 August 2010 11:47:35AM 4 points [-]

Yes, this is the sort of thing that I had in mind in making my cryonics post - as I said in the revised version of my post, I have a sense that a portion of the Less Wrong community has the attitude that cryonics is "moral" in some sort of comprehensive sense.

Comment author: James_Miller 18 August 2010 03:00:47PM 5 points [-]

If you believe that thousands of people die unnecessarily every single day then of course you think cryonics is a moral issue.

If people in the future come to believe that we should have known that cryonics would probably work, then they might well conclude that our failure to at least offer cryonics to terminally ill children was (and yes, I know what I'm about to write sounds extreme and will be off-putting to many) Nazi-level evil.

Comment author: multifoliaterose 18 August 2010 03:42:52PM 1 point [-]

I've thought carefully about this matter and believe that there's good reason to doubt your prediction. I will detail my thoughts on this matter in a later top level post.

Comment author: James_Miller 18 August 2010 03:50:20PM 0 points [-]

I would like the opportunity to make timely comments on such a post, but I will be traveling until Aug 27th and so request you don't post before then.

Comment author: multifoliaterose 18 August 2010 03:51:08PM 0 points [-]

Sure, sounds good.

Comment author: katydee 16 August 2010 10:31:46AM 9 points [-]

Also, keep in mind that reading the sequences requires nontrivial effort-- effort which even moderately skeptical people might be unwilling to expend. Hopefully Eliezer's upcoming rationality book will solve some of that problem, though. After all, even if it contains largely the same content, people are generally much more willing to read one book rather than hundreds of articles.

Comment author: MaoShan 15 August 2010 08:26:59PM 3 points [-]

Aside from the body of the article, which is just "common" sense, given the author's opinion against the current policies of SIAI, I found the final paragraph interesting because I also exhibit "an unusually high abundance of the traits associated with Aspergers Syndrome." Perhaps possessing that group of traits gives one a predilection to seriously consider existential risk reduction by being socially detached enough to see the bigger picture. Perhaps LW is somewhat homogeneously populated with this "certain kind" of people. So, how do we gain credibility with normal people?

Comment author: Vladimir_M 15 August 2010 06:58:39PM *  46 points [-]

I am a relative newbie commenter here, and my interest in this site has so far been limited to using it as a fun forum where it's possible to discuss all kinds of sundry topics with exceptionally smart people. However, I have read a large part of the background sequences, and I'm familiar with the main issues of concern here, so even though it might sound impertinent coming from someone without any status in this community, I can't resist commenting on this article.

To put it bluntly, I think the main point of the article is, if anything, an understatement. Let me speak from personal experience. From the perspective of this community, I am the sort of person who should be exceptionally easy to get interested and won over to its cause, considering both my intellectual background and my extreme openness to contrarian viewpoints and skepticism towards official academic respectability as a criterion of truth and intellectual soundness. Yet, to be honest, even though I find a lot of the writing and discussion here extremely interesting, and the writings of Yudkowsky (in addition to others such as Bostrom, Hanson, etc.) have convinced me that technology-related existential risks should be taken much more seriously than they presently are, I still keep encountering things in this community that set off various red flags, which are undoubtedly taken by many people as a sign of weirdness and crackpottery, and thus alienate huge numbers of potential quality audience.

Probably the worst such example I've seen was the recent disturbance in which Roko was subjected to abuse that made him leave. When I read the subsequent discussions, it surprised me that virtually nobody here appears to be aware of what an extreme PR disaster it was. Honestly, for someone unfamiliar with this website who has read about that episode, it would be irrational not to conclude that there's some loony cult thing going on here, unless he's also presented with enormous amounts of evidence to the contrary in the form of a selection of the best stuff that this site has to offer. After these events, I myself wondered whether I want to be associated with an outlet where such things happen, even just as an occasional commenter. (And not to even mention that Roko's departure is an enormous PR loss in its own right, in that he was one of the few people here who knew how to write in a way that's interesting and appealing to people who aren't hard-core insiders.)

Even besides this major PR fail, I see many statements and arguments here that may be true, or at least not outright unreasonable, but should definitely be worded more cautiously and diplomatically if they're given openly for the whole world to see. I'm not going to get into details of concrete examples -- in particular, I do not concur unconditionally with any of the specific complaints from the above article -- but I really can't help but conclude that lots of people here, including some of the most prominent individuals, seem oblivious as to how broader audiences, even all kinds of very smart, knowledgeable, and open-minded people, will perceive what they write and say. If you want to have a closed inner circle where specific background knowledge and attitudes can be presumed, that's fine -- but if you set up a large website attracting lots of visitors and participants to propagate your ideas, you have to follow sound PR principles, or otherwise its effect may well end up being counter-productive.

Comment author: prase 16 August 2010 04:01:47PM 21 points [-]

I agree completely. I still read Less Wrong because I am a relatively long-time reader, and thus I know that most of the people here are sane. Otherwise, I would conclude that there is something crankish going on here. Still, the Roko affair caused me to significantly lower the probability I assign to SIAI's success and forced me to seriously consider the hypothesis that Eliezer Yudkowsky has gone crazy.

By the way, I have a slightly disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality, as the blog's header proudly states; instead the posts often discuss a relatively narrow list of topics which are only tangentially related to rationality, e.g. cryonics, AI stuff, evolutionary psychology, and Newcomb-like scenarios.

Comment author: Morendil 16 August 2010 04:26:50PM 4 points [-]

By the way, I have a little bit disturbing feeling that too little of the newer material here is actually devoted to refining the art of human rationality

Part of that mission is to help people overcome the absurdity heuristic, and to help them think carefully about topics that normally trigger a knee-jerk reflex of dismissal on spurious grounds; it is in this sense that cryonics and the like are more than tangentially related to rationality.

I do agree with you that too much of the newer material keeps returning to those few habitual topics that are "superstimuli" for the heuristic. This perhaps prevents us from reaching out to newer people as effectively as we could. (Then again, as LW regulars we are biased in that we mostly look at what gets posted, when what may matter more for attracting and keeping new readers is what gets promoted.)

A site like YouAreNotSoSmart may be more effective in introducing these ideas to newcomers, to the extent that it mostly deals with run-of-the-mill topics. What makes LW valuable which YANSS lacks is constructive advice for becoming less wrong.

Comment author: prase 16 August 2010 05:15:48PM 1 point [-]

Thanks for the link; I hadn't known about YANSS.

As for overcoming the absurdity heuristic, it would be more helpful to illustrate its inappropriateness (is this a real word?) with thoughts which are seemingly absurd while having a lot of data proving them right, rather than with predictions like the Singularity, which are mostly based on ... just different heuristics.

Comment author: Kevin 16 August 2010 10:01:47AM *  9 points [-]

What are the scenarios where someone unfamiliar with this website would hear about Roko's deleted post?

I suppose it could be written about dramatically (because it was dramatic!) but I don't think anyone is going to publish such an account. It was bad from the perspective of most LWers -- a heuristic against censorship is a good heuristic.

This whole thing is ultimately a meta discussion about moderation policy. Why should this discussion about banned topics be that much more interesting than a post on Hacker News that is marked as dead? Hacker News generally doesn't allow discussion of why stories were marked dead. The moderators are anonymous and have unquestioned authority.

If Less Wrong had a mark as dead function (on HN unregistered users don't see dead stories, but registered users can opt-in to see them), I suspect Eliezer would have killed Roko's post instead of deleting it to avoid the concerns of censorship, but no one has written that LW feature yet.

As a solid example of what a not-PR disaster it was, I doubt that anyone at the Singularity Summit who isn't a regular Less Wrong reader (the majority of attendees) has heard that Eliezer deleted a post. It's just not the kind of thing that actually makes a PR disaster... honestly, if this were a PR issue it might be a net positive, because it would lead some people to hear of Less Wrong who otherwise never would have. Please don't take that as a reason to make this a PR issue.

Eliezer succeeded in the sense that it is very unlikely that people in the future on Less Wrong are going to make stupid emotionally abhorrent posts about weird decision theory torture scenarios. He failed in that he could have handled the situation better.

If anyone would like to continue talking about Less Wrong moderation policy, the place to talk about it is the Meta Thread (though you'd probably want to make a new one (good for +[20,50] karma!) instead of discussing it in an out of season thread)

Comment author: homunq 31 August 2010 03:37:26PM 6 points [-]

As someone who had over 20 points of karma obliterated for reasons I don't fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial. I still don't really know what happened. Of course I have vague theories, and I've received a terse and unhelpful response from EY (a link to a horror story about a "riddle" which kills - a good story which I simply don't accept as a useful parable of reality), but nothing clear. I do not think that I have anything of outstanding value to offer this community, but I suspect that Roko, little I, and the half-dozen others like us who probably exist are a net loss to the community if driven away, especially if not being seen as cultlike is valuable.

Comment author: Airedale 31 August 2010 05:49:37PM *  3 points [-]

As someone who had over 20 points of karma obliterated for reasons I don't fully understand, for having posted something which apparently strayed too close to a Roko post which I never read in its full version, I can attest that further and broader discussion of the moderation policy would be beneficial.

I believe you lost 20 karma because you had 2 net downvotes on your post at the time it was deleted (and those votes still affect your total karma, although the post cannot be further upvoted or downvoted). The loss of karma did not result directly from the deletion of the post, except for the fact that the deletion froze the post’s karma at the level it was at when it was deleted.

I only looked briefly at your post, don't remember very much about it, and am only one reader here, but from what I recall, your post did not seem so obviously good that it would have recovered from those two downvotes. Indeed, my impression is that it's more probable that, if the post had been left up longer, it would have been even more severely downvoted than it was at the time of deletion, as is the case with many people's first posts. I'm not very confident about that, but there certainly would have been that risk.

All that being said, I can understand if you would rather have taken the risk of an even greater hit to karma if it would have meant that people were able to read and comment on your post. I can also sympathize with your desire for a clearer moderation policy, although unless EY chose to participate in the discussion, I don’t think clearer standards would emerge, because it’s ultimately EY’s call whether to delete a post or comment. (I think there are a couple others with moderation powers, but it’s my understanding that they would not independently delete a non-troll/spam post).

Comment author: homunq 01 September 2010 12:58:19PM 3 points [-]

I think it was 30 karma points (3 net downvotes), though I'm not sure. And I believe that it is entirely possible that some of those downvotes (more than 3, because I had at least 3 upvotes) were for alleged danger, not for lack of quality. Most importantly, if the post hadn't been deleted, I could have read the comments which presumably would have given me some indication of the reason for those downvotes.
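For concreteness, the karma arithmetic the two commenters above are disputing can be sketched as a toy model. This is only an illustration of the 10-points-per-vote convention they are both inferring from the numbers; the function name is hypothetical and this is not LW's actual implementation.

```python
# Toy model of the karma arithmetic inferred in the thread above.
# Assumption (not LW's actual code): each net vote on a top-level post
# moves the author's karma by 10 points, and deleting a post freezes
# its score rather than refunding it.

POST_VOTE_WEIGHT = 10

def author_karma_delta(upvotes: int, downvotes: int) -> int:
    """Karma change the author sees from votes on one top-level post."""
    return (upvotes - downvotes) * POST_VOTE_WEIGHT

# Airedale's reading: 2 net downvotes means a 20-point karma loss.
print(author_karma_delta(upvotes=0, downvotes=2))  # -20
# homunq's reading: at least 3 upvotes against 6 downvotes, a 30-point loss.
print(author_karma_delta(upvotes=3, downvotes=6))  # -30
```

Under this model the two readings differ only in how many votes of each sign the deleted post had accumulated when its score was frozen.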

Comment author: Will_Newsome 16 August 2010 09:48:34AM 2 points [-]

Looking at my own posts I see a lot of this problem; that is, the problem of addressing only far too small an audience. Thank you for pointing it out.

Comment author: [deleted] 15 August 2010 07:19:48PM 11 points [-]

Agreed.

One good sign here is that LW, unlike most other non-mainstream organizations, doesn't really function like a cult. Once one person starts being critical, critics start coming out of the woodwork. I have my doubts about this place sometimes too, but it has a high density of knowledgeable and open-minded people, and I think it has a better chance than anyone of actually acknowledging and benefiting from criticism.

I've tended to overlook the weirder stuff around here, like the Roko feud -- it got filed under "That's confusing and doesn't make sense" rather than "That's an outrage." But maybe it would be more constructive to change that attitude.

Comment author: timtyler 17 August 2010 05:52:40PM *  1 point [-]

Singularitarianism, transhumanism, cryonics, etc. probably qualify as cults under at least some of the meanings of the term: http://en.wikipedia.org/wiki/Cult. Cults do not necessarily lack critics.

Comment author: WrongBot 17 August 2010 06:37:37PM 2 points [-]

The Wikipedia page on Cult Checklists includes seven independent sets of criteria for cult classification, provided by anti-cult activists who have strong incentives to cast as wide a net as possible. Singularitarianism, transhumanism, and cryonics fit none of those lists. In most cases, it isn't even close.

Comment author: thomblake 17 August 2010 07:04:24PM *  12 points [-]

I disagree with your assessment. Let's just look at LW for starters.

Eileen Barker:

  1. It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially - note for example the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.
  2. Huge portions of the views of reality of many people here have been shaped by this community, and Eliezer's posts in particular; many of those people cannot understand the math or argumentation involved but trust Eliezer's conclusions nonetheless.
  3. Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.
  4. Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.
  5. Nope. Though some would credit Eliezer with trying to become or create God.
  6. Obviously. Less Wrong is quite focused on rationality (though that should not be odd) and Eliezer is rather... driven in his own overarching goal.

Based on that, I think Eileen Barker's list would have us believe LW is a likely cult.

Shirley Harrison:

  1. I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.
  2. While 'revealed' is not necessarily accurate in some senses, the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.
  3. Nope
  4. Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.
  5. This one is questionable. But surely Eliezer is trying the advanced technique of sharing part of his power so that we will begin to see the world the way he does.
  6. There is volunteer effort at LW, and posts on LW are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.
  7. No sign of this
  8. "Exclusivity - 'we are right and everyone else is wrong'". Very yes.

Based on that, I think Shirley Harrison's list would have us believe LW is a likely cult.

Similar analysis using the other lists is left as an exercise for the reader.

Comment author: Perplexed 18 November 2010 07:10:36PM *  2 points [-]

the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

I have to disagree that this "smugness" even remotely reaches the level that is characteristic of a cult.

As someone who has frequently expressed disagreement with the "doctrine" here, I have occasionally encountered both reactions that you mention. But those sporadic reactions are not much of a barrier to criticism - any critic who persists here will eventually be engaged intelligently and respectfully, assuming that the critic tries to achieve a modicum of respect and intelligence on his own part. Furthermore, if the critic really engages with what his interlocutors here are saying, he will receive enough upvotes to more than repair the initial damage to his karma.

Comment author: David_Gerard 18 November 2010 09:16:46PM *  2 points [-]

Yes. LessWrong is not in fact hidebound by groupthink. I have lots of disagreement with the standard LessWrong belief cluster, but I get upvotes if I bother to write well, explain my objections clearly and show with my reference links that I have some understanding of what I'm objecting to. So the moderation system - "vote up things you want more of" - works really well, and I like the comments here.

This has also helped me control my unfortunate case of asshole personality disorder when I see someone elsewhere being wrong on the Internet. It's amazing what you can get away with if you show your references.

Comment author: Zvi 31 August 2010 09:01:09PM 5 points [-]

I found this amusing because by those standards, cults are everywhere. For example, I run a professional Magic: The Gathering team and am pretty sure I'm not a cult leader. Although that does sound kind of neat. Observe:

Eileen Barker:

  1. When events are close we spend a lot of time socially separate from others so as to develop and protect our research. On occasion 'Magic colonies' form for a few weeks. It's not substantially less isolating than what SIAI does. Check.
  2. I have imparted huge amounts of belief about a large subset of our world, albeit a smaller one than Eliezer is working on. Partial check.
  3. I make reasonably important decisions for my teammates (on the level of the cryonics decision, if cryonics isn't worthwhile) and do what I need to do to make sure they follow them far more than they would without me. Check.
  4. We identify other teams as 'them' reasonably often, and certain other groups are certainly viewed as the enemy. Check.
  5. Nope; even fainter argument than for Eliezer.
  6. Again, yes, obviously.

Shirley Harrison:

  1. I claim a special mission that I am uniquely qualified to fulfill. Not as important a one, but still. Check.
  2. My writings count at least as much as the Sequences. Check.
  3. Not intentionally, but often new recruits have little idea what to expect. Check plus.
  4. A totalitarian rules structure, and those who game too much often alienate friends and family. I've seen it many times, and it's far less of a cheat than saying that you'll be alienated from them when they are all dead and you're not because you got frozen. Check.
  5. I make people believe what I want with the exact same techniques we use here. If anything, I'm willing to use slightly darker arts. Check.
  6. We make the lower-level people do the grunt work, sure. Check.
  7. Based on some of the deals I've made, someone looking to demonize could make a weak claim. Check plus.
  8. Exclusivity. In spades. Check.

I'd also note that the exercise left to the reader is much harder, because the other checklists are far harder to fudge.

Comment author: WrongBot 17 August 2010 08:25:15PM *  14 points [-]

On Eileen Barker:

Much like in 2 above, many people have chosen to sign up for cryonics based on advice from the likes of Eliezer and Robin; indeed, Eliezer has advised that anyone not smart enough to do the math should just trust him on this.

I believe that most LW posters are not signed up for cryonics (myself included), and there is substantial disagreement about whether it's a good idea. And that disagreement has been well received by the "cult", judging by the karma scores involved.

Several us/them distinctions have been made and are not open for discussion. For example, theism is a common whipping-boy, and posts discussing the virtues of theism are generally not welcome.

Theism has been discussed. It is wrong. But Robert Aumann's work is still considered very important; theists are hardly dismissed as "satanic," to use Barker's word.

Of Barker's criteria, 2-4 of 6 apply to the LessWrong community, and only one ("Leaders and movements who are unequivocally focused on achieving a certain goal") applies strongly.


On Shirley Harrison:

I'm not sure if 'from above' qualifies, but Eliezer thinks he has a special mission that he is uniquely qualified to fulfill.

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

While 'revealed' is not necessarily accurate in some senses, the "Sequences" are quite long and anyone who tries to argue is told to "read the Sequences". Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.

Many people here develop feelings of superiority over their families and/or friends, and are asked to imagine a future where they are alienated from family and friends due to their not having signed up for cryonics.

What you describe is a preposterous exaggeration, not "[t]otalitarianism and alienation of members from their families and/or friends."

There is volunteer effort at LW, and posts on LW are promoted to direct volunteer effort towards SIAI. Some of the effort of SIAI goes to paying Eliezer.

Any person who promotes a charity at which they work is pushing a cult, by this interpretation. Eliezer isn't "lining his own pockets"; if someone digs up the numbers, I'll donate $50 to a charity of your choice if it turns out that SIAI pays him a salary disproportionately greater (2 sigmas?) than the average for researchers at comparable non-profits.

So that's 2-6 of Harrison's checklist items for LessWrong, none of them particularly strong.

My filters would drop LessWrong in the "probably not a cult" category, based off of those two standards.

Comment author: Jack 18 November 2010 08:23:06PM 3 points [-]

I can't speak for Eliezer, but I suspect that if there were a person who was obviously more qualified than him to tackle some aspect of FAI, he would acknowledge it and welcome their contributions.

What exactly are Eliezer's qualifications supposed to be?

Comment author: jimrandomh 18 November 2010 08:38:20PM 2 points [-]

What exactly are Eliezer's qualifications supposed to be?

You mean, "What are Eliezer's qualifications?" Phrasing it that way makes it sound like a rhetorical attack rather than a question.

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

Comment author: Jack 18 November 2010 09:44:05PM *  7 points [-]

I'm definitely not trying to attack anyone (and you're right my comment could be read that way). But I'm also not just curious. I figured this was the answer. Lots of time spent thinking, writing and producing influential publications on FAI is about all the qualifications one can reasonably expect (producing a provable mathematical formalization of friendliness is the kind of thing no one is qualified to do before they do it and the AI field in general is relatively new and small). And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it. But the effort to address the friendliness issue seems way too focused on him and the people around him. You shouldn't expect any one person to solve a Hard problem. Insight isn't that predictable especially when no one in the field has solved comparable problems before. Maybe Einstein was the best bet to formulate a unified field theory but a) he never did and b) he had actually had comparable insights in the past. Part of the focus on Eliezer is just an institutional and financial thing, but he and a lot of people here seem to encourage this state of affairs.

No one looks at open problems in other fields this way.

Comment author: XiXiDu 19 November 2010 12:57:25PM 1 point [-]

...producing a provable mathematical formalization of friendliness [...] And Eliezer is obviously a really smart guy. He's probably even the most likely person to solve it.

I haven't seen any proof of his math skills that would justify this statement. By what evidence have you arrived at the conclusion that he can do it at all, or even approach it? The sequences and the SIAI publications certainly show that he was able to compile a bunch of existing ideas into a coherent framework of rationality, yet there is not much novelty to be found anywhere.

Comment author: multifoliaterose 18 November 2010 11:27:15PM 0 points [-]

Great comment.

Comment author: Vladimir_Nesov 18 November 2010 10:09:41PM *  5 points [-]

No one looks at open problems in other fields this way.

Yes, the situation isn't normal or good. But this isn't a balanced comparison, since we don't currently have a field; too few people understand the problem and have seriously thought about it. This is gradually changing, and I expect it will be visibly less of a problem in another 10 years.

Comment author: XiXiDu 18 November 2010 09:03:27PM *  0 points [-]

To answer the question itself: lots of time spent thinking and writing about it, and some influential publications on the subject.

How influential are his publications if they could not convince Ben Goertzel (an SIAI/AGI researcher), someone who has read Yudkowsky's publications and all of the LW sequences? You could argue that he and other people don't have the smarts to grasp Yudkowsky's arguments, but who does? Either Yudkowsky is so smart that some academics are unable to appreciate his work, or there is another problem. How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

The problem here is that telling someone that Yudkowsky spent a lot of time thinking and writing about something is not a qualification. Further it does not guarantee that he would acknowledge and welcome the contributions of others who disagree.

Comment author: WrongBot 18 November 2010 10:58:01PM 3 points [-]

Ben Goertzel believes in psychic phenomena (see here for details), so his failure to be convinced by Eliezer is not strong evidence against the correctness of Eliezer's stance.

For what it's worth, Eliezer has been influential/persuasive enough to get the SIAI created and funded despite having absolutely no academic qualifications. He's also responsible for coining "Seed AI".

Comment author: jimrandomh 18 November 2010 09:36:41PM *  5 points [-]

The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn't have to be free of people who disagree with it to be influential, and it doesn't even have to be correct.

How are we, we who are far below his level, supposed to evaluate if we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?

Level up first. I can't evaluate physics research, so I just accept that I can't tell which of it is correct; I don't try to figure it out from the politics of physicists arguing with each other, because that doesn't work.

Comment author: gwern 18 November 2010 06:29:41PM *  6 points [-]

Eliezer was compensated $88,610 in 2008 according to the Form 990 filed with the IRS and which I downloaded from GuideStar.

Wikipedia tells me that the median 2009 income in Redwood where Eliezer lives is $69,000.

(If you are curious, Tyler Emerson in Sunnyvale (median income 88.2k) makes 60k; Susan Fonseca-Klein, also in Redwood, was paid 37k. Total employee expenses are 200k, but the three salaries total 185k; I don't know what accounts for the difference. The form doesn't seem to say.)

Comment author: Sniffnoy 18 August 2010 12:04:10AM 3 points [-]

No. The sequences are not infallible, they have never been claimed as such, and intelligent disagreement is generally well received.

In particular, there seems to be a lot of disagreement about the metaethics sequence, and to a lesser extent about timeless physics.

Comment author: cousin_it 17 August 2010 07:55:41PM *  12 points [-]

That was... surprisingly surprising. Thank you.

For reasons like those you listed, and also out of some unverbalized frustration, in the last week I've been thinking pretty seriously whether I should leave LW and start hanging out somewhere else online. I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.

What other places on the Net are there for someone like me? Hacker News and Reddit look like dumbed-down versions of LW, so let's not talk about those. I solved a good bit of Project Euler once; the place is tremendously enjoyable but quite narrowly focused. The n-Category Cafe is, sadly, coming to a halt. Math Overflow looks wonderful, and this question by Scott Aaronson nearly convinced me to drop everything and move there permanently. The Polymath blog is another fascinating place that is so high above LW that I feel completely underqualified to join. Unfortunately, none of these are really conducive to posting new results, and moving into academia IRL is not something I'd like to do (I've been there, thanks).

Any other links? Any advice? And please, please, nobody take this comment as a denigration of LW or a foot-stomping threat. I love you all.

Comment author: David_Gerard 18 November 2010 09:20:45PM 1 point [-]

I love your posts, so having seen this comment I'm going to try to write up my nascent sequence on memetic colds, aka sucker shoots, just for you. (And everyone.)

Comment author: cousin_it 18 November 2010 11:24:41PM 1 point [-]

Thanks!

Comment author: DanielVarga 21 August 2010 08:22:55PM 1 point [-]

I'm not really interested in the Singularity, existential risks, cognitive biases, cryonics, un/Friendly AI, quantum physics or even decision theory. But I do like the quality of discussions here sometimes, and the mathematical interests of LW overlap a little with mine: people around here enjoy game theory and computability theory, though sadly not nearly as much as I do.

Same for me. My interests are more similar to your interests than to classic LW themes. There are probably many others here in the same situation. But I hope that the list of classic LW themes is not set in stone. I think people like us should try to broaden the spectrum of LW. If this attempt fails, please send me the address of the new place where you hang out online. :) But I am optimistic.

Comment author: [deleted] 21 August 2010 06:59:24PM 1 point [-]

"Leaving" LW is rather strong. Would that mean not posting? Not reading the posts, or the comments? Or just reading at a low enough frequency that you decouple your sense of identity from LW?

I've been trying to decide how best to pump new life into The Octagon section of the webcomic collective forum Koala Wallop. The Octagon started off when Dresden Codak was there, and became the place for intellectual discussion and debate. The density of math and computer theoretic enthusiasts is an order of magnitude lower than here or the other places you mentioned, and those who know such stuff well are LW lurkers or posters too. There was an overkill of politics on The Octagon, the levels of expertise on subjects are all over the spectrum, and it's been slowing down for a while, but I think a good push will revive it. The main thing is that it lives inside of a larger forum, which is a silly, fun sort of community. The subforum simply has a life of its own.

Not that I claim any ownership over it, but:

I'm going to try to more clearly brand it as "A friendly place to analytically discuss fantastic, strange or bizarre ideas."

Comment author: Kevin 19 August 2010 08:08:18AM 4 points [-]

Make a top level post about the kind of thing you want to talk about. It doesn't have to be an essay, it could just be a question ("Ask Less Wrong") or a suggested topic of conversation.

Comment author: John_Baez 19 August 2010 07:58:44AM *  15 points [-]

My new blog "Azimuth" may not be mathy enough for you, but if you like the n-Category Cafe, it's possible you may like this one too. It's more focused on technology, environmental issues, and the future. Someday soon you'll see an interview with Eliezer! And at some point we'll probably get into decision theory as applied to real-world problems. We haven't yet.

(I don't think the n-Category Cafe is "coming to a halt", just slowing down - my change in interests means I'm posting a lot less there, and Urs Schreiber is spending most of his time developing the nLab.)

Comment author: Vladimir_Nesov 19 August 2010 04:23:32PM 2 points [-]

Comment author: cousin_it 19 August 2010 08:38:29AM *  3 points [-]

Wow.

Hello.

I didn't expect that. It feels like summoning Gauss, or something.

Thank you a lot for twf!

Comment author: ciphergoth 19 August 2010 08:02:22AM 0 points [-]

The markup syntax here is a bit unusual and annoying - click the "Help" button at the bottom right of the edit window to get guidance on how to include hyperlinks. Unlike every other hyperlinking system, the text goes first and the URL second!

Comment author: Sniffnoy 18 August 2010 12:05:26AM 0 points [-]

Of course, MathOverflow isn't really a place for discussion...

Comment author: JoshuaZ 17 August 2010 08:05:19PM 0 points [-]

At least as far as math is concerned, people not in academia can publish papers. As for the Polymath blog, I'd actually estimate that you are at about the level of most Polymath contributors, although most of the impressive work there seems to be done by a small fraction of the people there.

Comment author: cousin_it 17 August 2010 08:14:36PM *  2 points [-]

About Polymath: thanks! (blushes)

I have no fetish for publishing papers or having an impressive CV or whatever. The important things, for me, are these: I want to have meaningful discussions about my areas of interest, and I want my results to be useful to somebody. I have received more than a fair share of "thank yous" here on LW for clearing up mathy stuff, but it feels like I could be more useful... somewhere.

Comment author: ciphergoth 17 August 2010 07:34:39PM 2 points [-]

Anyone who disagrees even after reading the Sequences is often considered too stupid to understand them.

I've not seen this happening - examples?

Comment author: JGWeissman 17 August 2010 07:43:08PM 7 points [-]

I think it would be more accurate to say that anyone who after reading the sequences still disagrees, but is unable to explain where they believe the sequences have gone wrong, is not worth arguing with.

With this qualification, it no longer seems like evidence of being a cult.

Comment author: JGWeissman 17 August 2010 07:34:06PM *  2 points [-]

This would be easier to parse if you quoted the individual criteria you are evaluating right before the evaluation, eg:

1.

A movement that separates itself from society, either geographically or socially;

It would be hard to make a case for this one; a tendency to congregate geographically (many people joining the SIAI visiting fellows, and having meetups) is hardly cutting oneself off from others; however, there is certainly some tendency to cut ourselves off socially - note for example the many instances of folks worrying they will not be able to find a sufficiently "rationalist" significant other.

Comment author: timtyler 17 August 2010 06:52:49PM *  -2 points [-]

That's the pejorative usage. There is also:

"Cult also commonly refers to highly devoted groups, as in:

  • Cult, a cohesive group of people devoted to beliefs or practices that the surrounding culture or society considers to be outside the mainstream

    • Cult of personality, a political leader and his following, voluntary or otherwise
    • Destructive cult, a group which exploits and destroys its members or even non-members
    • Suicide cult, a group which practices mass self-destruction, as occurred at Jonestown
    • Political cult, a political group which shows cult-like features"

http://en.wikipedia.org/wiki/Cults_of_personality

http://en.wikipedia.org/wiki/Cult_following

http://en.wikipedia.org/wiki/Cult_%28religious_practice%29

Comment author: Jordan 15 August 2010 06:33:45PM 6 points [-]

Damnit! My smug self assurance that I could postpone thinking about these issues seriously because I'm an SIAI donor .... GONE! How am I supposed to get any work done now?

Seriously though, I do wish the SIAI toned down its self importance and incredible claims, however true they are. I realize, of course, that dulling some claims to appear more credible is approaching a Dark Side type strategy, but... well, no buts. I'm just confused.

Comment author: multifoliaterose 16 August 2010 09:08:17AM *  0 points [-]

Edit: I misunderstood what Jordan was trying to say - the previous version of this comment is irrelevant to the present discussion and so I've deleted it.

Comment author: Jordan 16 August 2010 01:47:32PM *  2 points [-]

Deciding that the truth unconditionally deserves top priority seems to me to be an overly convenient easy way out of confronting the challenges demanded by instrumental rationality.

No one is claiming that honesty deserves top priority. I would lie to save someone's life, or to make a few million dollars, etc. In the context of SIAI though, or any organization, being manipulative can severely discredit you.

I believe that when one takes into account unintended consequences, when Eliezer makes his most incredible claims he lowers overall levels of epistemic rationality rather than raising overall levels of epistemic rationality.

If he were to go back on his incredible claims, or even only make more credible claims in the future, how would he reconcile the two when confronted? If someone new to Eliezer read his tame claims, then went back and read his older, more extreme claims, what would they think? To many people this would enforce the idea that SIAI is a cult, and that they are refining their image to be more attractive.

All of that said, I do understand where you're coming from intuitively, and I'm not convinced that scaling back some of the SIAI claims would ever have a negative effect. Certainly, though, a public policy conversation about it would cast a pretty manipulative shade over SIAI. Hell, even this conversation could cast a nasty shade to some onlookers (to many people trying to judge SIAI, the two of us might be a sufficiently close proxy, even though we have no direct connections).

Comment author: multifoliaterose 16 August 2010 02:44:08PM 2 points [-]

Okay, I misunderstood where you were coming from earlier, I thought you were making a general statement about the importance of stating one's beliefs. Sorry about that.

In response to your present comments, I would say that though the phenomenon that you have in mind may be a PR issue, I think it would be less of a PR issue than what's going on right now.

One thing that I would say is that I think that Eliezer would come across as much more credible simply by accompanying his weird sounding statements with disclaimers of the type "I know that what I'm saying probably sounds pretty 'out there' and understand if you don't believe me, but I've thought about this hard, and I think..." See my remark here.

Comment author: Jordan 16 August 2010 05:57:54PM *  5 points [-]

I mostly agree, although I'm still mulling it and think the issue is more complicated than it appears. One nitpick:

"I know that what I'm saying probably sounds pretty 'out there' and understand if you don't believe me, but I've thought about this hard, and I think..."

Personally, these kinds of qualifiers rarely do anything to allay my doubts, and can easily increase them. I prefer to see incredulity. For instance, when a scientist has an amazing result, rather than seeing that they fully believe it but recognize it's difficult for me to believe, I'd rather see them doubtful of their own conclusion but standing by it nonetheless because of the strength of the evidence.

"I know it's hard to believe, but it's likely an AI will kill us all in the future."

could become

"It's hard for me to come to terms with, but there doesn't seem to be any natural safeguards preventing an AI from doing serious damage."

Comment author: multifoliaterose 16 August 2010 06:43:01PM *  1 point [-]

Personally, these kinds of qualifiers rarely do anything to allay my doubts, and can easily increase them. I prefer to see incredulity. For instance, when a scientist has an amazing result, rather than seeing that they fully believe it but recognize it's difficult for me to believe, I'd rather see them doubtful of their own conclusion but standing by it nonetheless because of the strength of the evidence.

Sure, I totally agree with this - I prefer your formulation to my own. My point was just that there ought to be some disclaimer - the one that I suggested is a weak example.

Edit: Well, okay, actually I prefer:

"It took me a long time to come to terms with, but there don't seem to be any natural safeguards preventing an AI from doing serious damage."

If one has actually become convinced of a position, it sounds disingenuous to say that it's hard for one to come to terms with at present, but any apparently absurd position should at some point have been hard to come to terms with.

Adding such a qualifier is a good caution against appearing to be placing oneself above the listener. It carries the message "I know how you must be feeling about these things, I've been there too."

Comment author: Perplexed 15 August 2010 06:22:57PM 10 points [-]

I believe that more likely than not, the reason why Eliezer has missed the point that I raise in this post is social naivete on his part rather than willful self-deception.

I find it impossible to believe that the author of Harry Potter and the Methods of Rationality is oblivious to the first impression he creates. However, I can well believe that he imagines it to be a minor handicap which will fade in importance with continued exposure to his brilliance (as was the fictional case with HP). The unacknowledged problem in the non-fictional case, of course, is in maintaining that continued exposure.

I am personally currently skeptical that the singularity represents existential risk. But having watched Eliezer completely confuse and irritate Robert Wright, and having read half of the "debate" with Hanson, I am quite willing to hypothesize that the explanation of what the singularity is (and why we should be nervous about it) ought to come from anybody but Eliezer. He speaks and writes clearly on many subjects, but not that one.

Perhaps he would communicate more successfully on this topic if he tried a dialog format. But it would have to be one in which his constructed interlocutors are convincing opponents, rather than straw men.

Comment author: timtyler 15 August 2010 06:31:09PM *  -1 points [-]

It depends on exactly what you mean by "existential risk". Development will likely - IMO - create genetic and phenotypic takeovers in due course - as the bioverse becomes engineered. That will mean no more "wild" humans.

That is something which some people seem to wail and wave their hands about - talking about the end of the human race.

The end of earth-originating civilisation seems highly unlikely to me too - which is not to say that the small chance of it is not significant enough to discuss.

Eliezer's main case for that appears to be on http://lesswrong.com/lw/y3/value_is_fragile/

I think that document is incoherent.

Comment author: Emile 15 August 2010 04:52:22PM 0 points [-]

My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk.

Seems like a reasonable position to me.

An important part of existential risk reduction is making sure that people who are likely to work on AI, or fund it, have read the sequences, and are at least aware of how most possible minds are not minds we would want, and of how dangerous recursive self-improvement could be.

Comment author: JoshuaZ 15 August 2010 04:56:00PM 10 points [-]

My understanding of Michael Vassar's position is that the people who are dissuaded from thinking about existential risk because of remarks like Eliezer's are too irrational for it to be worthwhile for them to be thinking about existential risk.

Seems like a reasonable position to me.

Really? I don't understand this position at all. The vast majority of the planet isn't very rational and the people with lots of resources are often not rational. If one can get some of those people to direct their resources in the right directions then that's still a net win for preventing existential risk even if they aren't very rational. If say a hundred million dollars more gets directed to existential risk even if much of that goes to the less likely existential risks that's still an overall reduction in existential risk and a general increase to the sanity waterline.

Comment author: JoshuaZ 15 August 2010 04:28:32PM 7 points [-]

I disagree strongly with this post. In general, it is a bad idea to refrain from making claims that one believes are true simply because those claims will make people less likely to listen to other claims. That way lies the downward spiral of emotional manipulation, rhetoric, and other things not conducive to rational discourse.

Would one under this logic encourage the SIAI to make statements that are commonly accepted but wrong in order to make people more likely to listen to the SIAI? If not, what is the difference?

Comment author: Jonathan_Graehl 16 August 2010 09:37:42PM 0 points [-]

To the extent that people really want what you argue against, perhaps they should pursue an alternate organization than SIAI that promotes only the more palatable subset. I agree with you that somebody should be making all the claims, popular or not, that bear on x-risk.

Comment author: multifoliaterose 15 August 2010 05:26:31PM *  6 points [-]

I believe that there are contexts in which the right thing to do is to speak what one believes to be true even if doing so damages public relations.

These things need to be decided on a case-by-case basis. There's no royal road to instrumental rationality.

As I say here, in the present context, a very relevant issue in my mind is that Eliezer & co. have not substantiated their most controversial claims with detailed evidence.

It's clichéd to say so, but extraordinary claims require extraordinary evidence. A claim of the type "I'm the most important person alive" is statistically many orders of magnitude more likely to be made by a poser than by somebody for whom the claim is true. Casual observers are rational to believe that Eliezer is a poser. The halo effect problem is irrational, yes, but human irrationality must be acknowledged, it's not the sort of thing that goes away if you pretend that it's not there.

I don't believe that Eliezer's outlandish and unjustified claims contribute to rational discourse. I believe that Eliezer's outlandish and unjustified claims lower the sanity waterline.

To summarize, I believe that in this particular case the costs that you allude to are outweighed by the benefits.

Comment author: timtyler 15 August 2010 06:03:07PM *  3 points [-]

Come on - he never actually claimed that.

Besides, many people have inflated views of their own importance. Humans are built that way. For one thing, it helps them get hired, if they claim that they can do the job. It is sometimes funny - but surely not a big deal.

Comment author: timtyler 15 August 2010 04:32:54PM 2 points [-]

It seems as though the latter strategy could backfire - if the false statements were exposed. Keeping your mouth shut about controversial issues seems safer.

Comment author: JRMayne 15 August 2010 04:25:58PM 6 points [-]

Solid, bold post.

Eliezer's comments on his personal importance to humanity remind me of the Total Perspective Device from Hitchhiker's. Everyone who gets perspective from the TPD goes mad; Zaphod Beeblebrox goes in and finds out he's the most important person in human history.

Eliezer's saying he's Zaphod Beeblebrox. Maybe he is, but I'm betting heavily against that for the reasons outlined in the post. I expect AI progress of all sorts to come from people who are able to dedicate long, high-productivity hours to the cause, and who don't believe that they and only they can accomplish the task.

I also don't care if the statements are social naivete or not; I think the statements that indicate that he is the most important person in human history - and that seems to me to be what he's saying - are so seriously mistaken, and made with such a high confidence level, as to massively reduce my estimated likelihood that SIAI is going to be productive at all.

And that's a good thing. Throwing money into a seriously suboptimal project is a bad idea. SIAI may be good at getting out the word of existential risk (and I do think existential risk is serious, under-discussed business), but the indicators are that it's not going to solve it. I won't give to SIAI if Eliezer stops saying these things, because it appears he'll still be thinking those things.

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I could be wrong.

--JRM

Comment author: Eliezer_Yudkowsky 18 August 2010 03:03:07PM 0 points [-]

And saddened once again at how people seem unable to distinguish "multi claims that something Eliezer said could be construed as claim X" and "Eliezer claimed X!"

Please note that for the next time you're worried about damaging an important cause's PR, multi.

Comment author: JRMayne 18 August 2010 04:52:19PM 11 points [-]

Um, I wasn't basing my conclusion on multifoliaterose's statements. I had made the Zaphod Beeblebrox analogy due to the statements you personally have made. I had considered doing an open thread comment on this very thing.

Which of these statements do you reject?:

  1. FAI is the most important project on earth, right now, and probably ever.

  2. FAI may be the difference between doom and flourishing for a multiverse of [very large number] of sentient beings. No project in human history is of greater importance.

  3. You are the most likely person - and SIAI the most likely agency, because of you - to accomplish saving the multiverse.

Number 4 is unnecessary for your being the most important person on earth, but:

  4. People who disagree with you are either stupid or ignorant. If only they had read the sequences, then they would agree with you. Unless they were stupid.

And then you've blamed multi for this. He is trying to help an important cause; both multifoliaterose and XiXiDu are, in my opinion, acting in a manner they believe will help the existential risk cause.

And your final statement, that multifoliaterose is damaging an important cause's PR, appears entirely deaf to multi's post. He's trying to help the cause - he and XiXiDu are orders of magnitude more sympathetic to the cause of non-war existential risk than just about anyone. You appear to have conflated "Eliezer Yudkowsky" with "AI existential risk."

Again.

I might be wrong about my interpretation - but I don't think I am. If I am wrong, other very smart people who want to view you favorably have done similar things. Maybe the flaw isn't in the collective ignorance and stupidity in other people. Just a thought.

--JRM

Comment author: JGWeissman 18 August 2010 06:39:40PM 7 points [-]

Which of those statements do you reject?

Comment author: multifoliaterose 18 August 2010 04:08:02PM 9 points [-]

My understanding of JRMayne's remark is that he himself construes your statements in the way that I mentioned in my post.

If JRMayne has misunderstood you, you can effectively deal with the situation by making a public statement about what you meant to convey.

Note that you have not made a disclaimer which rules out the possibility that you claim that you're the most important person in human history. I encourage you to make such a disclaimer if JRMayne has misunderstood you.

Comment author: XiXiDu 18 August 2010 03:23:21PM *  8 points [-]

I have to disagree based on the following evidence:

Q: The only two legitimate occupations for an intelligent person in our current world? (Answer)

and

"At present I do not know of any other person who could do that." (Reference)

This makes it reasonable to state that you think you might be the most important person in the world.

Comment author: Eliezer_Yudkowsky 18 August 2010 03:26:54PM 0 points [-]

I love that "makes it reasonable" part. Especially in a discussion on what you shouldn't say in public.

Now we're to avoid stating any premises from which any absurd conclusions seem reasonable to infer?

This would be a reductio of the original post if the average audience member consistently applied this sort of reasoning; but of course it is motivated on XiXiDu's part, not necessarily something the average audience member would do.

Note that saying "But you must therefore argue X..." where the said person has not actually uttered X, but it would be a soldier against them if they did say X, is a sign of political argument gone wrong.

Comment author: JRMayne 18 August 2010 04:59:12PM 8 points [-]

Gosh, I find this all quite cryptic.

Suppose I, as Lord Chief Prosecutor of the Heathens say:

  1. All heathens should be jailed.

  2. Mentally handicapped Joe is a heathen; he barely understands that there are people, much less the One True God.

One of my opponents says I want Joe jailed. I have not actually uttered that I want Joe jailed, and it would be a soldier against me if I had, because that's an unpopular position. This is a mark of a political argument gone wrong?

I'm trying to find another logical conclusion to XiXiDu's cited statements (or a raft of others in the same vein.) Is there one I don't see? Is it just that you're probably the most important entity in history, but, you know, maybe not? Is it that there's only a 5% chance that you're the most important person in human history?

I have not argued that you should not say these things, BTW. I have argued that you probably should not think them, because they are very unlikely to be true.

Comment author: JGWeissman 18 August 2010 06:45:20PM 2 points [-]

In this case I would ask you if you really want Joe jailed, or if when you said that "All heathens should be jailed", you were using the word "heathen" in a stronger sense of explicitly rejecting the "One True God" than the weak sense that Joe is a "heathen" for not understanding the concept.

And if you answer that you meant only that strong heathens should be jailed, I would still condemn you for that policy.

Comment author: XiXiDu 18 August 2010 03:32:24PM 3 points [-]

I'm too dumb to grasp what you just said in its full complexity. But I believe you are indeed one of the most important people in the world. Further, (1) I don't see what is wrong with that (2) It is positive for public relations as it attracts people to donate money (Evidence: Jesus) (3) It won't hurt academic relations as you are always able to claim that you were misunderstood.

Comment author: nhamann 15 August 2010 05:33:46PM 6 points [-]

I expect AI progress to come incrementally, BTW - I don't expect the Foomination. And I expect it to come from Google or someone similar; a large group of really smart, really hard-working people.

I'd like to point out that it's not either/or: it's possible (likely?) that it will take decades of hard work and incremental progress by lots of really smart people to advance AI science to a point where an AI could FOOM.

Comment author: CarlShulman 15 August 2010 06:05:22PM 3 points [-]

I would say likely, conditional on eventual FOOM. The alternative means both a concentration of probability mass in the next ten years and that the relevant theory and tools are almost wholly complete.

Comment author: orthonormal 15 August 2010 03:21:51PM 13 points [-]

whpearson mentioned this already, but if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute.

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction, and indeed they're focusing on persuading more people of this particular claim. As you say, by focusing on something specific, radical and absurd, they run more of a risk of being dismissed entirely than does FHI, but their strategy is still correct given the premise.

Comment author: homunq 31 August 2010 04:11:20PM *  3 points [-]

I do "think that the pursuit of Friendly AI [and the avoidance of unfriendly AI] is by far the most important component of existential risk reduction". I also think that SIAI is not addressing the most important problem in that regard. I suspect there's a lot of people who would agree, for various reasons.

In my case, the logic is that I think:

1) That corporations, though not truly intelligent, are already superhuman and unFriendly.

2) That coordinated action (that is, strategic politics, in well-chosen solidarity with others with whom I have important differences) has the potential to reduce their power and/or increase their Friendliness

3) That this would, in turn, reduce the risk of them developing a first-mover unFriendly AI ...

3a) ... while also increasing the status of your ideas in a coalition which may be able to develop a Friendly one.

I recognize that points 2 and 3a are partially tribal and/or hope-seeking beliefs of mine, but think 1 and 3 are well-founded rationally.

Anyway, this is only one possible reason for parting ways with the SIAI and the FHI, without in any sense discounting the risks they are made to confront.

Comment author: orthonormal 31 August 2010 09:07:57PM 1 point [-]

From your analysis, it seems that FHI would be very well aligned with your goals: it's a high-profile, academic rather than corporate, entity which can publicize existential risks (and takes corporate creation of such seriously, IIRC).

Would this not be desirable, or is there any organization within the broader anticorporate movement you speak of that would even think to do the same with comparable competency?

Comment author: homunq 01 September 2010 02:31:41PM *  0 points [-]

I believe that explicitly political movements, not academic ones, are the only ones which are other-optimizing enough to fight the mal-optimization of corporations. And I think that at our current level of corporate power versus AI-relevant technological understanding, my energy is best spent fighting the former rather than advancing the latter (and I majored in cognitive science and work as a programmer, so I hold that same conclusion for most people.)

I realize that these beliefs are partly tribal (something which allows me to get along with my wife and friends) and partly hope-seeking (something which allows me to get up in the morning). I think that these are valid reasons to give a belief the benefit of the doubt. I would not, however, use these excuses to justify a belief with no rational basis, or to avoid considering an argument for the lack of rational basis. Anyway, even if one tried to rid oneself of tribal and hope-seeking biases, beyond the caveats in the previous sentence, I don't think it would help one be appreciably more rational.

Comment author: timtyler 31 August 2010 09:07:56PM 0 points [-]

Re: coordinated action to tame corporations

One thing we need is corporation reputation systems. We have product reviews, and so forth - but the whole area is poorly organised.

Comment author: timtyler 31 August 2010 09:05:11PM *  0 points [-]

Why are corporations "not truly intelligent"? They contain humans, surely. Would you say that humans are "not truly intelligent" either?

Comment author: homunq 01 September 2010 02:21:16PM 1 point [-]

They contain humans. However, while corporations themselves are psychopathic, most are not controlled and staffed by psychopaths. This gives corporations (thank Darwin) cognitive biases which systematically reduce their intelligence when pursuing obviously unFriendly goals.

In the end, it depends on your definition of intelligence. The intelligence of a corporation in choosing strategies to fit its goals is sometimes of the level of natural selection (weak), sometimes human intelligence (true), and sometimes effective crowd intelligence (mildly superhuman). I'd guess that on the whole, they average somewhat below human intelligence (but much higher power) when pursuing explicitly unFriendly subgoals; and somewhat above human intelligence when pursuing subgoals that happen to be neutral or Friendly. But that does not necessarily mean they are on balance Friendly, because their root goals are not.

Comment author: timtyler 01 September 2010 03:36:45PM -2 points [-]

The basic idea with corporations is that they are kept in check by an even more powerful organisation: the government. If any corporation gets too big, the Monopolies and Mergers Commission intervenes and splits it up. As far as I know, no corporation has ever overthrown its "parent" government.

Comment author: wnoise 01 September 2010 03:43:07PM 0 points [-]

Other governments however...

Comment author: pjeby 31 August 2010 05:27:30PM 1 point [-]

In my case, the reason is that I think that corporations, though not truly intelligent,

They get to use borrowed intelligence from their human symbiotes, though. ;-) (Or would they be symbionts? Hm...)

Comment author: timtyler 18 August 2010 03:09:27PM *  1 point [-]

You approve of their plan to build a machine to take over the world and impose its own preferences on everyone? You talk about "optimality" - how confident are you that that is really going to help? What reasoning supports such a claim?

Comment author: Eliezer_Yudkowsky 18 August 2010 02:46:17PM 6 points [-]

if you think that the most important thing we can be doing right now is publicizing an academically respectable account of existential risk, then you should be funding the Future of Humanity Institute. Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

Agreed. (Modulo a caveat about marginal ROI eventually balancing if FHI got large enough or SIAI got small enough.)

Comment author: wedrifid 15 August 2010 05:22:43PM 13 points [-]

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating risk of humans going extinct. I would trade increased chance of extinction for a commensurate change in the probable outcomes if we survive. Frankly I would consider it insane not to be willing make such a trade.

Comment author: ciphergoth 16 August 2010 06:20:36PM 1 point [-]

I disagree. If we can avoid being wiped out, or otherwise have our potential permanently limited, our eventual outcome is very likely to be good beyond our potential to imagine. I really think the "maxipok" term of our efforts toward the greater good can't fail to absolutely dominate all other terms.

Comment author: wedrifid 16 August 2010 06:47:27PM 5 points [-]

I disagree. If we can avoid being wiped out, or otherwise have our potential permanently limited, our eventual outcome is very likely to be good beyond our potential to imagine.

That sounds very optimistic. I just don't see any reason for us to expect the future to be so bright if human genetic, cultural and technological evolution goes on under the usual influence of competition. Unless we do something rather drastic (e.g. FAI or some other kind of positive singleton) in the short term, it seems inevitable that we end up in Malthusian hell.

Most of what I consider 'good' is, for the purposes of competition, a complete waste of time.

Comment author: timtyler 16 August 2010 06:28:59PM *  -1 points [-]

Lack of interest in existential risk reduction makes perfect sense from an evolutionary perspective. As I have previously explained:

"Organisms can be expected to concentrate on producing offspring - not indulging paranoid fantasies about their whole species being wiped out!"

Most people are far more concerned about other things - for perfectly sensible and comprehensible reasons.

Comment author: orthonormal 17 August 2010 11:08:03PM 2 points [-]

This is a bizarre digression from the parent comment. You're already having this exact conversation elsewhere in the thread!

Comment author: timtyler 17 August 2010 11:35:16PM *  0 points [-]

It follows from - "This seems to assume that existential risk reduction is the only thing people care about." - and - "I disagree." - People do care about other things. They mostly care about other things.

Comment author: Jonathan_Graehl 16 August 2010 09:44:18PM 0 points [-]

Your last sentence seems true.

I think I also buy the evolved-intelligence-should-be-myopic argument, even though we have only one data point, and don't need the evolutionary argument to lend support to what direct observation already shows in our case.

So, I can't see why this is downvoted except that it's somewhat of a tangent.

Comment author: timtyler 17 August 2010 06:10:18AM *  0 points [-]

Well, I wasn't really claiming that "evolved-intelligence-should-be-myopic".

Evolved-intelligence is what we have, and it can predict the future - at least a little:

Even if the "paranoid fantasies" have considerable substance, it would still usually be better (for your genes) to concentrate on producing offspring. Averting disaster is a "tragedy of the commons" situation. Free riding - letting someone else do that - may well reap the benefits without paying the costs.

Comment author: komponisto 15 August 2010 10:34:50PM *  1 point [-]

Upvoted.

We've had agreements and disagreements here. This is one of the agreements.

Comment author: orthonormal 15 August 2010 05:31:51PM 4 points [-]

I meant "optimal within the category of X-risk reduction", and I see your point.

Comment author: timtyler 15 August 2010 05:29:09PM *  1 point [-]

It seems pretty clear that very few care much about existential risk reduction.

That makes perfect sense from an evolutionary perspective. Organisms can be expected to concentrate on producing offspring - not indulging paranoid fantasies about their whole species being wiped out!

The bigger puzzle is why anyone seems to care about it at all. The most obvious answer is signalling. For example, if you care for the fate of everyone in the whole world, that SHOWS YOU CARE - a lot! Also, the END OF THE WORLD acts as a superstimulus to people's warning systems. So - they rush and warn their friends - and that gives them warm fuzzy feelings. They get credit for raising the alarm about the TERRIBLE DANGER - and so on.

Disaster movies - like 2012 - trade on people's fears in this area - stimulating and fuelling their paranoia further - by providing them with fake memories of it happening. One can't help wondering whether FEAR OF THE END is a healthy phenomenon - overall - and if not, whether it is really sensible to stimulate those fears.

Does the average human - on being convinced the world is about to end - behave better - or worse? Do they try and hold back the end - or do they rape and pillage? If their behaviour is likely to be worse then responsible adults should think very carefully before promoting the idea that THE END IS NIGH on the basis of sketchy evidence.

Comment author: Eneasz 24 August 2010 07:27:33PM 2 points [-]

Does the average human - on being convinced the world is about to end - behave better - or worse? Do they try and hold back the end - or do they rape and pillage?

Given the current level of technology the end IS nigh, the world WILL end, for every person individually, in less than a century. On average it'll happen around the 77-year mark for males in the US. This has been the case through all of history (for most of it at a much younger age) and yet people generally do not rape and pillage. Nor are they more likely to do so as the end of their world approaches.

Thus, I do not think there is much reason for concern.

Comment author: ata 24 August 2010 08:20:32PM *  6 points [-]

People care (to varying degrees) about how the world will be after they die. People even care about their own post-mortem reputations. I think it's reasonable to ask whether people will behave differently if they anticipate that the world will die along with them.

Comment author: timtyler 24 August 2010 07:58:08PM *  0 points [-]

The elderly are not known for their looting and rabble-rousing tendencies - partly due to frailty and sickness.

Those who believe the world is going to end do sometimes cause problems - e.g. see The People's Temple and The Movement for the Restoration of the Ten Commandments of God.

Comment author: Jonathan_Graehl 16 August 2010 09:46:42PM 2 points [-]

This seems correct. Do people object on style? Is it a repost? Off topic?

Comment author: cata 16 August 2010 10:12:59PM *  2 points [-]

I think it's bad form to accuse other people of being insincere without clearly defending your remarks. By claiming that the only reason anyone cares about existential risk is signalling, Tim is saying that a lot of people who appear very serious about X-risk reduction are either lying or fooling themselves. I know many altruists who have acted in a way consistent with being genuinely concerned about the future, and I don't see why I should take Tim's word over theirs. It certainly isn't the "most obvious answer."

I also don't like this claim that people are likely to behave worse when they think they're in impending danger, because again, I don't agree that it's intuitive, and no evidence is provided. It also isn't sufficient; maybe some risks are important enough that they ought to be addressed even if addressing them has bad cultural side effects. I know that the SIAI people, at least, would definitely put uFAI in this category without a second thought.

Comment author: timtyler 17 August 2010 05:49:06AM *  1 point [-]

I didn't say that "the only reason anyone cares about existential risk is signalling". I was mostly trying to offer an explanation for the observed fact that relatively few give the matter much thought.

I was raising the issue of whether typical humans behave better or worse - if they become convinced that THE END IS NIGH. I don't know the answer to that. I don't know of much evidence on the topic. Is there any evidence that proclaiming that the END OF THE WORLD is at hand has a net positive effect? If not, then why are some so keen to do it - if not for signalling and marketing purposes?

Comment author: Perplexed 16 August 2010 10:37:16PM 2 points [-]

I thought people here were compatibilists. Saying that someone does something of their own free will is compatible with saying that their actions are determined. Similarly, saying that they are genuinely concerned is compatible with saying that their expressions of concern arise (causally) from "signaling".

Comment author: timtyler 17 August 2010 06:00:06AM *  0 points [-]

The common complaint here is that the signalled motive is usually wonderful and altruistic - in this case SAVING THE WORLD for everyone. Whereas the signalling motive is usually selfish (SHOWING YOU CARE, being a hero, selflessly warning others of the danger - etc).

So - if the signalling theory is accepted - people are less likely to believe there is altruism underlying the signal any more (because there isn't any). It will seem fake - the mere appearance of altruism.

The signalling theory is unlikely to appeal to those sending the signals. It wakes up their audience, and reduces the impact of the signal.

Comment author: wedrifid 17 August 2010 05:25:38AM 2 points [-]

That's what Tim could have said. His post may have got a better reception if he left off:

It seems pretty clear that very few care much about existential risk reduction. The bigger puzzle is why anyone seems to care about it at all.

I mean, I most certainly do care and the reasons are obvious. p(wedrifid survives | no human survives) = 0

Comment author: timtyler 17 August 2010 05:52:32AM *  0 points [-]

What I mean is things like:

"Citation Index suggests that virtually nothing has been written about the cost effectiveness of reducing human extinction risks," and Nick Bostrom and Anders Sandberg noted, in a personal communication, that there are orders of magnitude more papers on coleoptera—the study of beetles—than "human extinction." Anyone can confirm this for themselves with a Google Scholar search: coleoptera gets 245,000 hits, and "human extinction" gets fewer than 1,200.

I am not saying that nobody cares. The issue was raised because you said:

This seems to assume that existential risk reduction is the only thing people care about. I doubt I am the only person who wants more from the universe than eliminating risk of humans going extinct.

...and someone disagreed!!!

People do care about other things. They mostly care about other things. And the reason for that is pretty obvious - if you think about it.

Comment author: SilasBarta 16 August 2010 10:20:41PM *  4 points [-]

Hm, I didn't get that out of timtyler's post (just voted up). He didn't seem to be saying, "Each and every person interested in this topic is doing it to signal status", but rather, "Hey, our minds aren't wired up to care about this stuff unless maybe it signals" -- which doesn't seem all that objectionable.

Comment author: whpearson 16 August 2010 10:39:32PM 3 points [-]

DNDV (did not down vote). Sure, signalling has a lot to do with it, but the type of signalling he suggests doesn't ring true with what I have seen of most people's behaviour. We do not seem to be great proselytisers most of the time.

The ancient circuits that x-risk triggers in me are those of feeling important, of being a player in the tribe's future, with the benefits that that entails. Of course I won't get the women if I eventually help save humanity, but my circuits that trigger on "important issues" don't seem to know that. In short, by trying to deal with important issues I am trying to signal a raised status.

Comment author: Jonathan_Graehl 16 August 2010 10:55:35PM 0 points [-]

Ok, so people don't like the implication of either the evo-psych argument, or the signaling argument. They both seem plausible, if speculative.

Comment author: Vladimir_Nesov 15 August 2010 03:40:32PM 3 points [-]

Funding SIAI is optimal only if you think that the pursuit of Friendly AI is by far the most important component of existential risk reduction

But who does the evaluation? It seems that it's better to let specialists think about whether a given cause is important, and they need funding just to get that running. This argues for ensuring minimum funding of organizations that research important uncertainties, even the ones where your intuitive judgment says they will probably lead nowhere. Just as most people shouldn't themselves research FAI, and instead fund its research, similarly most people shouldn't research the feasibility of research of FAI, and instead fund the research of that feasibility.

Comment author: orthonormal 15 August 2010 04:01:03PM 1 point [-]

I think you claim too much. If I decided I couldn't follow the relevant arguments, and wanted to trust a group to research the important uncertainties of existential risk, I'd trust FHI. (They could always decide to fund or partner with SIAI themselves if its optimality became clear.)

Comment author: whpearson 15 August 2010 08:32:50PM 5 points [-]

My only worry about funding FHI exclusively is that they are primarily philosophical and academic. I'd worry that the default thing they would do with more money would be to produce more philosophical papers. Rather than say doing/funding biological research or programming, if that was what was needed.

But as the incentive structures for x-risk reduction organisations go, those of an academic philosophy department aren't too bad at this stage.