If I were to ask the question "What threat poses the greatest risk to society/humanity?" to several communities, I would expect the answers to follow a predictable trend:

If I asked the question on an HBD blog, I'd probably get one of the answers: demographic disaster, dysgenics, or immigration.

If I asked the question to a bunch of environmentalists they'd probably say global warming or pollution.

If I asked the question on a leftist blog I might get the answer: growing inequality/exploitation of workers.

If I asked the question to Catholic bishops they might say abortion/sexual immorality.

And if I were to ask the question on LessWrong (which is heavily populated by computer scientists and programmers), many would respond with unfriendly AI.

One of these groups might be right; I don't know. However, I would treat all of their claims with caution.

Edit: This may not be a bad thing from an instrumental rationality perspective. If you think that the problem you're working on is really important, then you're more likely to put a good effort into solving it.


Environmentalists care about direct and specific threats. The rest of them seem to care about possible loss of something we call "Friendliness" here.

HBD people believe that different cultures encourage different behaviors, which creates different selection pressures, and the differences gradually get encoded in genes. This means that people from other cultures are, to us, like 99% humans + 1% Pebblesorters. We should optimize for our values. -- In addition to this, there are also direct specific threats, e.g. violence.

Leftists point out that the Invisible Hand of Market is not Friendly, therefore it may optimize for things we consider horrible; just like evolution, and for the same reason. This part seems obvious to me; I only object connotationally that institutions in general are not Friendly, including the institutions created by leftists.

Catholics have a model of the world where Friendliness comes from God, and humans themselves are not Friendly. We are broken; we don't automatically optimize for things we actually like to have. Azathoth is in us. Ignoring God's commands -- and sexual behavior is where doing so is most tempting -- means giving up the Friendliness and letting Azathoth optimize for its own purposes.

LessWrongians point out that an AI could be a powerful threat to Friendliness, that it is likely to be the threat, and that it may be too powerful to be stopped once this becomes obvious.

To sum up the differences: Catholics pretend they have an external source of Friendliness and try to preserve the connection with it. HBD people worry about external sources of Unfriendliness. Leftists worry about existing forces that slowly but persistently optimize away from Friendliness. LessWrongians worry about a new very fast and powerful source we may create.

My opinion is that Friendliness is fragile and in danger. Azathoth and the hypothetical Unfriendly AI will optimize away from it. There is no equivalent power optimizing towards it; we only have inertia on our side. Unless we develop the Friendly AI or become greatly more rational ourselves (unless either MIRI or CFAR reaches its goal), we are doomed in the long term. But of course some other disaster can destroy us even faster; we should not forget about immediate threats.

I think all these groups except for Catholics have a valid point (and even Catholics have some useful heuristics; the problem is that their other heuristics are harmful, and their model is hopelessly wrong), and I have no idea how to evaluate which one deserves the most attention. My reason to choose LW is that it is the smallest group, so its concerns are the most likely to be ignored. Also, in the long term only LW has a satisfying solution (although this itself does not prove that someone else shouldn't get priority in the short term); the other groups are merely slowing down the inevitable.

At least some of the disagreement shrinks when you clarify "greatest risk":

  • Is it a problem for rich first-worlders (dysgenics, maybe), poor first-worlders (immigration), or poor third-worlders (global warming, pollution)?

  • Is it a likely but "mild" problem (dysgenics, sexual immorality, growing inequality), or an unlikely/uncertain but catastrophic problem (AI, grey goo)?

There's probably still a lot of actual disagreement about facts left (for example, how likely AI is, or whether God punishes/rewards us), but I think the bulk of the "disagreement" boils down to "bad for different people" and "bad in different ways".

Yes, it is indeed a common pattern.

People are likely to get agitated about the stuff they are actually working with, especially if it is somehow entangled with their state of knowledge, personal interests, and employment. The belief that they are the ones to save the world really helps them find the motivation to continue their pursuits (and helps fund-raising efforts, I would reckon). It is also a good excuse to push your values on others (Communism will save the world from our greed).

On the other hand, I don't think it is a bad thing. That way, we have many small groups, each working on their small subset of the problem space while also trying to save the world from the disaster they perceive to be the greatest danger. As long as the response is proportional to the actual risk, of course.

But I still agree with you that it is only prudent to treat any such claims with caution, so that we don't fall into the trap of using data taken from a small group of people working at an Asteroid Defense Foundation as our only and true estimates of the likelihood and effect of an asteroid impact, without verifying their claims against an unbiased source. It is certainly good to have someone looking at the sky from time to time, just in case their claims prove true, though.

That way, we have many small groups, each working on their small subset of the problem space while also trying to save the world from the disaster they perceive to be the greatest danger. As long as the response is proportional to the actual risk, of course.

Good point, I'll include that.

Actually, I don't think you're right. I don't think there's much consensus on the issue within the community, so there's not much of a conclusion to draw:

The answers to last year's survey question "Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?" were as follows:

  • Pandemic (bioengineered): 272, 23%
  • Environmental collapse: 171, 14.5%
  • Unfriendly AI: 160, 13.5%
  • Nuclear war: 155, 13.1%
  • Economic/Political collapse: 137, 11.6%
  • Pandemic (natural): 99, 8.4%
  • Nanotech: 49, 4.1%
  • Asteroid: 43, 3.6%

I deliberately didn't say that the majority of LessWrongers would give that answer, partly because LessWrong is only about 1/3 computer scientists/programmers. Also, 14.5% is very high compared to most communities.

I didn't explicitly state an argument, but if I were to, it would be that communities with an interest in topic X are the most likely to think that topic X is the most important thing ever. So it isn't necessary for most computer scientists to think that unfriendly AI is the biggest problem for my argument to work, just that computer scientists are the most likely to think that it is the biggest problem.

I deliberately didn't say that the majority of LessWrongers would give that answer, partly because LessWrong is only about 1/3 computer scientists/programmers.

Fortunately, we have the census, and the census does ask for profession. Among those with the professions Computers (AI), Computers (practical: IT, programming, etc.), and Computers (other academic, computer science), 14.4% think that unfriendly AI is the biggest threat.

LessWrong isn't a community that focuses much on bioengineered pandemics. Yet among those computer programmers, 23.7% still think it's the greatest threat.
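(For what it's worth, here is a minimal sketch of how such subgroup percentages might be computed from the survey data. The file name and column names are hypothetical placeholders, not the actual census schema.)

```python
# Hypothetical sketch: among computer-profession respondents, compute the
# share naming each disaster as the most likely one.
# "lesswrong_census.csv", "Profession", and "XRiskType" are assumed names.
import pandas as pd

df = pd.read_csv("lesswrong_census.csv")  # hypothetical export of the census

computer_professions = {
    "Computers (AI)",
    "Computers (practical: IT, programming, etc.)",
    "Computers (other academic, computer science)",
}

# Keep only respondents in the computer-related professions.
subset = df[df["Profession"].isin(computer_professions)]

# Percentage of that subgroup choosing each disaster category.
shares = subset["XRiskType"].value_counts(normalize=True) * 100
print(shares.round(1))
```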

We are a community that actually cares about data.

I think this is a bad thing, and that we need to try to avoid it.

I also think that we are biased not just by wanting the stuff we are good at to be the most important stuff, but also by wanting to agree with the community.

Here is a small step we can take in the right direction.

If you are one of the 15 people who downvoted the following post, stop for a minute and ask yourself why. I am not saying either way whether or not the post deserved to be downvoted. However, I would not be surprised if several people downvoted it for a bad reason.

http://lesswrong.com/r/discussion/lw/ji0/the_first_ai_probably_wont_be_very_smart/

And if you come up with a reason why that isn't already in the comments, please add it. It's a lot less lonely being in the minority if the hate you receive isn't just a faceless wall of downvotes.


How much of the bias is priming? What if you provided a list of likely disasters? If you raise the issue, you can get most people to admit that a pandemic scenario is more "serious".

"Greatest risk to society/humanity" might also be interpreted different ways. One person may think of existential risk, while another might think about risk to cultural values.

It seems these groups exist, in large part, as an effect of their beliefs about the biggest risks. You're not afraid of global warming and pollution because you are an environmentalist; rather, you are an environmentalist because of your fear of global warming and pollution.

That said, I'm not sure what your point is. I'm sure there are many in each group who haven't done the math themselves and are just following like sheep. But it is the same regardless of what we are talking about...it certainly isn't specific to evaluating threats to humanity. Just groupthink and half a dozen other biases at play.

The other thing is that groups may not be focusing on the largest existential threat at any given time. Instead they might be spending time on a particular issue that has come to the forefront.

Conservative Christianity, for instance, is dealing with homosexuality right now. But that is really just a pawn in a much larger eschatological endgame. Homosexuality isn't really that big a threat to Christians. Hell is a bigger threat.

You're not afraid of global warming and pollution because you are an environmentalist; rather, you are an environmentalist because of your fear of global warming and pollution.

I actually think the former is more true than the latter. You first become an environmentalist (through e.g. social pressure and status-seeking) and then filter your information input to become fearful of global warming and pollution.

How do you define 'environmentalist'?


Evidence?

No evidence, just some anecdata as I know a couple of people for whom it happened in this order.

There is no sharp boundary, of course, and there's a bit of a feedback loop there, too. It's kinda like asking whether someone feared hell and because of that became a Christian, or whether she became a Christian and that made her fear hell...


On that particular example, it seems to me that anyone who fears hell is (at least) most of the way to Christianity already. Assuming it's the Christian hell they fear, of course, but then it's hard to see how fear of some other religion's hell would incline someone to become a Christian.

If you asked the people in question how their opinions evolved, do you think they would give an account that matches yours?

anyone who fears hell is (at least) most of the way to Christianity already.

A lot of religions have much unpleasantness in the afterlife as a possibility :-/


Which is why I added "Assuming it's the Christian hell they fear", etc.

What percentage of the community who considers UFAI a major risk is only part of that community because of social pressure and status-seeking?

No idea. However, I am unaware of any social pressure to join LW. On the other hand, there is a lot of social pressure to, let's say, display environmentalist sensibilities.

Hm. Perhaps I asked poorly.

Would you say the social pressure as a motivation to agree with the severity of AI risks becomes significant once one voluntarily joins a community like LW?

If there are, say, 1000 active members, 500 of which believe that UFAI is the most important threat to deal with, how many of those 500 people have authentically arrived at that conclusion by doing the math? And how many are simply playing along because of social pressure, status-seeking, and a sort of Pascal's Wager that benefits them nothing for dissenting?

Would you say the social pressure as a motivation to agree with the severity of AI risks becomes significant once one voluntarily joins a community like LW?

Yes, provided you want to integrate into the community (and not e.g. play the role of a contrarian).

how many of those 500 people have authentically arrived at that conclusion by doing the math?

I don't know but I would expect very few. Also, you can't arrive at this conclusion by doing math because at this point the likelihood of UFAI is a matter of your priors, not available data.

In characterizing this trend, it seems as though you are assuming that membership in these various communities is mutually exclusive. However, this doesn't have to be the case. For example, a person may be both a Catholic and a leftist. Thus, a good follow-up question might be: to what extent do leftist politics and Catholicism have an impact on a person's evaluation of risk? For example, one could compare Pope Benedict with Pope Francis, who are both Catholic, and potentially conclude that income inequality/exploitation influences Francis' conceptualization of risk to society to a greater degree than abortion or sexual immorality.

Definitely brings déformation professionnelle to mind for me; I'm not sure if there is anything in there about the community aspects of it, though.

This is not exactly true. Here are similar statistics ("What is the greatest danger in the 21st century?") from an astronomy forum:

  • Nuclear war - 19.8%
  • Resources depletion - 10%
  • Nothing, there is no threat - 9.4%
  • Asteroid - 8.4%
  • Lack of interest for life - 6.3%
  • Overpopulation - 5.5%
  • AI - 4.5%
  • Simulation shutdown - 3.9%
  • Solar flare - 3.7%
  • Grey goo - 2.4%
  • Biological weapons - 3.5%
  • Supervolcano - 2.6%

(The rest goes to "other".)