Many people on Less Wrong believe reducing existential risk is one of the most important causes. Most arguments to this effect point out the horrible consequences: everyone now living would die (or face something even worse). The picture becomes bleaker still if we also consider future generations. Such an argument, as spelt out in Nick Bostrom's latest paper on the topic, for instance, should strike many consequentialists as persuasive. But of course, not everyone's a consequentialist, and on other approaches it's far from obvious that existential risk should come out on top. Might it be worth spending some more time investigating arguments for existential risk reduction that don't presuppose consequentialism? Of course, "non-consequentialism" is a very diverse category, and I'd be surprised if there were a single argument that covered all its members.

 


Most philosophers who are non-consequentialists are pluralists who think consequences are very important, so they can still use standard arguments to support the idea that reducing x-risk is important. A lot of non-consequentialism is about moral prohibitions rather than prescriptions, so I suspect most of it would have little to say about altruistic considerations. And of course a lot of it is loose, vague, and indeterminate, so it would be hard to draw out any comparative claims anyway.

> Might it be worth spending some more time investigating arguments for existential risk reduction that don't presuppose consequentialism?

Most non-consequentialists are not indifferent to consequences. For example, they might believe in punishing drunk drivers irrespective of whether they run into someone - but if they drive drunk and then actually kill someone, that is still highly relevant information.

Egoists are more of a problem from the perspective of this cause, I believe.

> Egoists are more of a problem from the perspective of this cause, I believe.

That's an inferential step further, but they could be swayed by the prospect of a very, very long life. It's a really long shot, but existential risks are a barrier to personal immortality.

Sure - egoists assign some value to avoiding the end of the world.

For them, it isn't billions of times worse than all their friends and relatives dying, though.

Smaller utilities mean that the "tiny chance times huge utility" sums don't have the same results as for utilitarians.

This results in disagreements over policy issues. For instance, an egoist might regard a utilitarian organisation - like the Singularity Institute - gaining power as a bad thing, since they plainly have such a different set of values. The utilitarians would be willing to gamble on small chances of a huge utility, while the egoist might regard the huge utility as illusory.

This is a problem because (I claim) the actions of most people more closely approximate those of egoists than utilitarians - since they were built by natural selection to value their own inclusive fitness.

The Singularity Institute is a kind of utilitarian club - where utilitarians club together in an attempt to steal the future, against practically everyone else's wishes.

> Smaller utilities mean that the "tiny chance times huge utility" sums don't have the same results as for utilitarians.

Beware Pascal's wager. Also worth noting: Eliezer himself doesn't gamble on a small probability. But maybe you were talking about the difference the egoist could make? If so, I agree it amounts to a much smaller probability.

On the other hand, I think the prospect of living a few aeons represents by itself a huge utility, even for an egoist. It might still be worth a long shot.

If an example of where there is a difference would help, consider these two possibilities:

  • 1% of the population takes over the universe;
  • everyone is obliterated (99% chance) - or "everyone" takes over the universe (1% chance).

To an egoist those two possibilities look about equally bad.

To those whose main concern is existential risk, the second option looks a lot worse.
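
To make the arithmetic behind this concrete, here is a toy sketch (the numbers are entirely made up, and it assumes the egoist cares only about their own survival and would have a 1% chance of being among the survivors in the first scenario):

```python
# Toy expected-value comparison of the two scenarios above.
# All numbers are illustrative assumptions, not estimates.

FUTURE_VALUE = 1e30   # stand-in for the value of "taking over the universe"
                      # on an aggregative (utilitarian-style) accounting

# Scenario A: 1% of the population takes over the universe, with certainty.
egoist_a = 0.01 * 1.0                 # 1% chance of personally being in that 1%
aggregate_a = 1.0 * FUTURE_VALUE      # civilisation's future is secured for sure

# Scenario B: 99% chance everyone is obliterated, 1% chance everyone wins.
egoist_b = 0.01 * 1.0                 # 1% chance of personal survival
aggregate_b = 0.01 * FUTURE_VALUE     # 99% chance the whole future is lost

print(egoist_a, egoist_b)        # 0.01 vs 0.01  -> roughly a wash for the egoist
print(aggregate_a, aggregate_b)  # 1e30 vs 1e28  -> B is ~100x worse on the aggregate view
```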


I would call myself more of an egoist, and I would say the first possibility looks really good and the second possibility looks pretty bad. I of course assume that I am part of the 1%.

As an egoist myself, the prospect of a very very long life would push me to care less about long term existential risk and care more about increasing the odds of that prospect of a long life for me and mine in particular.

Having no prospect of an increased life span would make me more likely to care about existential risk. If there's not much I can do to increase my lifespan, the question becomes how to spend that time. Spending it saving the world has some appeal, particularly if I can get paid for it.

I think the original post is mistakenly conflating consequentialism and utilitarianism. Consequentialism only indicates you care about consequences - it doesn't indicate whose consequences you care about. It certainly doesn't make you a utilitarian, or a utilitarian for future beings either.

Oh, I wouldn't advise you to do something about existential risks first. But once you're signed up for cryonics and are doing your best to live a healthy, safe, and happy life, the only lever left is a safer society. That means caring about a range of catastrophic and existential risks.

I agree however that at that point, you hit diminishing returns.

Even if I've done all I can directly for my own health, until we reach longevity escape velocity, pushing longevity technology and supporting any direct (computers, genetic engineering) or indirect (cognitive enhancement, productivity enhancements) technologies would seem to give more egoistic and utilitarian bang for the buck, at least if you're focused on the utility of actually existing people.

Hmm, that's a tough call. I note however that at that point, where your marginal dollar goes is more a matter of a cost-benefit calculation than a real difference in preferences (I also mostly care about currently existing people).

The question is, which will maximize life expectancy? If you estimate that existential risks are sufficiently near and high, you would reduce them. If they are sufficiently far and low, you'd go for life extension first.

I reckon it depends on a range of personal factors, not least of which is your own age. You may very well estimate that if you were not an egoist, you'd go for existential risk reduction, but that maximizing your own life expectancy calls for life extension. Even then, that shouldn't be a big problem for altruists, because at that point you're doing good for everyone anyway.
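
For what it's worth, here's a back-of-the-envelope sketch of that trade-off (the hazard rates are purely hypothetical; the only point is that which intervention wins depends on how near and high you think the risks are):

```python
# Expected remaining lifespan when each year you must survive both ordinary
# mortality and an existential catastrophe. Compare halving x-risk against
# halving personal mortality (a crude stand-in for life-extension progress).
# Both hazard rates below are made up for illustration.

def expected_years(annual_mortality, annual_xrisk, horizon=2000):
    survival, total = 1.0, 0.0
    for _ in range(horizon):
        survival *= (1 - annual_mortality) * (1 - annual_xrisk)
        total += survival
    return total

base            = expected_years(annual_mortality=0.02, annual_xrisk=0.002)
halve_xrisk     = expected_years(annual_mortality=0.02, annual_xrisk=0.001)
halve_mortality = expected_years(annual_mortality=0.01, annual_xrisk=0.002)

print(round(base, 1), round(halve_xrisk, 1), round(halve_mortality, 1))
# With these particular numbers, life extension buys far more expected years;
# crank the annual x-risk up high enough and the ranking flips.
```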

Michael Sandel has made communitarian arguments for concern with existential risk, along the lines of "the loss of the chain of generations, the dreams of progress, the scientific and artistic and religious and cultural traditions, etc, have value beyond that accruing to individual lives" in his book Public Philosophy.

Derek Parfit briefly considers a similar argument in the concluding section of Reasons and Persons. On this argument, "what matters are what Sidgwick called the ‘ideal goods’—the Sciences, the Arts, and moral progress, or the continued advance towards a wholly just world-wide community. The destruction of mankind would prevent further achievements of these three kinds. This would be extremely bad because what matters most would be the highest achievements of these kinds, and these highest achievements would come in future centuries." (p. 454) However, although not strictly utilitarian, the argument is still broadly consequentialist.

That sounds consequentialist to me.

Sandel thinks in terms of loyalties and obligations, and is not coming from a universalist aggregative person-centered axiological perspective. He often makes arguments against performing various welfare-enhancing courses of action so as to signal/affirm communal loyalty. He's not of the utilitarian school, at least, and hard to capture precisely as a consequentialist.

> Existential risk for non-consequentialists

"Not caring about existential risk is bad"?

For folks who really, actually, truly think that there's a difference between murder and not preventing a death when the means and knowledge to do so are available, I don't think there's anything you can say, since a logical consequence of this belief is that there's no moral imperative to do anything when an asteroid is about to kill everyone.

But for those who merely profess this belief for in-group reasons, you can pull the conversation sideways, talking about how you'd probably want to prevent a small rock from falling on your head, and how if there was a big rock falling on a bunch of people's heads, you'd want to work together to prevent it from falling on your head as well as theirs, and if there was a really big rock, you'd do the same, and if there were an asteroid, you'd want to work together with the whole world - only the asteroid is metaphorical.

Show them that their own belief system already prioritizes existential risk.

> For folks who really, actually, truly think that there's a difference between murder and not preventing a death when the means and knowledge to do so are available, I don't think there's anything you can say, since a logical consequence of this belief is that there's no moral imperative to do anything when an asteroid is about to kill everyone.

Doesn't follow. Even if the two are different, this doesn't necessarily mean that there's no moral imperative regarding the latter; it may just be a different or lesser moral imperative.

You're right. My mistake.

> But of course, not everyone's a consequentialist, and on other approaches it's far from obvious that existential risk should come out on top.

Indeed, it strikes me as non-controversial that it won't come out on top.

This seems right. Even those non-consequentialists who argue for keeping the human race in existence usually do so based on moral reasons and principles that are comparable in importance to all sorts of mundane reasons and could easily be trumped. Here's John Leslie writing about such non-consequentialists:

> Having thereby dismissed utilitarianism with suspicious ease, they find all manner of curious reasons for trying to produce benefits for posterity: benefits which anyone favouring mere maximization of benefits would supposedly be uninterested in producing! Some base their concern for future generations mainly on the need to respect the wishes of the dead. Others emphasize that love for one's grandchildren can be logically linked to a wish that they too should have the joy of having grandchildren.

(The End of the World, p. 184 of the paperback edition)

Did you mean "As a consequentialist, I don't think existential risk is the most important thing." or 'm not a consequentialist, and I don't think it comes out on top."

I mean "If I were a non-consequentialist I don't see why I would care much about existential risk".

This should depend very much on the system in question. Some people have already given examples of possible philosophical stances, but it is worth noting that even some religions can plausibly see existential risk as a problem. For example, in most forms of Judaism, saving lives is in general important and there's also an injunction that one cannot rely on divine intervention for any purposes. So even if someone believes that there's a nice deity who will make sure that existential risks don't occur, there's still an obligation to take active steps in that regard. This dates back to ideas which are directly in the Bible, such as the exchange between Mordechai and Esther in Chapter 4 of the eponymous book.

Similar remarks can probably be made at least for the other Abrahamic religions in various forms, although I don't have the knowledge base and time to flesh out the details.

I'm still waiting for a method for interpersonal utility comparison. Until then, I'll stick with non-cognitivism and leave utilitarianism to the metaphysicians.

> I'm still waiting for a method for interpersonal utility comparison.

I suspect you already have one. In dealing with others, I think most people use interpersonal utility comparisons all the time as a factor in their own decision making.

So, to be clear, I think you have one as well, and are really just waiting for those who have another interpersonal utility weighting, which they claim is "true", to rationally support their supposed truth. I've been waiting for that one. I've harassed a number of people over at Sam Harris's website asking about it.

> Until then, I'll stick with non-cognitivism and leave utilitarianism to the metaphysicians.

I don't think non-cognitivism is the whole story on morality. People make moral distinctions, and some part of that is cognitively mediated, even when not verbally mediated. One cognitively recognizes a pattern one has a non-cognitive response to.

> I suspect you already have one. In dealing with others, I think most people use interpersonal utility comparisons all the time as a factor in their own decision making.

> So, to be clear, I think you have one as well, and are really just waiting for those who have another interpersonal utility weighting, which they claim is "true", to rationally support their supposed truth. I've been waiting for that one. I've harassed a number of people over at Sam Harris's website asking about it.

It sounds like we are mostly in agreement, but there is an important difference between me getting utility from other people getting utility (for instance, I would prefer to occupy a world where my friend is having a pleasant experience to one in which he is not, all else equal) and me performing arithmetic using values from different people's utility functions and obtaining a result that would be as unobjectionable as the result of adding or subtracting apples. In other words, me engaging in a kind of "interpersonal utility comparison" only really tells us about my preferences, not about a uniquely correct calculation of "utility-stuff" that tells us about everyone's (combined) preferences.
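
To illustrate why the arithmetic isn't like adding apples, here's a minimal sketch (Alice, Bob, and all the numbers are hypothetical): an arbitrary positive rescaling of one person's utilities represents exactly the same preferences, yet it can flip which outcome "wins" a naive sum.

```python
# Why summing utilities across people isn't like adding apples:
# rescaling one person's utility numbers (an equally valid representation
# of the same preferences) can flip the ranking of outcomes under the sum.

alice = {"X": 10.0, "Y": 0.0}   # Alice prefers outcome X
bob   = {"X": 0.0,  "Y": 1.0}   # Bob prefers outcome Y

def naive_sum(bob_scale=1.0):
    # bob_scale re-expresses Bob's utilities in different "units";
    # Bob's preference ordering is unchanged by any positive scale.
    return {o: alice[o] + bob_scale * bob[o] for o in ("X", "Y")}

print(naive_sum(1.0))    # {'X': 10.0, 'Y': 1.0}  -> X comes out on top
print(naive_sum(20.0))   # {'X': 10.0, 'Y': 20.0} -> now Y comes out on top
```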

> I don't think non-cognitivism is the whole story on morality. People make moral distinctions, and some part of that is cognitively mediated, even when not verbally mediated. One cognitively recognizes a pattern one has a non-cognitive response to.

Nor do I, which is why I am still considering other hypotheses (for instance, virtue ethics, egoism, and contractarianism, all of which seem much more likely to be true than utilitarianism).

Most political systems are implicitly based on some kind of interpersonal utility comparison. They usually count the wishes of everyone over the age of 18 as being of equal value, and say that children and animal utilities only count by proxy.

Your second sentence is obviously false. The political system I happen to live under gives the president's wishes considerably more weight than my own (even though both of us are over the age of 18), and it would be no different in just about any other extant political system.

If you live in the United States, I think the president is supposed to be a voice for the wishes of all the people who elected him. I'm not saying this actually works in practice, but at least in theory the president is supposed to speak for the citizenry in governmental affairs, not always get the last piece of cake at parties.

Yes, I am familiar with that theory (I did attend kindergarten after all). I also know such a theory is little more than a fairytale. I was commenting on the truth value of a particular sentence, not discussing how things are "supposed" to be (whatever that means).

Sure - there's also vote-rigging and other phenomena. The "everyone has an equal say" is an ideal, not a practical reality.


Consider me not impressed by "most political systems".

[This comment is no longer endorsed by its author]