S-risks are risks of infinite global suffering in the future. The Foundational Research Institute has suggested that they are the most serious class of existential risks, even more serious than painless human extinction. So it is time to explore the types of s-risks and what to do about them.

Possible causes and types of s-risks:
"Normal Level" - some forms of extreme global suffering exist now, but we ignore them:
1. Aging, loss of loved ones, moral illness, infinite sufferings, dying, death and non-existence - for almost everyone, because humans are mortal
2. Nature as a place of suffering, where animals constantly eat each other. Evolution as superintelligence, which created suffering and using it for its own advance.

Colossal level:
1. Quantum immortality creates bad immortality - I survive as an old but always-dying person, because of weird observational selection.
2. AI goes wrong: 2.1 Roko's basilisk; 2.2 an error in programming; 2.3 a hacker's joke; 2.4 indexical blackmail.
3. Two AIs go to war with each other, and one of them is benevolent to humans, so the other AI tortures humans to gain a bargaining position in a future deal.
4. X-risks that include infinite suffering for everyone - a natural pandemic, a cancer epidemic, etc.
5. Possible worlds (in Lewis's terms) containing qualia of infinite suffering. For any human, a possible world with his infinite suffering exists. Modal realism makes such worlds real.

Ways to fight s-risks:
1. Ignore them by confining personal identity to the present day
2. A benevolent AI fights a "measure war" to create infinitely more copies of happy beings, as well as trajectories in the space of possible minds leading from suffering to happiness

Types of most intensive sufferings:

Qualia based, listed from bad to worse:
1. Eternal, but bearable in each moment, suffering (anhedonia)
2. Unbearable suffering - suffering to which death is the preferable outcome (cancer, death by fire, death by hanging). However, as Marcus Aurelius said: "Unbearable pain kills. If it does not kill, it is bearable."
3. Infinite suffering - a quale of infinite pain, such that duration doesn't matter (it is not known whether this exists)
4. Infinitely growing eternal suffering, created by constantly upgrading the suffering subject (a hypothetical type of suffering created by a malevolent superintelligence)

Value based s-risks:
1. The most violent action against one's core values: e.g. "brutal murder of children"
2. Meaninglessness, acute existential terror, or derealisation with depression (Nabokov's short story "Terror") - an incurable and logically proven understanding of the meaninglessness of life
3. Death and non-existence as forms of counter-value suffering.

Time-based:
1. Infinite time without happiness.

Subjects, who may suffer from s-risks:

1. Anyone as an individual person
2. The currently living human population
3. Future generations of humans
4. Sapient beings
5. Animals
6. Computers, neural nets with reinforcement learning, robots, and AIs
7. Aliens
8. Unembodied suffering in stones, Boltzmann brains, pure qualia, etc.

My position

It is important to prevent s-risks, but not by increasing the probability of human extinction, as that would mean we have already fallen victim to blackmail by non-existent things.

Also, s-risk is already the default outcome for anyone personally (so it is global), because of inevitable aging and death (and maybe bad quantum immortality).

People prefer the illusory certainty of non-existence to the hypothetical possibility of infinite suffering. But nothing is certain after death.

In the same way, overestimating animal suffering results in underestimating human suffering and the risks of human extinction. But animals suffer more in the forests than on animal farms, where they are fed every day, get basic healthcare, and there are no predators to eat them alive.

The hope that we will prevent future infinite suffering by stopping progress or committing suicide on the personal or civilizational level is wrong. It will not help animals. It will not help with suffering in possible worlds. It will not even prevent suffering after death, if quantum immortality in some form is true.

But the fear of infinite suffering makes us vulnerable to any type of "acausal" blackmail. The only way to fight suffering in possible worlds is to create an infinitely larger possible world filled with happiness.


34 comments

What if the solution to the Fermi paradox is that S-risks cause all sufficiently advanced civilizations to destroy themselves and leave no trace that could be used to resurrect and then torture them?

Surely it can't be the total solution of the FP, as it doesn't explain the coordination problem, but the thought is terrifying.

I finally read your article about Fermi paradox and would like to say that I mostly agree with you and have been thinking in the same directions.

A couple of comments:

  1. To escape the wrong ways of preventing x-risks tried by other civilizations, we could use a random strategy. We'll throw a die and choose some strange path. In that case, we have a chance NOT to repeat the strategies that failed in other civilizations, even if they did the same thing - threw a die.

  2. The idea of natural selection of universes - Lee Smolin's fecund universes, replicating through black holes - "nicely" combines with the idea that most civilisations go extinct as a result of a physics experiment. We create a special black hole at the LHC that creates many universes like our own. In other words, our universe is fine-tuned in such a way that civilisations self-destroy by creating many new universes. No matter how improbable such a construction is, it will dominate because of its ability to replicate.

  3. It is important to send signals to other civilisations, especially if we know how we are going to fail, to tell them which dangerous paths should be avoided. We could attach to this message our DNA and some cultural information - and a request to resurrect us. Also, we should do it not in the form of radio signals, but as a data hoard on the Moon, as most probably the next civilization will appear again on Earth. If aliens arrive in the Solar system, they will also be able to find the message. I recently wrote an article about it, which is still a draft.

but as a data hoard on the Moon, as most probably the next civilization will appear again on Earth.

Excellent point.

We create a special black hole at the LHC that creates many universes like our own. In other words, our universe is fine-tuned in such a way that civilisations self-destroy by creating many new universes

I'm not sure. Wouldn't new universes be mostly created by advanced civilizations trying to create new universes? I think your idea works only if creating a new universe requires destroying an old one.

John Smart has looked in similar directions:

"To recap, Smolin’s CNS hypothesis proposes that our universe’s developmental constants are fine tuned for the fecund replication of complex universes via black holes. The developmental singularity hypothesis (to come), proposes that our universe’s developmental constants are fine tuned for the replication of universes like ours via intelligent black holes, an even more specific and falsifiable claim.” http://s3.amazonaws.com/academia.edu.documents/1141345/1b58jhif9cm1ho8.pdf?AWSAccessKeyId=AKIAIWOWYYGZ2Y53UL3A&Expires=1499728293&Signature=j43UmsNh64pY4u748k4Fyb%2FmLjM%3D&response-content-disposition=inline%3B%20filename%3DEvo_Devo_Universe_A_Framework_for_Specul.pdf

However, it still does not explain the non-existence of other civilisations, IMHO. Maybe the ultimate computer is not a black hole, but a wave of false vacuum decay, similar to the intelligence explosion wave, but real on the physical level?

Because the idea simultaneously explains the Fermi paradox: every civilisation destroys the whole visible universe by creating a new universe, so we are the first and last one.

It explains the Fermi paradox, as well as the fine-tuning of our universe for black holes AND for the appearance of civilizations.

However, there are still many holes in the idea: some civilisations may not build an LHC, and the fecund-universes theory suggests that many new universes appear from the first one, whereas in the case of false vacuum decay caused by the LHC only one new universe appears, and with different properties. And if there are many black holes from the LHC, they will not destroy the universe, or even Earth.

Maybe a micro black hole could do its bad job slowly: for 40 years it sits in the center of the Earth, slowly growing, until some threshold at which it starts to grow quickly. In that case many such black holes would appear before the catastrophe. But it is still not a universal solution, as some civilizations may not build colliders.

I also don't understand that universe-replication scenario. Maybe it would work if there were a type of black-hole-like object that both created many universes and destroyed all stars in its vicinity (generally destroying its creators).

It is called false vacuum decay, but see my comment above.

FRI has focused on a few s-risks that you didn't mention (perhaps because they are not "colossal" enough):

Spread of wild animals (Related to your #2, "Normal Level") - "Humans may colonize other planets, spreading suffering-filled animal life via terraforming. Some humans may use their resources to seed life throughout the galaxy, which some sadly consider a moral imperative."

A possible compromise between the pro-panspermia and suffering-focused groups would be directed panspermia based on gradients of bliss (if Pearce's abolitionist project is possible).

Michael Dello-Iacovo also wrote a paper on the possible spread of wild animal suffering through the cosmos.

Sentient simulations: "Given astronomical computing power, post-humans may run various kinds of simulations. These sims may include many copies of wild-animal life, most of which dies painfully shortly after being born. For example, a superintelligence aiming to explore the distribution of extraterrestrials of different sorts might run vast numbers of simulations of evolution on various kinds of planets. Moreover, scientists might run even larger numbers of simulations of organisms-that-might-have-been, exploring the space of minds. They may simulate decillions of reinforcement learners that are sufficiently self-aware as to feel what we consider conscious pain."

I don't know whether such simulations would experience net-positive or net-negative welfare according to classical utilitarian standards, but it could very well cause a lot of suffering. There may also be evolutionary reasons for having more pain than pleasure, which could apply to the kinds of beings that would be simulated.

Suffering subroutines: "It could be that certain algorithms (say, reinforcement agents) are very useful in performing complex machine-learning computations that need to be run at massive scale by advanced AI. These subroutines might be sufficiently similar to the pain programs in our own brains that we consider them to actually suffer. But profit and power may take precedence over pity, so these subroutines may be used widely throughout the AI's Matrioshka brains."

PETRL.org advocates the idea that such "voiceless" algorithms deserve moral consideration. Tomasik argues that even some current-day reinforcement learners may be sentient. These claims rely on controversial positions about the philosophy of mind, but it may still be worth erring on the safe side.
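To make concrete what kind of algorithm the "suffering subroutines" debate refers to, here is a minimal sketch (my own illustration, not from Tomasik or PETRL) of a tabular Q-learning agent trained purely by negative reward. Whether such a punishment-driven learner is morally relevant is exactly the contested question; the code itself settles nothing.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tiny chain environment: the agent starts at state 0 and every step
    yields reward -1 until it reaches the terminal state, so learning is
    driven entirely by 'punishment' signals."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)          # explore
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])  # exploit
            # action 1 moves forward, action 0 stays put; both cost -1
            s_next = min(s + 1, n_states - 1) if a == 1 else s
            reward = -1.0
            # standard Q-learning update
            q[s][a] += alpha * (reward + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

q = q_learning()
# After training, moving forward is valued above staying put in the start
# state, since it ends the stream of negative rewards sooner.
print(q[0][1] > q[0][0])
```

The agent never receives anything resembling pleasure; it only learns which actions minimize accumulated negative signal, which is the structural analogy to pain that the debate turns on.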

Brian Tomasik also mentions lab universes as a potential source of infinite suffering (but also infinite happiness? How do we deal with infinite utilities? Although, if you give even some small nonzero moral weight to negative utilitarianism, then you may want to err on the side of not creating lab universes).

BTW, I don't understand how non-existence could be considered an s-risk, except insofar as existing people may have a preference to continue living and we define suffering as preference frustration. So while you can argue that death is a form of suffering, it does not really make sense to say that "never having existed" is a form of suffering. I think if you broaden the term that much, it loses most of its value.

Spread of wild animals

This implies that for wild animals life is not worth living. So just kill them all, as quickly as possible?

  1. No, it doesn't necessarily imply that. Suppose wild animals have net-positive aggregate welfare, but a subset of these lives contains extreme involuntary suffering. Spreading this throughout the universe would still be considered an s-risk according to FRI's definition: "Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event leading to a future containing 10^35 happy individuals and 10^25 unhappy ones, would constitute an s-risk, but not an 'x-risk'."

  2. It may actually be the case that wild animals have net-negative welfare. The economist Yew-Kwang Ng has argued for this position. Brian Tomasik takes a similar view, and even endorses your attempted reductio (Edit: Ng has explicitly rejected it at this point). Michael Plant has written several counter-arguments to the Ng/Tomasik view. There doesn't seem to be any way to resolve this at present. There may also be other ways to reduce wild animal suffering besides destroying nature (e.g., see Pearce's abolitionist project).
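FRI's illustrative numbers in point 1 can be sanity-checked with trivial arithmetic: the unhappy individuals are a vanishing fraction of the population, yet their absolute number is astronomical, which is why such a future counts as an s-risk without being an x-risk.

```python
# FRI's illustrative figures: 10^35 happy vs 10^25 unhappy individuals.
happy = 10**35
unhappy = 10**25

ratio = happy // unhappy             # happy individuals per unhappy one
fraction_unhappy = unhappy / (happy + unhappy)  # unhappy share of population

print(ratio)             # 10 billion happy per unhappy individual
print(fraction_unhappy)  # roughly 1e-10 of the population
```

So aggregate welfare could be overwhelmingly positive while the absolute count of sufferers (10^25) still dwarfs every historical source of suffering, which is the point of FRI's definition.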

your attempted reductio

You have two choices: ad absurdum and "Brian Tomasik takes a similar view, and even endorses".

Pick one :-)

Someone once proposed a possible s-risk:

If the suffering of hypothetical entities is morally relevant, then Brian Tomasik’s electron thought experiment was a crime of unimaginable proportions. In fact, it may well be that Tomasiks spontaneously forming in empty space outweigh every “conventional” source of suffering in the Universe. I call this the Boltzmann Brian problem.

Well then, how many resources (e.g. time and mental energy) do you feel we should spend entertaining absurd (note: no quotes) notions?

Are you referring to empirical or normative claims? I don't consider the idea that wild animals experience net suffering absurd, although the idea that habitat destruction is morally beneficial is counterintuitive to most people. I think the idea that we should reduce the chance of spreading extreme involuntary suffering, including wild-animal suffering, throughout the universe is much less counterintuitive, and is consistent with a wide range of moral views.

Since I give significant (but not 100%) weight to "the overwhelming importance of the far future" (Nick Beckstead), and the future is always absurd, we should probably spend significant time engaging with ideas that seem intuitively absurd. I don't think opposition to spreading wild-animal suffering is one of these, although things like suffering subroutines and some of the ideas mentioned in the OP (e.g., quantum immortality, multiverses) might be. Some people consider the intelligence explosion absurd, but I still think it has some non-negligible plausibility.

I don't see much in the way of empirical claims here (these would require a hard definition of "suffering" and falsifiability to start with), so I guess I'm talking about counterintuitive normative claims.

I think the idea that we should reduce the chance of spreading extreme involuntary suffering throughout the universe is much less counterintuitive

The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.

we should probably spend significant time engaging with ideas that seem intuitively absurd

So how do you pick absurd ideas to engage with? There are a LOT of them.

I don't see much in the way of empirical claims here (these would require a hard definition of "suffering" and falsifiability to start with), so I guess I'm talking about counterintuitive normative claims.

Fair point. This is one problem I have had with moral realist utilitarianism. Although I think it may still be the case that sentience and suffering are objective, just not (currently) measurable. Regardless, I don't think the claim of net-suffering in nature is all that absurd.

The claim is a bit different: that we should not spread (non-human) life through the galaxy. This is counterintuitive.

The claim I made is that spreading non-human life throughout the galaxy constitutes an s-risk, i.e. it could drastically increase the total amount of suffering. Any plausible moral view would say that s-risks are generally bad, but it is not necessarily the case that suffering can never be outweighed by positive value. E.g., if one is not something like a negative utilitarian, then it could still be permissible to spread non-human life throughout the galaxy, as long as you take action to ensure that the benefits outweigh the harms, however you define that: perhaps genetically altering the animals to reduce infant mortality rates or their capacity to experience suffering, having a singleton prevent suffering from re-emerging through Darwinian processes, etc.

So how do you pick absurd ideas to engage with? There are a LOT of them.

This is a hard problem in practice, and I don't claim to know the solution. Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information. Then you would probably transition from an exploration stage to an exploitation stage (see the "multi-armed bandit").
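The explore-then-exploit transition mentioned above can be sketched with the standard epsilon-greedy bandit algorithm (my own illustration; the arm values and parameters are arbitrary). Each arm stands in for an idea; pulling it yields noisy evidence of its value.

```python
import random

def epsilon_greedy(true_values, pulls=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy multi-armed bandit: with probability epsilon pick a
    random arm (explore), otherwise pick the arm with the best estimated
    value so far (exploit)."""
    rng = random.Random(seed)
    counts = [0] * len(true_values)
    estimates = [0.0] * len(true_values)
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_values))                        # explore
        else:
            arm = max(range(len(true_values)), key=lambda a: estimates[a])  # exploit
        reward = true_values[arm] + rng.gauss(0, 1)   # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return counts, estimates

counts, estimates = epsilon_greedy([0.1, 0.5, 0.9])
# Over time, nearly all pulls concentrate on the best arm (index 2),
# while a small epsilon of effort keeps probing the alternatives.
print(counts.index(max(counts)))
```

The analogy to research prioritization is loose but useful: a fixed small fraction of effort goes to long-shot ideas, and the rest goes to whatever currently looks most valuable.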

Ideally, you would prioritize exploring ideas that are decision-relevant and where further research has high Value of Information.

And does the exploration of the consequences of spreading non-human life throughout the galaxy qualify? It doesn't look like that to me; it seems like you'd be better off figuring out whether living at intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...

Yes, I think it does because it's a plausible scenario and most plausible (IMO) ethical views say that causing non-human suffering is bad. Further exploration of the probability of such scenarios could influence my EA cause priorities, donation targets, and/or general worldview of the future.

seems like you'll be better off figuring out whether living on intersections of ley lines is beneficial, or maybe whether ghosts have many secrets to tell you...

Those have very low prior probabilities and low decision-relevance to me.

a plausible scenario ... [vs] ... very low prior probabilities

Aren't we talking about picking which absurd ideas to engage with?

You are doing some motte and bailey juggling:

Motte: This is an absurd idea which we engage with because it's worth engaging with absurd ideas.

Bailey: This is an important plausible scenario which we need to be concerned about.

I believe I already told you that I don't consider "spreading wild animal suffering" to be absurd; it's a plausible scenario. What may be intuitively absurd is the claim that "destroying nature is a good thing" -- which is not necessarily the same as the claim that "spreading wild animal suffering to new realms is bad, or ought to be minimized". (And there are possible interventions to reduce non-human suffering conditional on spreading non-human life. E.g. "value spreading" is often discussed in the EA community.)

Anyway, I'm done with this conversation for now as I believe other activities have higher EV.

Thanks for adding ideas; I will add them to the next version of the map.

I think that the way we explore s-risks should be beneficial to our future. For that, we need work on s-risks not to exclude work on x-risks, or to create them. However, the lines of reasoning in which life in general is net-negative, or human suffering is less important than animal suffering, or running simulations or reinforcement-learning algorithms is regarded as mindcrime - are themselves able to create a dystopian future without any measurable reduction of suffering.

To balance x-risks and s-risks, we need to understand, in a provable way, that non-existence is also a form of suffering.

First, we need to define suffering based not on pain, but on values and choices. That is measurable and accords with common sense. Some masochists may love pain, and in some cases pain is felt but not regarded as bad.

However, if we define suffering only through pain, we have problems:

1) Non-existence becomes preferable in many situations about which common sense says it should not be preferable.

2) Wireheading becomes a good solution.

3) Suffering becomes unmeasurable, as we can't measure others' qualia.

4) We may start to decide about others' preferences against their will, based on our (false) extrapolation of them.

So defining suffering through values and choices will help us reach more consistent results. We could ask a person about his worst possible outcome, and in many cases it will not be only pain. In the case of animals we often can't ask, but we could run a thought experiment: would they prefer to live?

Such a thought experiment helps us establish that non-existence is a form of suffering for most actually existing humans and animals (but not for minds in general). Imagine that my cat died. Is it suffering tomorrow? I could imagine that it were alive tomorrow and measure two things: 1) its pain level; 2) its readiness to protect its life. The fact that it would protect its life if it were alive is an argument that non-existence is, for it, a form of suffering, and this can be measured.

An interesting fact is that in cultures like India, where reincarnation is just a popular belief, the subjective measure of suffering is lower. A social (shared) stoicism in the face of death makes it easier to bear.

In the West, another form of suffering has emerged that I would call the Sword of Damocles.

Also worth mentioning is psychological suffering, particularly persecutory delusion and persecution complex.

An interesting fact is that in cultures like India, where reincarnation is just a popular belief, the subjective measure of suffering is lower.

That's an interesting claim. What evidence do you have for that claim?

I was wrong; it's actually simple stoicism, not real alleviation of pain.

Congratulations on updating.

Will it be feasible in the next decade or so to actually do real research into how to make sure AI systems don't instantiate anything with any non-negligible level of sentience?

If that question is for me: I think not. It may be simpler to make a bad AI able to kill anyone in the next decade than to solve the nature of consciousness and thus learn whether any AI actually has subjective experiences. Two years ago I was working on a plan for research into the nature of consciousness, but I later mostly abandoned it, as I think it is not an urgent question.


Val:

Could you please add some links for "Hacker's joke" and "Indexical blackmail"? Both use words common enough not to yield obvious results in a Google search.

Indexical blackmail was discussed somewhere on LessWrong; the idea is that an AI in a box creates many copies of me and informs me about it, and because of this I can't be sure that I am not one of those copies, and thus I will release it from the box (or face torture with probability 999 in 1000). I can't find the link.

The idea is based on idea of the "indexical uncertainty", which is googlable, for example, here: https://books.google.ru/books?id=1mMJBAAAQBAJ&pg=PT138&lpg=PT138&dq=indexical+uncertainty&source=bl&ots=Pmy8RXDflh&sig=7RT7DQIKidIN-Q6Po6seizyNGYw&hl=en&sa=X&ved=0ahUKEwi285nQzfrUAhXjZpoKHQJ2CQM4ChDoAQgjMAA#v=onepage&q=indexical%20uncertainty&f=false

Hacker's joke - a hypothetical situation in which the first and last AI creator is just a random 15-year-old boy who wants to play with the AI by putting stupid goals into it. Nothing to google here.

Val:

I know the first one has been mentioned on this site - I've read about it plenty of times - but it was not named as such. Therefore, if you use a rare term (especially one you made up), it's advisable to also explain what it means.