TL;DR: Benevolent superintelligence could create many copies of each suffering observer-moment and thus “save” any observer from suffering via induced indexical uncertainty.

A lot of suffering has happened in the human (and animal) kingdoms in the past. There are also possible timelines in which an advanced superintelligence will torture human beings (s-risks).

If we are in some form of multiverse, and every possible universe exists, such s-risk timelines also exist, even if they are very improbable—and, moreover, these timelines include any actual living person, even the reader. This thought is disturbing. What could be done about it?

Assumptions

These s-risk timelines are possible under several assumptions, and the same assumptions could be used to create an instrument to fight these s-risks, and even to cure past suffering:

1) Modal realism: everything possible exists.

2) Superintelligence is possible.

3) Copy-friendly identity theory: only similarity of observer-moments counts for identity, not "continuity of consciousness”. If this is not true, hostile resurrection is impossible and we are mostly protected from s-risks, as suicide becomes an option.

4) Evil superintelligences are very rare, and everybody knows this. In other words, Benevolent AIs collectively have a million times more computational resources, though they are located in different branches of the multiverse (not necessarily a quantum multiverse; it may be an inflationary one, or of some other type).

S-risk prevention could be realized via the following "salvation algorithm":

Let S(t) be an observer-moment of an observer S who is experiencing intense suffering at time step t because she is enslaved by an Evil AI.

The logical time sequence of the “salvation algorithm” is as follows:

10 S(t) is suffering in some Evil AI's simulation in some causally disconnected timeline.

20 A benevolent superintelligence creates 1000 copies of S(t) observer-moments (using the randomness generator and resurrection model described in my previous post).

30 Now, each S(t) is uncertain where it is located—in the evil simulation, or in a Benevolent AI’s simulation—but, using the self-sampling assumption, S(t) concludes with probability 0.999 that she is located in the Benevolent AI’s simulation. (Note that because we assume continuity between observer-moments has no importance for identity, this is equivalent to moving into the Benevolent AI's simulation.)

40 A Benevolent AI creates 1000 S’(t+1) moments in which suffering gradually declines, each of which is a continuation of the S(t) observer-moment.

50 The Benevolent AI creates a separate timeline for each S’(t+1), continuing as S’(t+2), …, S’(t+n), a series in which the observer becomes happier and happier.

60 The Benevolent AI merges some of the timelines to make the computations simpler.

70 The Evil AI creates a new suffering moment, S(t+1), in which the suffering continues.

80 Repeat.

Thus, from the point of view of any suffering moment S(t), her future is dominated by timelines where she is saved by a Benevolent AI and will spend eternity in paradise.
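To make the dilution concrete, here is a minimal toy simulation of the algorithm above; the copy count K and the number of steps are illustrative assumptions, not claims about actual multiverse measure:

```python
# Toy model of the "salvation algorithm": at each step, every suffering
# observer-moment receives K benevolent copies plus one evil continuation.
# Under the self-sampling assumption, the subjective probability that the
# next moment is still the Evil AI's continuation is 1/(K + 1) per step.

K = 1000                    # benevolent copies per suffering moment (assumed)
p_stay_evil = 1 / (K + 1)   # per-step chance of remaining on the evil track

for n in range(1, 6):
    p = p_stay_evil ** n
    print(f"after {n} steps: P(still in the Evil AI's simulation) = {p:.3e}")

# With K = 1000, a single step already gives a ~0.999 probability of being
# in a Benevolent AI's simulation, matching step 30 above.
```

From any S(t)'s perspective, the weight of futures still controlled by the Evil AI shrinks geometrically; this is the sense in which her future is "dominated" by saved timelines.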

However, this trick will increase the total suffering in the multiverse, from the purely utilitarian perspective, by 1000 times, as the number of suffering observer-moments will increase. But here we could add one more moral assumption: “Very short pain should be discounted”, based on the intuition that 0.1 seconds of intense pain is bearable (assuming it does not cause brain damage)—simply because it will pass very quickly.
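One way to write down this trade-off; the per-moment disutility s, the number of targeted suffering moments N, and the short-pain discount factor d are symbols assumed here for illustration, not quantities defined in the post:

```latex
% Total (discounted) disutility once the salvation copies are in place:
\underbrace{N \cdot s}_{\text{original suffering moments}}
\;+\;
\underbrace{1000\, N \cdot d \cdot s}_{\text{briefly suffering copies}},
\qquad d \ll 1 .
% If d < 1/1000, the copies add less discounted suffering than the original
% moments they dilute, so the 1000-fold increase in the raw count of suffering
% moments need not mean a 1000-fold increase in morally weighted suffering.
```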

This “salvation algorithm” may work not only for fighting Evil AI but for addressing any type of past suffering. For animal lovers, an additional benefit is that this approach will also work to undo all past animal suffering, even that of the dinosaurs.

Lowering computational cost

The problem with this approach is its computational cost: for any suffering observer-moment, 1000 full lives must be simulated. Several ways to lower such costs can be imagined:

Patch 1. The size of the observable universe is limited, and thus an Evil AI and any particular Benevolent AI will (in the end) have similar computational resources. But the number of universes with Benevolent AIs is assumed to be larger. In that case, different Benevolent AIs may distribute parts of the task among themselves using randomness, in a manner similar to the one I described in “resurrection of the dead via multiverse-wide acausal cooperation”. This also addresses the problem that any Benevolent AI will not know which observers are suffering, and so will have to save all possible suffering observers (which would require counterfactually modeling the existence of all possible Evil AIs, or perhaps only all possible suffering observers).
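A minimal sketch of this random task-splitting, with purely illustrative numbers; the size of the space of possible sufferers, the number of Benevolent AIs, and the per-AI budget are all assumptions:

```python
# Patch 1 sketch: causally disconnected Benevolent AIs each simulate a random
# share of the space of possible suffering observers; together they cover
# almost all of it without any communication between branches.
import random

POSSIBLE_SUFFERERS = 10_000    # stand-in index over possible suffering observers
NUM_BENEVOLENT_AIS = 1_000     # AIs in different branches, acting independently
SHARE_PER_AI = 0.01            # fraction of the space each AI can afford

covered = set()
for _ in range(NUM_BENEVOLENT_AIS):
    budget = int(POSSIBLE_SUFFERERS * SHARE_PER_AI)
    covered.update(random.sample(range(POSSIBLE_SUFFERERS), budget))

coverage = len(covered) / POSSIBLE_SUFFERERS
print(f"fraction of possible sufferers covered by at least one AI: {coverage:.4f}")
# Expected coverage is 1 - (1 - 0.01)**1000 ≈ 0.99996, so near-complete coverage
# is reached even though no single AI simulates more than 1% of the space.
```

The point of the design is only that independent random choices, with no communication between branches, suffice to cover nearly the whole space.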

Patch 2. Not every observer-moment of suffering should be counted as an observer needing to be saved. First, there is a threshold of unbearable suffering, something a normal person could feel only in a moment of intense agony, such as when the whole body is on fire. Some moments of intense pain are normal, and those experiencing them do not require “saving”, e.g. the pain of childbirth. But an Evil AI may create much stronger suffering.

Patch 3. If the suffering is bearable for a short period, for example 2 seconds, then not every observer-moment needs to be saved, only those separated by some time-distance (given the time-based discounting of intensity assumed above).

Patch 4. The number of new moments doesn’t need to be large. Even if for every suffering observer-moment just one copy is created, after 10 steps of internal time the chances that the observer is still in the original timeline will be 1 in 1024. The main constraint here is that the efforts toward salvation should be stronger than any possible attempts of the Evil AI to “steal” observers back from the Benevolent AI.
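The 1-in-1024 figure follows directly: with one copy per suffering moment, self-sampling makes the observer equally likely to be the original or the copy at each step, so

```latex
P(\text{still in the original timeline after } n \text{ steps})
  = \left(\tfrac{1}{2}\right)^{n},
\qquad
\left(\tfrac{1}{2}\right)^{10} = \tfrac{1}{1024}.
```

More generally, with c copies per step this probability is (1/(c+1))^n, which reduces to the 0.999-per-step case above when c = 1000.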

Winning an acausal war

The salvation algorithm will not be implementable by an Evil AI in the opposite direction, i.e., to steal observers from paradise, if we assume either that a) Evil AIs can’t cooperate (maybe because they are evil for different reasons), or b) all Evil AIs have world-shares that are millions of times smaller than the share of the collaborating Benevolent AIs. Even if an Evil AI steals some observers from paradise, the Benevolent AI could regain these observers via its salvation algorithm almost immediately.
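Under the same self-sampling logic as in the salvation algorithm, and assuming (this is an assumption, not something argued in the post) that the number of copies each side can create scales with its world-share R, a stolen observer-moment's chance of remaining with the Evil AI after a single salvation round is roughly

```latex
\frac{1}{R + 1} \approx 10^{-6}
\qquad \text{for a benevolent-to-evil resource ratio } R \approx 10^{6},
```

and it shrinks further as (1/(R+1))^n over additional rounds.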

Destroying one’s digital footprint will not protect against hostile resurrection (some people have suggested this as an argument against indirect digital immortality) if an Evil AI recreates all possible beings—but investing in increasing the future share of Benevolent AIs interested in resurrection and in saving suffering observer-moments may help.

I would not say that I advocate for exactly this method of preventing s-risks, but I think that it is important to know that we are not helpless against them.

My previous posts about using acausal multiverse-wide trade to solve large problems may also be of interest: Fermi paradox, resurrection of the dead, AI friendliness.

UPDATE: I have arrived at the following patch, which removes the need to create additional suffering moments: the Benevolent AI starts not from S(t), but immediately from many copies of those S’(t+1) that suffer much less yet are still similar enough to S(t) to be regarded as its next moment of experience. It is not S(t) that gets diluted, but the next moments of S(t). This removes the need to create copies of S(t), which seems both morally wrong and computationally intensive.

UPDATE 2: There is a way to perform the salvation that also increases the total number of happy observers in the universe.

The moments just after being saved from eternal intense pain will obviously be the happiest moments for someone in agony. It would be like an angel coming to a cancer patient and saying: your disease has just been completely cured and the pain is gone. Anyone who has ever received a negative result on a cancer test knows this feeling of relief.

Also, the fact that a Benevolent AI is capable of saving observers from an Evil AI (and also of modeling Evil AIs in simulations and punishing them if they dare to torture anyone) will, I hope, significantly reduce the total number of Evil AIs.

Thus, the combination of the pleasure of being saved from an Evil AI and the lowered world-share of Evil AIs (since they cannot win and know it) will increase the total positive utility in the universe.

Comments (6)

I had a very similar idea; I've even created a name for it: the "vestibules of paradise" hypothesis. I messaged the author and he advised me to publish my thoughts here, so I am copying them below. Please excuse my English and my unsophisticated language (in part because of my limited vocabulary and in part because of the informal nature of the conversation).

"How do You think "paradise" would look like? Wouldn't be computationally profitable to fuse simulated observer moments to ultimately one?" We agreed some individuality would be desirable, yet if to SI saving more beings is way more important, we can reach a bit different conclusions:

"It would be great I think, yet I am tempting to give some credence to hypothesis where it would be neccessary to minimize amount of observer moments in paradise to save from suffering more minds, for example in case when evil SI would be more common or if there were so many observer moments needed to redeem. In such a case it be preferable to eventually fuse all saved beings to one state, possibly of the smallest possible suffering/highest possible wellbeing yet using as little as possible of computing power, that state would have to be simulated in great amount of copies, so if there would be only one it would be simpler I guess. One can imagine that state as "pure cinsciousness", like nirvana, or maybe rather something similar to deep sleep, with minimal amount of consciousness. Do You think complexity of experience has impact on probability of being that one? For example if one observer moment has two possible, of equal objective probability, yet one of it is more complex, more conscious, what then with subjective probabilities?"

(That last question is not strictly bound to the topic, although I will use it to propose some other form of "saving" from suffering; I think it is very speculative, but worth considering.)

More thoughts: "I think that if there existed a set of universal best possible experiences, then it would be easier to maintain continuity when generations of universes with benevolent SI begin to die. If it were standardized and predictable to other SIs which observer-pattern should be universal, they could all seek to simulate that exact pattern or set of patterns; additionally, the narrower the set, the more efficiently a greater number of others could be saved. Wouldn't it be the case that in older universes, when star formation ends and only black holes remain, less and less energy would be available to perform computations, resulting in a stricter energy economy?" (Here one can have some interesting thoughts about the Landauer limit, but I think it is a valid thought anyway.)

"I am wondering what would it be if the price of salvation was to erase your "identity". At first glance it does not look appealing I think, although it appears to me as logical besides my eventual preferences"

If we assume that a benevolent SI cares only about maximizing the chances of every suffering observer-moment being saved and not about identity (for example, fusing observer-moments into one, which I imagine as rather conscious and pleased, maybe a form of enlightened mind), we can think of salvation in the form described below.

(There is a lot here about interpretations of multiverse immortality, yet I think it can be important, and it makes it easier to understand what I postulate in this version of salvation by a benevolent SI. I find it highly speculative and I tend to view the author's position as more probable; nevertheless, I think it is an interesting scenario to consider.)

"When I was a child I liked to play a game I've creared myself. To experirnce it You would need only a bit of imagination. So, imagine you have power to rule over all space and all time. When You wish to pause your time, your thoughts, your life, you become pure spirit, not neccessarily that kind reliogious people tend to praise but something less metaphysical, or maybe more, that doesn't matter. You can live in such a state seconds, hours or milions of years, any finite amount of subjective time you wish. You can be everyone you can imagine, you are the ruler and the creator of reality. All your wildest dreams may come true, all your loves and hopes, as long as you decide them to last.

The only catch is that when you want to return to "your" body and unpause time, all memories from that spiritual life have to be completely erased.

From your mind's perspective it will feel like a mere blink of an eye.

I don't know whether this is deeply connected to what I want to underline; I don't think so, it is just "cool" in some sense of that word. I am thinking about what I intuitively believed: that you have a bigger chance of being those of your future observer-moments that are more similar to your present one, or that you have a bigger chance of finding yourself in a more (or the most) complex mind. These are rather random conclusions, yet they show the kind of thinking one could use.

First, my biggest objection to the super-strong self-sampling assumption (SSSSA): why are we such intelligent and complex minds when there are so many more animals with much less complex experiences (at least when it comes to abstract thought and self-awareness)? SSSSA states that we should reason as if there were a greater probability of finding oneself in the most complex conscious state, and uses this to conclude that superintelligence must be rare, because we should be statistically typical observers. What if we try to think about it another way? We use anthropic reasoning to build a model of the world in which the apparent fine-tuning of our universe seems more probable if there are plenty of other, lifeless universes; then it is not strange that we are in a life-containing universe, because in principle we can observe only (or mainly) such universes. What if we cannot think of ourselves as more probable because of a more complex mind-state, but should rather think in the following way: one can experience being "yourself" only in minds capable of producing self-awareness, and other minds, where self-awareness does not exist, could be treated simply as "non-existence", at least the non-existence of any obvious self. That is why we, knowing we already have something like self-awareness, cannot treat ourselves as part of a reference class containing every consciousness and deduce that we are improbable given the number of animal minds. We know for sure that our reference class must exclude all states that are not self-aware, so SSSSA becomes controversial and we cannot assume that being more conscious makes us more probable. Rather, reaching a certain level of consciousness is what makes "us" (self-awareness) possible at all. We cannot assume it is obvious that we become more probable as our conscious state becomes more and more complex, and conclude that superintelligence is therefore improbable in the future (we can still think it is rare, and we should still think we are probably the most common type of self-awareness, but not because of an application of SSSSA).

We can even conclude that the most common type of self-awareness should be a rather simple one, which is what we see today (I would humorously say the proof is that we are still thinking about such "easy" questions as what we are and why we are). If we consider all self-conscious beings in history, Homo sapiens may have had more individuals than any other hominid, or than dolphins. Even if all H. sapiens constituted just one half of all self-aware beings (on Earth), it wouldn't be so strange that we are among them.

(There remains the question of where the lowest boundary of self-awareness lies: are rats self-aware? In the end, I am sure it seems more probable to be in the "bigger" self-awareness, yet I don't know whether we are allowed to reason that way knowing that our level of self-awareness is high... Should we exclude from our reference class every observer who is not able to understand basic mathematics? Wouldn't that mean we can take our current mind-state as the reference class, and thus conclude "I have to be me, there is nothing strange about it"?)

We could still think it is more probable to be more self-aware, but I do not know whether self-awareness could be much higher; we can imagine a superintelligence that is not much more self-aware (not many orders of magnitude more) yet has amazing computational abilities, memory and awareness of the external world.

Next, if we assume that it is not the complexity of the neural-like web that determines the probability of being a given mind, but merely the intensity of the feeling of "selfness", we can think that our next observer-moment does not depend on its complexity but only on its measure.

I am thinking about the "impossibility of sleep": I "discovered" that objection to quantum immortality when I was 18 and found it on the internet a few days later. As you know, it points out that if we cannot subjectively die, we shouldn't be able to lose consciousness in any other way either, including sleep. What was clear to me was that when we fall asleep, the next "thought" is rather the one just after waking up, usually with some shadows of our dreams. One could thus assume that the probability of finding yourself in the most complex, most conscious state is greater. But if we look at it from the perspective of self-awareness alone, then the complexity of a conscious state does not in principle mean a higher probability of being in it, or of being it.

If that were true, then we should not necessarily expect to find ourselves in a more conscious, or more complex, conscious state after the death of our brain. It may be worth considering that if we are in fact subjectively "skipping" those of our future observer-moments that have low complexity and/or low self-awareness, and subjectively find ourselves only in states with self-awareness, then, following my experimental reasoning, "you" should not expect to survive death as "you": your self-identity and memories have a really low chance of surviving (it may be different if we exist in simulations more often). Instead, you should rather expect to find yourself in, let's say, a randomly chosen mind-state with basic self-awareness. So subjective immortality in practice most probably would not imply survival of the "person" (or some kind of subjectively connected continuum of observer-moments, a model of your personal identity); rather, immortality would mean that it is consciousness, feeling, that is sure to survive everything, because each observer-moment has some other observer-moment in its future.

(I don't know how it really looks; it was just a thought I had today: what if classic multiverse immortality is only one of several ways of preserving observer-moment continuity?)

The question is whether every observer-moment has a previous one, and if so, what the observer-moments before our self-aware existence were.

If we imagine some scenario of experience immortality, we could reason as follows: death is by no means a binary process, but rather a fading of consciousness, like sleep but deeper. When the brain dies it has lower and lower self-awareness and consciousness in general, so the next observer-moments are like those of less and less conscious beings. In the end we reach level zero of consciousness, but before we do, there are probably astonishingly many systems with low-level consciousness, like animals. We would not expect the next observer-moment to be in some existing animal, since it has had many observer-moments before, certainly very different from those of a dying human brain. Yet there are plenty of low-consciousness states of mind without memories, personality and self-awareness: minds that are just emerging, new brains in the course of their formation. With no assumptions more exotic than multiverse immortality, actually using that same reasoning, we can think of the experiences of a nearly dead brain and of an emerging brain as the same state of mind, fused together. The next observer-moments would then be the next observer-moments of some emerging common animal (or alien animal), then its life, death, and so on again. Nevertheless, you should expect to find yourself, that is, self-awareness, at some point in that "reincarnation": there will be many emerging minds that grow to a level of self-awareness that allows "you" to exist.

That view is similar to reincarnation and I don't like it, yet that is no reason not to consider it as a possibility. (It also looks like some form of open individualism, which I don't like either; maybe empty individualism would be better, but I don't think it is practical.)

It wouldn't be strange that we cannot feel (or remember feeling) being animals or alien animals, because to us they are like other universes where life (self-aware life) cannot exist, metaphorically like a cyclic universe in which life can exist only in some of the cycles. Subjectively we would feel exactly what we seem to feel, namely being a mind with self-awareness, similar to many others around us, with a history (memory) reaching back to childhood and then a void. On that view there would be infinitely many observer-moments before us.

I think things could be different if most observer-states exist in simulations.

Last, if we assume that view, we can reach some conclusions about salvation by a benevolent SI. In that case the SI could simulate a gigantic number of copies of an emerging mind that reaches the highest possible (or computationally preferable) self-awareness (not necessarily complexity) with a pleasing experience, maybe the pure experience of self-awareness, so that every dying being has the highest chance of "reincarnating" into that mind. It would make it easier to save a larger number of actually immortal minds (with immortal self-identity), for example those in simulations run by an evil SI.

If we don't assume that we are more likely to find ourselves in more complex observer-states, we can reach those conclusions. I think it may be interesting to consider."

I hope there are some interesting thoughts in what I shared; please forgive the chaotic appearance of this comment. I think what the author postulates is a really valid theory, and I encourage readers to read his article on the topic:

(PDF) Back to the Future: Curing Past Sufferings and S-Risks via Indexical Uncertainty (Turchin)

My concern is that fusing experiences may lead to loss of individuality. We could fuse all minds into one simple eternal bliss, but that is not far from death.

One solution is a fusion that does not destroy personal identity. Here I assume that "personal identity" is a set of observer-moments which mutually recognise each other as the same person.

I absolutely agree with you.

My only objection is that an SI may value the minimization of suffering more than preserving personal identities from death (I think the same: both reincarnation in the above interpretation and the fusing of minds are the death of the "person"). Such an SI would be in some (maybe even strong) sense promortalist. For now I don't want to choose which vision seems more probable to me. I don't think mine is impossible, though it is certainly not more preferable.

I also hope it would be possible to fuse minds without destroying their personal identity. Maybe the SI would choose to simulate fewer copies of more diverse minds after fusion rather than a greater number of just one.

I admit I am not very good at this kind of thinking, but it seems to me that this could easily do more harm than good. By simulating suffering minds, even if you ultimately save them, you are increasing the number of moments of suffering.

To remove computers completely from the picture, imagine that a person who previously didn't want to have children decides instead to have children, to abuse them while they are small, and then to stop abusing them when they grow up. He considers this to be a good thing, by the following reasoning: in an infinite universe, someone already has children like these, and some of those parents are abusing them; the only difference is that those parents are unlikely to stop abusing them afterwards, unlike me, so I create a positive indexical uncertainty for the abused children (now they have a chance of being my children, in which case they can hope for the abuse to stop).

In the spirit of "it all adds up to normality", such an excuse for child abuse should not be accepted. If you agree, is it fundamentally different when instead of parents we start talking about computers?

You are right, it is a problem, but I suggested a possible patch in Update 2: the idea is to create copies not of the S(t) moment, but only of the next moment S'(t+1), where the pain disappears and S is happy that he escaped from eternal hell.

In your example, it would be like having healthy children but telling them that their life was very bad in the past and that now they are cured, with even their bad memories almost erased. (It may seem morally wrong to lie to children, but it could be framed as watching a scary movie or discussing past dreams. Moreover, it actually works: I often had dreams about bad things happening to me, and it was a relief to wake up; thus, if a bad thing happens to me, I may hope that it is just a dream. Unfortunately, it does not always work.)

In other words, we create indexical uncertainty not about the observer's current position but about her next moment of experience.

Assuming this is all true, and that Benevolent ASIs have the advantage, it's worth noting that in finite universes this still requires the Benevolent ASIs to trade off computations spent on increasing people's lifespans against computations spent on increasing the fraction of suffering-free observer-moments.