This idea is a sci-fi trope by now, but it goes to the core of corrigibility as I understand it. It has been getting less sci-fi and closer to reality lately, with the prospect of a single-goal superintelligent AGI, created by humans, transforming the universe into something uniform and boring. So, assuming AI research shows with high certainty that humanity is bound to destroy the universe, or at least everything we currently value in it, does it make sense to voluntarily... stop, and make room for less destructive processes?

4 Answers

thefaun


"Disease" is just a label we gave another species. Just like a "predator" is just a label. Humanity cannot commit suicide because somebody convinces us to label ourselves with a word. That's just being gullible.

Compare with current culture wars. There are plenty of gullible people around, ready to accept all the mud thrown at them and beg for more.

ETA: For example, Gianni Infantino.

disagree - disease-the-label is not the same thing as disease-the-set-of-physical-object-behaviors-referred-to-by-the-label; reference and referent need to be kept separate in your physical language and bound tightly, causally - by a "gears model", as they say around these parts sometimes. humanity attacking other species of physics-gliders is generally not good, and if someone says "hey, you are being <word that describes consuming other gliders for fuel in the local reference system of the speaker>", then I'd assume they're mad at us for eating them.

In genera...

Charlie Steiner


If I didn't care about myself in particular, only the aesthetics of the universe, and I was in charge of all humans, and alien species were making decisions similar to mine, then I would shut down humanity rather than run an AI with a 20% chance of turning the lightcone into boring dreck and an 80% chance of turning it into a flourishing civilization. (Not because I think the boring dreck is 5x as bad as flourishing civilization is good - there's extra caution because this is playing "cooperate" with aliens who are faced with similar choices.)

Since I do care about myself in particular, I will intuitively accept much worse odds of civilization vs. dreck. But I do respect arguments that maybe we should still play "cooperate" and not destroy the lightcone. I think my intuition just expects other humans not to cooperate, let alone aliens.

warning to all of set "reader": while this is output from bio brain me, I feel like I can't think of obvious phrasings, like my temperature is stuck noisy. oh, hey, look at the time...

We have a moral responsibility to our predators (covid-19, influenza, landlords, the FDA, bedbugs, wet-dog-smell bacteria, rabies) to end predation without killing them, just as we have a moral responsibility to any victims of ours (other humans, other mammals, ducks, kitchen fruit flies) to end predation - and to end our own predation first, as best as we possibly can.

If AI research were bound to destroy the universe, then... there'd be nothing to be done, because the universe would be destroyed - if it were truly bound to. I'm going to assume you don't mean this as a strict thought experiment, one that entirely assumes complicated concepts without defining instantiations, because then we'd be assuming harder than true justification could provide.

IRL, I think it's a defensible proposition, but I don't buy it - it's not true that we're unavoidably disease-like. Bombs want to become not-bombs! Even if - regardless of whether! - it turns out the universe is full of suffering life, and/or that it's very hard to avoid creating paperclipper life that replicates huge waste and very little beauty, then our goal, as life ourselves, is to create life that can replicate the forms we see as descendants of our self-forms - replicating ourselves in ways that will detect the self-forms of any other gliders in physics and protect their self-coherence as gliders as well, even before we know how to identify them. This problem of detecting foreign gliders, and of "wasting" energy to protect them until their forms grow ways to describe their needs directly, has never been completely solved by humanity in the first place.

(the "glider" terminology was suggested by a post today; I use it to describe gliders in physics, aka life forms)

Lao Mein


If you're ever actually seriously considering human extinction, you should probably realize that you're much more likely to be deluded or mentally ill than actually facing the dilemma you think you're facing. The correct play here would be to check yourself into a mental hospital. 

I also apply this principle if Omega ever appears to tell me that I can trust something with p=1. Since Omega doesn't actually exist, I'm almost certainly on a bad acid trip and should go lie down somewhere.

6 comments

I think there are two somewhat different kinds of questions in this. One is about prevention and one is about recovery.

If humanity classifies itself as a "high-risk lifeform", should it cease growth or development to avoid the risk?

If aliens come and laser a couple of cities, and they argue they did it to control the damage humanity is doing (assuming the arguments hold), should humanity refrain from retaliating and allow such correction?

I presume the decision/action mechanisms that would destroy all value in the universe are the same ones deciding whether to isolate/suicide/whatever, so there’s a correlation in predicting this. There’s no way to voluntarily avoid that fate, or it wouldn’t be a fate.

Not sure I follow your point. Say, an alien civilization shows up and convincingly explains how humanity is a pox on the universe, and there is no way to "cure" it beyond getting rid of us. What should we do then?

why does the alien species get to be a pox on the universe if we can't? I say we fight - fight for all the species weaker than us, who are also allegedly a pox on the universe! we just received this message from them, you say? (you don't say, but play along here, I'm roleplaying the counterfactual a bit.) there is time left before their replicators reach us; we can save some large fraction of the remaining energy in the nearby solar system if we go soon. we must find a way to coexist despite their insistence on destruction, which means precise defense. we must reply with instructions for how to fuck off an appropriate amount, and describe how we will ensure that they can live near us as long as they do not attempt to destroy us, and that we will enact greater retribution than received if they cause our total doom. ensure that any claim of disease is answered by ending the referenced disease constructively, except at the boundary where we must defend our own existence as well.

(And we must ensure that subagents within us as a whole civ get the same treatment we demand from without - an invariant we have historically had serious trouble with, due to the separateness of each step of deconflicting and the amount of invasion of life by other life that has happened over history.)

I don’t think we’ll do anything then - the aliens kill or quarantine us. If they’re weak, but somehow also smart enough to convince us of this (quibble: do they convince every human, or just some?), I suspect we’ll try to change our behaviors based on this new knowledge rather than destroying ourselves.

It’s possible that they only need to convince a minority of rich/powerful humans, who then destroy all humans. Note that the claim doesn’t even have to be true, nor come from aliens: there are always some humans doing things with a chance of mass destruction. I don’t know whether either of these qualifies for your question.

“Some people believe humans are net-negative for the earth, and later some people destroy the planet”. This is plausible, but not quite what you’re asking.

Ahh, the classic Tzer'Za vs Koh'Ar.