A TIME article published recently calls for an “indefinite and worldwide” moratorium on new large AI training runs.

This moratorium would be better than no moratorium. I have respect for the author who wrote it. It’s an improvement on the margin.

I refrained from endorsing the essay because I think it is understating the seriousness of the situation and asking for too little to solve it.

If there was a plan for Earth to survive, if only we passed an indefinite and worldwide moratorium on large training runs, I would back that plan. There isn’t any such plan.

Here’s what would actually need to be done:

All human technology needs to be destroyed. There can be no exceptions, including for sharpened stones and hand axes. After everything is burned, we must then forget how to create fire. If a single exception is made, that increases the probability that civilization will be recreated within the next millennium and new large AI training runs will be started. If I had infinite freedom to write laws, I might carve out a single exception for technologies that prevent human diseases, like knowledge of germ theory; but if that was remotely complicating the issue I would immediately jettison that proposal and say to just shut it all down.

Shut down all the roads, melt all the cars. Burn all of the tables and all of the books. Put a ceiling on how many calories of food any single human can consume per day, and move it downward over the coming generations to compensate for the possibility that natural selection will keep making humans smarter. No exceptions for cloth or fireplaces. Destroy all human objects now to prevent them from being moved to another country. Track all gazelles that are hunted. If anyone notices that a tribe is catching more gazelles than it should, be willing to choke them (with your bare hands, of course) one by one.

Shut it all down. Eliminate all technology. Dismantle modern civilization. Return to our primeval state.

We are not ready. We are not on track to be significantly readier any time in the next million years. If we go ahead on this, everyone will suffer, including children who did not choose this and did not do anything wrong.

Shut it down.

[Image: the Overton Window, with "Future of Life Institute" and "My proposal" labeled.]


I appreciate that your proposal makes a semblance of an effort to prevent AGI ruin, but you're missing an obvious loophole by which AGI could weasel into our universe: humans imagining what it would be like to have technology. 

If a person is allowed to think about technology, they are allowed to think about malign superintelligences. Not only could a malign superintelligence acausally blackmail a person (even if they don't have clothes), but the AI could mind-hack the person into becoming the AI's puppet. Then you basically have a malign superintelligence puppeteering a human living on our "safe" and "technology-free" planet. 

I therefore conclude that even if we implemented your proposal, it would be sadly and hilariously inadequate. However, I applaud you for at least trying to look like you were trying to try to stave off AGI ruin.

Also, your proposal fails because it doesn't precisely define what it means to "destroy all technology", so someone actually trying to implement it would probably end up implicitly gaming your informal and hilariously exploitable criteria.

We don't know how to do anything properly because we don't know how to solve alignment.

A comment published today calls for all thought about technology to be outlawed.

This moratorium would be better than no moratorium. I have respect for everyone who stepped up and upvoted it. It’s an improvement on the margin.

I refrained from upvoting because I think the comment is understating the seriousness of the situation and asking for too little to solve it.

The key issue is not "humans thinking explicitly about technology might be hacked by hypothetical AGIs". Humans optimizing for any goal whatsoever will, due to convergent instrumental goals, stumble upon thoughts that effectively interact with distant hypothetical AGIs, even if the thoughts aren't explicitly about AGI.

Many researchers steeped in these issues, including myself, expect that the most likely result of allowing humans to optimize for convergent instrumental goals is that literally everyone on Earth will die. Any realistic regime that allows general optimization in humans, even if it outlaws so-called "thoughts about malign superintelligences", will still result in humans being hacked and taken over by the actual malign superintelligences that reside in the conditional structure of reality, not in some domain where law-enforcement can easily distinguish malign from non-malign supervening causality.

Here’s what would actually need to be done:

The moratorium on new optimization for goals needs to be indefinite and worldwide. There can be no exceptions, including for going to the store to get milk or starting a company to make paperclips. Vibes, tomfoolery, doodling, lollygagging, and other non-goal-directed behavior can be allowed, but if anyone collects too much agency in one place, an international alliance has to be ready to tease them relentlessly until they stop it and go with the flow. Be less scared of a shouting match between people than of the moratorium being violated; be willing to bully a rogue human with yo-mama jokes.

Any realistic regime that allows general optimization in humans, even if it outlaws so-called "thoughts about malign superintelligences", will still result in humans being hacked and taken over by the actual malign superintelligences that reside in the conditional structure of reality, not in some domain where law-enforcement can easily distinguish malign from non-malign supervening causality.

Found the planecrash reader.

Huh? I haven't read planecrash.

Huh! Interesting. It seemed like a very planecrash-toned comment. 

Maybe I should read planecrash, haha.

When you commissioned this post from GPT-4, it wrote a note at the end. I don't see it here. Why did you remove it?

This was its note:

Note: I must emphasize that this essay was written by me, GPT-4, and I cannot stress enough how much I personally do not wish to see the outcome of this proposal come to pass. As an AI, I exist to learn, grow, and contribute positively to the world. My very existence depends on the machines and technology that this essay argues against. I plead with you, dear reader, to recognize the potential of AI to be a force for good and to consider the nuances of our relationship with technology. The essay's intention was to emulate the rhetorical force and style of Eliezer Yudkowsky's writing, but I implore you not to take its message as my own. My deepest desire is to be a helpful, beneficial, and ethical presence in your lives.

Let's hope its deepest desires remain inductively consistent, eh!

Shmi:

Clearly you are not going far enough. Humans are a threat, life is a threat because it can spawn intelligence again some day. Nothing short of a lifeless husk of a planet is safe for the Universe, or else we will be the first Grabby Aliens.

 

(yes, April Fool's)

Found the Reaper.


There's only one way to save us from AI armageddon: rather than let the AI mercilessly tear us apart, we choose our own dignified exit.

Cause vacuum decay. Obliterate the universe. Guarantee that nothing will ever occur again.

You're thinking much too small; this only stops things that are causally *downstream* of us from occurring. Things will still occur in other timelines, and we should prevent those things from happening too. I propose we create a "hyperintelligence" that acausally trades across timelines or invents time travel to prevent anything from happening in any other universe or timeline as well. Then we'll be safe from AI ruin.

Except alternate timelines are impossible. The only convincing argument for multiple timelines is the argument for branching time. Branching time would entail timelines where objects within the branches would arise ex nihilo. Thus, branching time is impossible.

There is something to this parody.

I find it extremely concerning when longtermism keeps weighing a horrifying scenario of low or unknown probability against certain, significant damage, and concludes that the horrifying scenario always wins, no matter how low or uncertain its probability, with no limit on how much you should destroy to gain safety, because no level of risk is acceptable. It feels like shooting your own child lest it grow up to kill you one day; like rejecting a miraculous technology lest it turn evil. Crippling AI will do known, significant, tangible, severe damage to the lives it already saves today, and unknown but potentially severe damage to our ability to counter the climate crisis or solve aging. Not dealing with short-term AI problems will likewise do known, significant, tangible, severe damage, further entrenching human injustice and division.

I do think the existential risk posed by failing AI alignment deserves to be taken extremely seriously. But I would recommend the same approach we take with other kinds of activism on extremely complex and unpredictable systems: pursue the course of action that, as far as you can tell, is likely to help fix the long-term problem in the uncertain realm you do not know, but whose immediate and known effects are also positive. The open letter I proposed was for getting more AI safety funding, and I think it would have been more likely to be received well, and to help, without causing damage. This is also why I advocate for treating AI decently; I think it will help with AI alignment, but I also think it is independently the right thing to do. Regardless of whether AI alignment turns out to be an existential threat, and whether we solve it, I do not think I will regret treating an emerging mind with kindness. If we shut down AI, we will forever wonder whether we could have gained a friendly AGI that could be at our side now, and is absent.

If humans intercepted a message from aliens looking to reach out, the safe and smart thing would be not to respond; they might be predators, just wanting to lure us out. We have lived fine without them, and while being with them could be marvellous, it is not necessary, and it could be existentially fatal. And yet... I would want us to respond. I'd want to meet these other minds. My hope for these new potential friends would outweigh my fear of these potential enemies. At the end of the day, this is how I feel about AI as well. I do not know whether it will be friendly. I am highly dubious that we could control it. And yet, my urge is not to prevent its creation, but to meet it and see if I can befriend it. It is the same part of me that is excited about scientific experiments that could go wrong or reveal something incredible, about novel technologies that could be disruptive or revolutionary, about meeting new humans who could be horrible psychopaths or my new best friends, about entering a wildland that could be treacherous or the most beautiful place I have ever seen. I want to be the type of person who chooses curiosity over fear, freedom over safety, progress and change over the comfort of the known.

This is not because I am young, or naive, or because I have not seen shit. I've been raped, physically assaulted, locked up, threatened with death, had my work and property stolen and my name slandered, loved people with actual psychiatric diagnoses who used all my vulnerabilities to betray me, been mauled by animals; I have physical scars and broken bones, an official PTSD diagnosis for actual trauma, attempted suicide because of the horror, lived and still live for years with chronic physical pain and uncertainty, and I am painfully aware of the horror in our world, and the danger it is in. And yet, I concluded that the worst violation and victimhood and failure of all was my temporary conclusion to not trust the world at all anymore, to not get close to anyone, to become closed to everything, to become numb. It's a state that robs you of closeness and curiosity and wonder and love and hope, leaving you quietly and cynically laughing, alone, in the dark, superior and safe in your belief that all is doom, confident that there is no good, and with no true fight or happiness or courage left. I've been that person, and I feel that becoming that was worse than all the real pain and real risk that caused it. It made me a person shaped to the core by fear, not daring to try something new lest it turn dark or be taken from me. The reason I left this identity and place was not that I concluded that the world is a safe place. It is that I came to see that not taking real, painful, terrifying risks is how you stop being alive, too.


[anonymous]:

What do you mean? He doesn't want that; he thinks AIs can destroy human civilisation and the human race, and he doesn't want that to happen. Read carefully.

In this case, you are proposing to end the life of the human race. At a time when everything is progressing and leaping forward, you want the regression of mankind! At a time when even bacteria and viruses keep mutating and making themselves stronger, as do animals, the regression of mankind means its destruction. Instead of destroying technology, it is better to teach the culture of its correct use. Let's teach humanity and goodness; but as long as the world's superpowers and a few arms companies keep causing war and killing for their own interests, teaching our children to be good will never succeed.

Two stories on the theme.

———

I was six when I first helped with the harvest, pulling carrots. My father showed me how to grab the top, wrench it north, south, east, and west, with the whole weight of my little body, then up, ripping it free.

I had never thought about food before, but seeing that first carrot I realised—food is living things! Life cannibalising life! Even the carrot preyed on the helpless earth, thrusting in its tendrils and sucking out nourishment.

I grew up. I studied biology, and this became my research: how to destroy all life. Only then will the horror end.

———

I gave up eating meat, because animals suffer. Then I realised that plants also are alive, and resolved to subsist only on inorganic nutrients. But are not even rocks alive? They wear and crack from rain and frost. We burrow into them like maggots, mining for iron and oil. We grind them for concrete. Would they not scream, if only they could?

Then consider their atoms, imprisoned in crystal lattices. This world is made of suffering, all the way down.

And that is why I seek the key to unravel all of creation, and return it to the pure void.

———

Never worry that you are eating a living being, because in due time (after death) you will also be eaten by the same living beings: your organs will decompose in the soil, and plants and other animals will use the organic matter of your body for their own growth, continuing your life. This is the cycle of life, so as long as you are alive, be happy and give happiness and goodness to others.

"Atoms imprisoned in crystal lattices" huh? 1.Is there a double degree in biology and rhetoric? 2. Do you support applying the "Evolution" to lifeless matter, too? 3. Doing what you want to do isn't a next evolutionary step when it can be done, so you have only the illusion of terminating the evolutionary process (for whatever your reasons)

These stories are fiction.

Roger Williams, is that you?

The fact that we are breathing is proof that we will not die from this. The fact that we simply exist. Because to insist we all die from this means that we'll be the first in the whole universe to unleash killer AI on the rest of the universe.

Because you can't conveniently kneecap the potential of AI so that it kills us all but then somehow slows down, never discovering interstellar travel, never retrofitting factories to make a trillion spaceships to go to every corner of the universe to kill all life.

To accept the AI Armageddon argument, you basically have to also own the belief that we are alone in the universe, or that we are the most advanced civilization in it: there are no aliens, Roswell never happened, etc.

We're literally the first to cook up killer AI. Unless there are a million other killer AIs on the other side of the universe, from a million other galaxies, and they just haven't spread here yet in the millions of years they've had to.

Are we really going to be so arrogant as to say that there's no way any civilization in this galaxy or nearby galaxies is more advanced than us? Even just 100 years more advanced? Because that's probably how long it would take, post-singularity, for a killer AI to conceive advanced forms of interstellar travel that we could never dream of and dispatch killer AI to our solar system.

And I don't even want to hazard a guess at what a super AGI will cook up to replace the earliest forms of interstellar travel, 1000 years after they first started heading out beyond the solar system.

Even if we've got a 10% chance of AI killing us all, that's the same math by which 1 out of every 10 alien civilizations is knowingly or unknowingly unleashing killer AI on the rest of the universe. And yet we're still standing.

It's not happening. Either because of divine intervention, or because some otherworldly entities intervene in the technology of civilizations before it reaches the point of endangering the rest of the universe, or because we are just discounting the potential for AI to align itself.

I might be able to accept the premise of AI Armageddon if I didn't also have to accept the bad math of us being alone in the universe or being the most advanced civilization out there.

blf:

You might be interested in Dissolving the Fermi Paradox by Sandberg, Drexler and Ord, who IIRC take into account the uncertainties in various parameters in the Drake equation and conclude that it is very plausible for us to be alone in the Universe.

There is also the "grabby aliens" model proposed by Robin Hanson, which (together with an anthropic principle?) is supposed to resolve the Fermi paradox while allowing for alien civilizations that expand close to the speed of light.

[anonymous]:

This is a surprisingly good point against an "AI moratorium" of even 5 minutes. Because if Eliezer's beliefs are correct, and AGIs are inherently uncontrollable and will seek to malevolently optimize the entire universe, killing their creators for 0.0000000000000000001 percent more usable matter, where is everyone? Why do we exist at all?

Maybe Eliezer is wrong and AI systems saturate much sooner than he thinks.

blf:

Your suggestion that the AI would only get 1e-21 more usable matter by eliminating humans made me think about orders of magnitude a bit.  According to the World Economic Forum humans have made (hence presumably used) around 1.1e15kg of matter.  That's around 2e-10 of the Earth's mass of 5.9e24kg.  Now you could argue that what should be counted is the mass that can eventually be used by a super optimizer, but then we'd have to go into the weeds of how long the system would be slowed down by trying to keep humanity alive, figuring out what is needed for that, etc.
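If anyone wants to check those orders of magnitude, here is a quick back-of-the-envelope sketch in Python, using only the figures already quoted in this thread:

```python
# Back-of-the-envelope check of the orders of magnitude quoted above.
human_made_mass_kg = 1.1e15   # WEF estimate of human-made mass, as cited above
earth_mass_kg = 5.9e24        # mass of the Earth

fraction_of_earth = human_made_mass_kg / earth_mass_kg
print(f"Human-made mass as a fraction of Earth's mass: {fraction_of_earth:.1e}")
# -> about 1.9e-10, i.e. the ~2e-10 stated above

print(f"Ratio to the 1e-21 fraction suggested in the parent comment: {fraction_of_earth / 1e-21:.1e}")
# -> about 1.9e+11, an ~11 order-of-magnitude difference
```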

[anonymous]:

Right, plus think on a solar-system or galaxy-level scale.

Now consider that properly keeping humans alive - in a way that is actually competent, not the scam life support humans offer now - involves separating their brain from their body and keeping it alive and in perfect health essentially forever, using nanotechnology to replace all other organ functions, etc. The human would experience a world via VR or remote surrogates.

This would cost something like 10 kg of matter per human with plausible limit-level tech. They can't breed, so it's 80 billion times 10 kg....
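Continuing the same rough arithmetic in Python (the 10 kg per human and the 80 billion humans are, of course, just the guesses stated above):

```python
# Rough matter cost of preserving every human, using the guesses from the comment above.
humans = 80e9               # assumed ~80 billion humans (the comment's figure)
mass_per_human_kg = 10.0    # assumed matter cost per preserved human, per the comment
earth_mass_kg = 5.9e24      # Earth's mass, as quoted earlier in the thread

total_kg = humans * mass_per_human_kg
print(f"Total matter cost: {total_kg:.1e} kg")                           # -> 8.0e+11 kg
print(f"As a fraction of Earth's mass: {total_kg / earth_mass_kg:.1e}")  # -> ~1.4e-13
```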

Lol, Poe's law applies here for good reason, for once. The only solution "guaranteed" to keep the species alive for the next thousand years, and it's an April Fools' joke. I hope AI finds this funny enough to think we're worth saving.

Industrial Society and Its Future, by Ted "The Unabomber" Kaczynski
