
This is a special post for quick takes by james oofou. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
13 comments

Is there a one-stop-shop type article presenting the AI doomer argument? I read the sequence posts related to AI doom, but they're very scattered and tailored more toward, I guess, exploring ideas than presenting a solid, cohesive argument. Of course, I'm sure that was the approach that made sense at the time. But I was wondering whether, since then, there's been some kind of canonical presentation of the AI doom argument? Something on the "attempts to be logically sound" side of things.

If you're looking for a recent, canonical one-stop shop, the answer is List of Lethalities.

List of Lethalities is not by any means a "one-stop shop". If you don't agree with Eliezer on 90% of the relevant issues, it's completely unconvincing. For example, in that article he takes as an assumption that an AGI will be omnipotent at a godlike level, and that it will default to murderism.

If you don't agree with Eliezer on 90% of the relevant issues, it's completely unconvincing.

Of course. What kind of miracle are you expecting? 

It also doesn't go into much depth on many of the main counterarguments. And doesn't go into enough detail that it even gets close to "logically sound". And it's not as condensed as I'd like. And it skips over a bunch of background. Still, it's valuable, and it's the closest thing to a one-post summary of why Eliezer is pessimistic about the outcome of AGI.

The main value of List of Lethalities as a one-stop shop is that you can read it and then be able to point to roughly where you disagree with Eliezer. And this is probably what you want if you're looking for canonical arguments for AI risk. Then you can look further into that disagreement if you want.

Reading the rest of your comment very charitably: It looks like your disagreements are related to where AGI capability caps out, and whether default goals involve niceness to humans. Great!

If I read your comment more literally, my guess would be that you haven't read List of Lethalities, or are happy to misrepresent positions you disagree with.

he takes as an assumption that an AGI will be godlike level omnipotent

He specifically defines a dangerous intelligence level (in point 3) as roughly the level required to design and build a nanosystem capable of building a nanosystem (or any of several alternative example capabilities). Maybe your omnipotent gods are lame.

and that it will default to murderism

This is false. Maybe you are referring to the fact that there isn't any section justifying instrumental convergence? But the post does link to that material, and it notes (in point -3) that it's skipping over a bunch of background in that area. That would be a different assumption, but if you're deliberately misrepresenting the post, that might be the part you are misrepresenting.

David Chalmers asked for one last year, but there isn't. 

I might give the essence of the assumptions as something like: you can't beat superintelligence; intelligence is independent of value; and human survival and flourishing require specific complex values that we don't know how to specify. 

But further pitfalls reveal themselves later, e.g. you may think you have specified human-friendly values correctly, but the AI may then interpret the specification in an unexpected way. 

What is clearer than doom, is that creation of superintelligent AI is an enormous gamble, because it means irreversibly handing control of the world to something non-human. Eliezer's position is that you shouldn't do that unless you absolutely know what you're doing. The position of the would-be architects of superintelligent AI is that hopefully they can figure out everything needed for a happy ending, in the course of their adventure. 

One further point I would emphasize, in the light of the last few years of experience with generative AI, is the unpredictability of the output of these powerful systems. You can type in a prompt, and get back a text, an image, or a video, which is like nothing you anticipated, and sometimes it is very definitely not what you want. "Generative superintelligence" has the potential to produce a surprising and possibly "wrong" output that will transform the world and be impossible to undo. 

I'd actually recommend Zvi's On A List of Lethalities over the original, as a more readily understandable version that covers the same arguments.

I think AGI Safety From First Principles by Richard Ngo is probably good.

I think AGI Ruin: A List of Lethalities is comprehensive but also sort of advanced and skips over the two basic bits.

I think this post is an excellent distillation of the AI doomer argument, and it importantly helps me understand why people think AI alignment is going to be difficult:

https://www.lesswrong.com/posts/wnkGXcAq4DCgY8HqA/a-case-for-ai-alignment-being-difficult

TAG

What I have noticed is that while there are cogent overviews of AI safety that don't come to the extreme conclusion that we are all going to be killed by AI with high probability... and there are articles that do come to that conclusion without being at all rigorous or cogent... there aren't any that do both. From that I conclude there aren't any good reasons to believe in extreme AI doom scenarios, and you should disbelieve them. Others use more complicated reasoning, like "Yudkowsky is too intelligent to communicate his ideas to lesser mortals, but you should believe him anyway".

(See @DPiepgrass saying something similar and of course getting downvoted).

@MitchellPorter supplies us with some examples of gappy arguments.

human survival and flourishing require specific complex values that we don't know how to specify

There's no evidence that "human values" are even a coherent entity, and no reason to believe that any AI of any architecture would need them.

But further pitfalls reveal themselves later, e.g. you may think you have specified human-friendly values correctly, but the AI may then interpret the specification in an unexpected way.

What is clearer than doom, is that creation of superintelligent AI is an enormous gamble, because it means irreversibly handing control of the world

Hang on a minute. Where does control of the world come from? Do we give it to the AI? Does it take it?

to something non-human. Eliezer's position is that you shouldn't do that unless you absolutely know what you're doing. The position of the would-be architects of superintelligent AI is that hopefully they can figure out everything needed for a happy ending, in the course of their adventure.

One further point I would emphasize, in the light of the last few years of experience with generative AI, is the unpredictability of the output of these powerful systems. You can type in a prompt, and get back a text, an image, or a video, which is like nothing you anticipated, and sometimes it is very definitely not what you want. "Generative superintelligence" has the potential to produce a surprising and possibly "wrong" output that will transform the world and be impossible to undo.

Current generative AI has no ability to directly affect anything. Where would that come from?

Dagon

I don't know that "the AI doomer argument" is a coherent thing. At least I haven't seen an attempt to gather or summarize it in an authoritative way. In fact, it's not really an argument (as far as I've seen); it's somewhere between a vibe and a prediction.

For me, when I'm in a doomer mood, it's easy to give a high probability to the idea that humanity will be extinct fairly soon (it may take centuries to fully die out, but we will be on a fully irreversible path within 10-50 years, if we're not already). Note that this has been a common belief since long before AI was a thing - nuclear war/winter, ecological collapse, pandemic, etc. are pretty scary, and humans are fragile.

My optimistic "argument" is really no better-formed. Humans are clever, and when they can no longer ignore a problem, they solve it. We might lose 90%+ of the current global population, and a whole lot of supply-chain and tech capability, but that's really only a few doublings lost, maybe a millennium to recover, and maybe we'll be smarter/luckier in the next cycle.

From your perspective, what do you think the argument is, in terms of thesis and support?  

There are a lot of detailed arguments for doom by misaligned AGI.

Coming to grips with them, and with the counterarguments in actual proposals for aligning AGI and managing the political and economic fallout, is a herculean task. I feel it's taken me about two years of spending the majority of my work time on doing that to even have my head mostly around most of the relevant arguments. Having done that, my p(doom) is still roughly 50%, with wide uncertainty for unknown unknowns still to be revealed or identified.

So if someone isn't going to do that, I think the above summary is pretty accurate. Alignment and managing the resulting shifts in the world is not easy, but it's not impossible. Sometimes humans do amazing things. Sometimes they do amazingly stupid things. So again, roughly 50% from this much rougher method.