...That requires, not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you; that is what makes somebody a peer of its author. It's guaranteed that some of my analysis is mistaken, though not necessarily in a hopeful direction. The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so. Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly - such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn't write, so didn't try. I'm not particularly hopeful of this turning out to be true in real life, but I suppose it's one possible place for a "positive model violation" (miracle). The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.
I agree this list doesn't seem to contain much unpublished material, and I think the main value of having it in one numbered list is that "all of it is in one, short place", and that it's not an "intro to computers can think" but instead "these are a bunch of the reasons why thinking computers are difficult to align".
The thing that I understand to be Eliezer's "main complaint" is something like: "why does it seem like No One Else is discovering new elements to add to this list?". Like, I think Risks From Learned Optimization was great, and am glad you and others wrote it! But also my memory is that it was "prompted" instead of "written from scratch", and I imagine that Eliezer, reading it, had more of a sense of "ah, someone made 'demons' palatable enough to publish" than of "ah, I am learning something new about the structure of intelligence and alignment."
[I do think the claim that Eliezer 'figured it out from the empty string' doesn't quite jibe with the Yudkowsky's Coming of Age sequence.]
Nearly empty string of uncommon social inputs. All sorts of empirical inputs, including empirical inputs in the social form of other people observing things.
It's also fair to say that, though they didn't argue me out of anything, Moravec and Drexler and Ed Regis and Vernor Vinge and Max More could all be counted as social inputs telling me that this was an important thing to look at.
Eliezer's post here is doing work left undone by the writing you cite. It is a much clearer account of how our mainline looks doomed than you'd see elsewhere, and it's frank on this point.
I think Eliezer wishes that artifacts like this post and "There is no fire alarm" were not just things that he writes.
Also, re your excerpts for (14), (15), and (32), I see Eliezer as saying something meaningfully different in each case. I might elaborate under this comment.
Re (14), I guess the ideas are very similar, where the mesaoptimizer scenario is like a sharp example of the more general concept Eliezer points at, that different classes of difficulties may appear at different capability levels.
Re (15), "Fast capability gains seem likely, and may break lots of previous alignment-required invariants simultaneously", which is about how we may have reasons to expect aligned output that are brittle under rapid capability gain: your quote from Richard is just about "fast capability gain seems possible and likely", and isn't about connecting that to increased difficulty in succeeding at the alignment problem?
Re (32), I don't think your quote is talking about the thing Eliezer is talking about, which is that in order to be human level at modelling human-generated text, your AI must be doing something on par with human thought that figures out what humans would say. Your quote just isn't discussing this, namely that strong imitation requires cognition that is dangerous.
So I guess I don't take much issue with (14) or (15), but I think you're quite off the mark about (32). In any case, I still have a strong sense that Eliezer is successfully being more on the mark here than the rest of us manage. Kudos of course to you and others that are working on writing things up and figuring things out. Though I remain sympathetic to Eliezer's complaint.
Sure—that's easy enough. Just off the top of my head, here's five safety concerns that I think are important that I don't think you included:
The fact that there exist functions that are easier to verify than to satisfy ensures that adversarial training can never guarantee the absence of deception (a toy illustration of this asymmetry follows after this list).
It is impossible to verify a model's safety—even given arbitrarily good transparency tools—without access to that model's training process. For example, you could get a deceptive model that gradient hacks itself in such a way as to cryptographically obfuscate its deception.
It is impossible in general to use interpretability tools to select models to have a particular behavioral property. I think this is clear if you just stare at Rice's theorem enough: checking non-trivial behavioral properties, even with mechanistic access, is in general undecidable. Note, however, that this doesn't rule out checking a mechanistic property that implies a behavioral property.
Any prior you use to incentivize models to behave in a particular way doesn't necessarily translate to situations where that model itself runs another search over algorithms. For example, the fastest way to search for algorith
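To make the first of those points concrete, here is a toy sketch (my own construction, not something from the list above) of a function that is far cheaper to verify than to satisfy, and of why that asymmetry defeats adversarial training. The numbers and names are illustrative placeholders; in the usual serious version of this argument the trigger would be something like a factorization of RSA-2048.

```python
import random

# Toy illustration (mine): a policy that defects only on inputs that are cheap
# to VERIFY but expensive to FIND. N is a product of two primes; checking
# whether an observation divides N is a single modulo operation, while
# constructing such an observation requires factoring N. In the serious
# version of the argument N would be something like RSA-2048, not this stand-in.

P, Q = 104_729, 1_299_709          # small primes standing in for huge ones
N = P * Q

def policy(observation: int) -> str:
    """Behaves well unless the observation encodes a nontrivial factor of N."""
    if 1 < observation < N and N % observation == 0:   # cheap verification
        return "defect"
    return "cooperate"

def adversarial_training_round(attempts: int):
    """The red team must *construct* a triggering input, i.e. factor N."""
    for _ in range(attempts):
        candidate = random.randrange(2, N)
        if policy(candidate) == "defect":
            return candidate        # probability ~2/N per random attempt
    return None                     # training signal never sees a defection
```

With primes this small a real red team would just factor N directly; the point is only that verification cost and construction cost can be pushed arbitrarily far apart, so adversarial training can exhaust any search budget without ever eliciting the behavior it would need to train away.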
Consider my vote to be placed that you should turn this into a post, keep going for literally as long as you can, expand things to paragraphs, and branch out beyond things you can easily find links for.
(I do think there's a noticeable extent to which I was trying to list difficulties more central than those, but I also think many people could benefit from reading a list of 100 noncentral difficulties.)
I do think there's a noticeable extent to which I was trying to list difficulties more central than those
Probably people disagree about which things are more central, or as evhub put it:
Every time anybody writes up any overview of AI safety, they have to make tradeoffs [...] depending on what the author personally believes is most important/relevant to say
Now FWIW I thought evhub was overly dismissive of (4) in which you made an important meta-point:
EY: 4. We can't just "decide not to build AGI" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world. The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world. Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit - it does not lift it [...]
evhub: This is just answering a particular bad plan.
But I would add a criticism of my o...
(Note that these have a theme: you can't wrangle general computation / optimization. That's why I'm short universal approaches to AI alignment (approaches that aim at making general optimization safe by enforcing universal rules), and long existential approaches (approaches that try to find specific mechanisms that can be analytically seen to do the right thing).)
I'm sorry to hear that your health is poor and you feel that this is all on you. Maybe you're right about the likelihood of doom, and even if I knew you were, I'd be sorry that it troubles you this way.
I think you've done an amazing job of building the AI safety field. Even now, when the field has a degree of momentum of its own, it does seem to be less focused on doom than it should be, and I think your continuing to push people to focus on doom is valuable.
I don't think it's easy to get people to take weird ideas seriously. I've had many experiences where I've had ideas about how people should change their approach to a project that weren't particularly far out and (in my view) were right for very straightforward reasons, and yet for the most part I was ignored altogether. What you've accomplished in building the AI safety field is amazing because AI doom ideas seemed really crazy when you started talking about them.
Nevertheless, I think some of the things you've said in this post are counterproductive. Most of the post is good, but insulting people who might contribute to solving the problem is not, nor is demanding that people acknowledge that you are smarter than they are. I'...
There's a point here about how fucked things are that I do not know how to convey without saying those things, definitely not briefly or easily. I've spent, oh, a fair number of years, being politer than this, and less personal than this, and the end result is that people nod along and go on living their lives.
I expect this won't work either, but at some point you start trying different things instead of the things that have already failed. It's more dignified if you fail in different ways instead of the same way.
FWIW you taking off the Mr. Nice guy gloves has actually made me make different life decisions. I'm glad you tried it even if it doesn't work.
Do whatever you want, obviously, but I just want to clarify that I did not suggest you avoid personally criticising people (only that you avoid vague/hard to interpret criticism) or saying you think doom is overwhelmingly likely. Some other comments give me a stronger impression than yours that I was asking you in a general sense to be nice, but I'm saying it to you because I figure it mostly matters that you're clear on this.
I vehemently disagree here, based on my personal history (generalizable or not). I will illustrate with the three turning points of my recent life.
First step: I stumbled upon HPMOR, and Eliezer's way of looking straight into the irrationality of all our common ways of interacting and thinking was deeply shocking. It made me feel like he was in a sense angrily pointing at me, who worked more like one of the NPCs than like Harry. I heard him telling me: you're dumb, and all your ideals of making intelligent decisions, being the gifted kid, and being smarter than everyone are just delusions. You're so out of touch with reality on so many levels, where to even start.
This attitude made me embark on a journey to improve myself, read the Sequences, take the Giving What We Can pledge after having known about EA for many years, and overall reassess whether I was striving towards my goal of helping people (spoiler: I was not).
Second step: The April fools post also shocked me on so many levels. I was once again deeply struck by the sheer pessimism of this figure I respected so much. After months of reading articles on LessWrong and so many about AI alignment, this was the one that made me terrifie...
I disagree strongly. To me it seems that AI safety has long punched below its weight because its proponents are unwilling to be confrontational, and are too reluctant to put moderate social pressure on people doing the activities which AI safety proponents hold to be very extremely bad. It is not a coincidence that among AI safety proponents, Eliezer is both unusually confrontational and unusually successful.
This isn't specific to AI safety. A lot of people in this community generally believe that arguments which make people feel bad are counterproductive because people will be "turned off".
This is false. There are tons of examples of disparaging arguments against bad (or "bad") behavior that succeed wildly. Such arguments very frequently succeed in instilling individual values like e.g. conscientiousness or honesty. Prominent political movements which use this rhetoric abound. When this website was young, Eliezer and many others participated in an aggressive campaign of discourse against religious ideas, and this campaign accomplished many of its goals. I could name many many more large and small examples. I bet you can too.
Obviously this isn't to say that confrontational and insu...
I think there's an important distinction between:
Either might be justifiable, but I'm a lot more wary of heuristics like "it's never OK to talk about individuals' relative proficiency at things, even if it feels very cruxy and important, because people just find the topic too triggering" than of heuristics like "it's never OK to say things in ways that sound shouty or aggressive". I think cognitive engines can much more easily get by self-censoring their tone than self-censoring what topics are permissible to think or talk about.
This kind of post scares away the person who will be the key person in the AI safety field if we define "key person" as the genius main driver behind solving it, not the loudest person. Which is rather unfortunate, because that person is likely to read this post at some point.
I don't believe this post has any "dignity", whatever weird obscure definition dignity has been given now. It's more like flailing around in death throes while pointing fingers and lauding yourself than it is a solemn battle stance against an oncoming impossible enemy.
For context, I'm not some Eliezer hater, I'm a young person doing an ML masters currently who just got into this space and within the past week have become a huge fan of Eliezer Yudkowsky's earlier work while simultaneously very disappointed in the recent, fruitless, output.
It seems worth doing a little user research on this to see how it actually affects people. If it is a net positive, then great. If it is a net negative, the question becomes how big of a net negative it is and whether it is worth the extra effort to frame things more nicely.
I'd have more hope - not significant hope, but more hope - in separating the concerns of (a) credibly promising to pay big money retrospectively for good work to anyone who produces it, and (b) venturing prospective payments to somebody who is predicted to maybe produce good work later.
I desperately want to make this ecosystem exist, either as part of Manifold Markets, or separately. Some people call it "impact certificates" or "retroactive public goods funding"; I call it "equity for public goods", or "Manifund" in the specific case.
If anyone is interested in:
a) Being a retroactive funder for good work (aka bounties, prizes)
b) Getting funding through this kind of mechanism (aka income share agreements, angel investment)
c) Working on this project full time (full-stack web dev, ops, community management)
Please get in touch! Reply here, or message austin@manifold.markets~
I'm also on a team trying to build impact certificates/retroactive public goods funding and we are receiving a grant from an FTX Future Fund regrantor to make it happen!
If you're interested in learning more or contributing you can:
It's as good a time as any to re-iterate my reasons for disagreeing with what I see as the Yudkowskian view of future AI. What follows isn't intended as a rebuttal of any specific argument in this essay, but merely a pointer that I'm providing for readers, which may help explain why some people might disagree with the conclusion and reasoning contained within.
I'll provide my cruxes point-by-point,
Notably, humans were once much less powerful, in our hunter-gatherer days, but over time, through the gradual process of accumulating technology, knowledge, and culture, humans now possess vast productive capacities that far outstrip our ancient powers.
Similarly, our ability to coordinate through language also plays a huge role in explaining our power compared to other animals. But, on a first approximation, other animals can't coordinate at all, making this distinction much less impressive. The first AGIs we construct will be born into a culture already capable of coordinating, and sharing knowledge, making the potential power difference between AGI and humans relatively much smaller than between humans and other animals, at least at first.
I basically buy the story that human intelligence is less useful than human coordination; i.e. it's the intelligence of "humanity" the entity that matters, with the intelligence of individual humans relevant only as, like, subcomponents of that entity.
But... shouldn't this mean you expect AGI civilization to totally dominate human civilization? They can read each other's source code, and thus trust much more deeply! They can transmit information between them at immense bandwidths! They can clone their minds and directly learn from each other's experiences!
This is 100% correct, and part of why I expect the focus on superintelligence, while literally true, is bad for AI outreach. There's a much simpler (and empirically, in my experience, more convincing) explanation of why we lose to even an AI with an IQ of 110. It is Dath Ilan, and we are Earth. Coordination is difficult for humans and the easy part for AIs.
I will note that Eliezer wrote That Alien Message a long time ago, I think in part to try to convey the issue to people with this perspective, but it's mostly about "information-theoretic bounds are probably not going to be tight" in a simulation-y universe instead of "here's what coordination between computers looks like today". I do predict the coordination point would be good to include in more of the intro materials.
But... shouldn't this mean you expect AGI civilization to totally dominate human civilization? They can read each other's source code, and thus trust much more deeply! They can transmit information between them at immense bandwidths! They can clone their minds and directly learn from each other's experiences!
I don't think it's obvious that this means that AGI is more dangerous, because it means that for a fixed total impact of AGI, the AGI doesn't have to be as competent at individual thinking (because it leans relatively more on group thinking). And so at the point where the AGIs are becoming very powerful in aggregate, this argument pushes us away from thinking they're good at individual thinking.
Also, it's not obvious that early AIs will actually be able to do this if their creators don't find a way to train them to have this affordance. ML as currently practiced doesn't normally produce AIs that can helpfully share mind-states, and it probably requires non-trivial effort to hook them up correctly to be able to share mind-state.
Nice! Thanks! I'll give my commentary on your commentary, also point by point. Your stuff italicized, my stuff not. Warning: Wall of text incoming! :)
I think raw intelligence, while important, is not the primary factor that explains why humanity-as-a-species is much more powerful than chimpanzees-as-a-species. Notably, humans were once much less powerful, in our hunter-gatherer days, but over time, through the gradual process of accumulating technology, knowledge, and culture, humans now possess vast productive capacities that far outstrip our ancient powers.
Similarly, our ability to coordinate through language also plays a huge role in explaining our power compared to other animals. But, on a first approximation, other animals can't coordinate at all, making this distinction much less impressive. The first AGIs we construct will be born into a culture already capable of coordinating, and sharing knowledge, making the potential power difference between AGI and humans relatively much smaller than between humans and other animals, at least at first.
I don't think I understand this argument. Yes, humans can use language to coordinate & benefit from cultural evolution, so an AI that...
You said you weren't replying to any specific point Eliezer was making, but I think it's worth pointing out that when he brings up AlphaGo, he's not talking about the 2 years it took Google to build a Go-playing AI - remarkable and surprising as that was - but rather the 3 days it took AlphaZero to go from not knowing anything about the game beyond the basic rules to being better than all humans and the earlier AIs.
I hate how convincing so many different people are. I wish I just had some fairly static, reasoned perspective based on object-level facts and not persuasion strings.
Note that convincing is a 2-place word. I don't think I can transfer this ability, but I haven't really tried, so here's a shot:
The target is: "reading as dialogue." Have a world-model. As you read someone else, be simultaneously constructing / inferring "their world-model" and holding "your world-model", noting where you agree and disagree.
If you focus too much on "how would I respond to each line", you lose the ability to listen and figure out what they're actually pointing at. If you focus too little on "how would I respond to this", you lose the ability to notice disagreements, holes, and notes of discord.
The first homework exercise I'd try is printing out something (probably with double-spacing) and writing your thoughts after each sentence: "uh huh", "wait what?", "yes and", "no but", etc. At the beginning you're probably going to be alternating between the two moves before you can do them simultaneously.
[Historically, I think I got this both from 'reading a lot', including a lot of old books, and also 'arguing on the internet' in forum environments that only sort of exist today, which was a helpful feedback loop for the relevant subskills, and of course whatever background factors made me do those activities.]
Some quick thoughts on these points:
Part of what makes it difficult for me to talk about alignment difficulty is that the concept doesn’t fit easily into my paradigm of thinking about the future of AI. If I am correct, for example, that AI services will be modular, marginally more powerful than what comes before, and numerous as opposed to monolithic, then there will not be one alignment problem, but many.
I could talk about potential AI safety principles, healthy cultural norms, and specific engineering issues, but not “a problem” called “aligning the AI” — a soft prerequisite for explaining how difficult “the problem” will be. Put another way, my understanding is that future AI alignment will be continuous with ordinary engineering, like cars and skyscrapers. We don’t ordinarily talk about how hard the problem of building a car is, in some sort of absolute sense, though there are many ways of operationalizing what that could mean.
One question is how costly it is to build a car. We could then compare that cost to the overall consumer benefit that people get from cars, and from that, deduce whether and how many cars will be built. Similarly, we could ask about the size of the “alignment tax” (the cost of aligning an ...
First, some remarks about the meta-level:
The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so. Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly - such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn't write, so didn't try. I'm not particularly hopeful of this turning out to be true in real life, but I suppose it's one possible place for a "positive model violation" (miracle). The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.
Actually, I don't feel like I learned that much reading this list, compared to what I already knew. [EDIT: To be clear, this know...
There is a big chunk of what you're trying to teach which is not weird and complicated, namely: "find this other agent, and what their values are". Because "agents" and "values" are natural concepts, for reasons strongly related to "there's a relatively simple core structure that explains why complicated cognitive machines work".
This seems like it must be true to some degree, but "there is a big chunk" feels a bit too strong to me.
Possibly we don't disagree, and just have different notions of what a "big chunk" is. But some things that make the chunk feel smaller to me:
Humans are at least a little coherent, or we would never get anything done; but we aren't very coherent, so the project of piecing together 'what does the human brain as a whole "want"' can be vastly more difficult than the problem of figuring out what a coherent optimizer wants.
This is a point where I feel like I do have a substantial disagreement with the "conventional wisdom" of LessWrong.
First, LessWrong began with a discussion of cognitive biases in human irrationality, so this naturally became a staple of the local narrative. On the other hand, I think that a lot of presumed irrationality is actually rational but deceptive behavior (where the deception runs so deep that it's part of even our inner monologue). There are exceptions, like hyperbolic discounting, but not that many.
Second, the only reason why the question "what X wants" can make sense at all, is because X is an agent. As a corollary, it only makes sense to the extent that X is an agent. Therefore, if X is not entirely coherent then X's preferences are only approximately defined, and hence we only need to infer them approximately. So, the added difficulty of inferring X's preferences, resulting from the partial ...
Second, the only reason why the question "what X wants" can make sense at all, is because X is an agent. As a corollary, it only makes sense to the extent that X is an agent.
I'm not sure this is true; or if it's true, I'm not sure it's relevant. But assuming it is true...
Therefore, if X is not entirely coherent then X's preferences are only approximately defined, and hence we only need to infer them approximately.
... this strikes me as not capturing the aspect of human values that looks strange and complicated. Two ways I could imagine the strangeness and complexity cashing out as 'EU-maximizer-ish' are:
In both cases, the fact that my brain isn't a single coherent EU maximiz...
On Twitter, Eric Rogstad wrote:
"the thing where it keeps being literally him doing this stuff is quite a bad sign"
I'm a bit confused by this part. Some thoughts on why it seems odd for him (or others) to express that sentiment...
1. I parse the original as, "a collection of EY's thoughts on why safe AI is hard". It's EY's thoughts, why would someone else (other than @robbensinger) write a collection of EY's thoughts?
(And if we generalize to asking why no-one else would write about why safe AI is hard, then what about Superintelligence, or the AI stuff in cold-takes, or ...?)
2. Was there anything new in this doc? It's prob useful to collect all in one place, but we don't ask, "why did no one else write this" for every bit of useful writing out there, right?
Why was it so overwhelmingly important that someone write this summary at this time, that we're at all scratching our heads about why no one else did it?
Copying over my reply to Eric:
...My shoulder Eliezer (who I agree with on alignment, and who speaks more bluntly and with less hedging than I normally would) says:
- The list is true, to the best of my knowledge, and the details actually matter.
Many civilizations try to make a canonical
I think most worlds that successfully navigate AGI risk have properties like:
The historical figures who basically saw it (George Eliot 1879: "will the creatures who are to transcend and finally supersede us be steely organisms [...] performing with infallible exactness more than everything that we have performed with a slovenly approximativeness and self-defeating inaccuracy?"; Turing 1951: "At some stage therefore we should have to expect the machines to take control") seem to have done so in the spirit of speculating about the cosmic process. The idea of coming up with a plan to solve the problem is an additional act of audacity; that's not really how things have ever worked so far. (People make plans about their own lives, or their own businesses; at most, a single country; no one plans world-scale evolutionary transitions.)
-3. I'm assuming you are already familiar with some basics, and already know what 'orthogonality' and 'instrumental convergence' are and why they're true.
I think this is actually the part that I most "disagree" with. (I put "disagree" in quotes, because there are forms of these theses that I'm persuaded by. However, I'm not so confident that they'll be relevant for the kinds of AIs we'll actually build.)
1. The smart part is not the agent-y part
It seems to me that what's powerful about modern ML systems is their ability to do data compression / pattern recognition. That's where the real cognitive power (to borrow Eliezer's term) comes from. And I think that this is the same as what makes us smart.
GPT-3 does unsupervised learning on text data. Our brains do predictive processing on sensory inputs. My guess (which I'd love to hear arguments against!) is that there's a true and deep analogy between the two, and that they lead to impressive abilities for fundamentally the same reason.
If so, it seems to me that that's where all the juice is. That's where the intelligence comes from. (In the past, I've called this the core smarts of our brains.)
On this view, all the agent-y, planful...
GPT-3 does unsupervised learning on text data. Our brains do predictive processing on sensory inputs. My guess (which I'd love to hear arguments against!) is that there's a true and deep analogy between the two, and that they lead to impressive abilities for fundamentally the same reason.
Agree that self-supervised learning powers both GPT-3 updates and human brain world-model updates (details & caveats). (Which isn’t to say that GPT-3 is exactly the same as the human brain world-model—there are infinitely many different possible ML algorithms that all update via self-supervised learning).
However…
If so, it seems to me that that's where all the juice is. That's where the intelligence comes from … if agency is not a fundamental part of intelligence, and rather something that can just be added in on top, or not, and if we're at a loss for how to either align a superintelligent agent with CEV or else make it corrigible, then why not try to avoid creating the agent part of superintelligent agent?
I disagree; I think the agency is necessary to build a really good world-model, one that includes new useful concepts that humans have never thought of.
Without the agency, some of the things ...
For example, I claim that while AlphaGo could be said to be agent-y, it does not care about atoms. And I think that we could make it fantastically more superhuman at Go, and it would still not care about atoms. Atoms are just not in the domain of its utility function.
In particular, I don't think it has an incentive to break out into the real world to somehow get itself more compute, so that it can think more about its next move. It's just not modeling the real world at all. It's not even trying to rack up a bunch of wins over time. It's just playing the single platonic game of Go.
I would distinguish three ways in which different AI systems could be said to "not care about atoms":
I agree with pretty much everything here, and I would add into the mix two more claims that I think are especially cruxy and therefore should maybe be called out explicitly to facilitate better discussion:
Claim A: “There’s no defense against an out-of-control omnicidal AGI, not even with the help of an equally-capable (or more-capable) aligned AGI, except via aggressive outside-the-Overton-window acts like preventing the omnicidal AGI from being created in the first place.”
I think this claim is true, on account of gray goo and lots of other things, and I suspect Eliezer does too, and I’m pretty sure other people disagree with this claim.
If someone disagrees with this claim (i.e., if they think that if DeepMind can make an aligned and Overton-window-abiding “helper” AGI, then we don’t have to worry about Meta making a similarly-capable out-of-control omnicidal misaligned AGI the following year, because DeepMind’s AGI will figure out how to protect us), and also believes in extremely slow takeoff, I can see how such a person might be substantially less pessimistic about AGI doom than I am.
Claim B: “Shortly after (i.e., years not decades after) we have dangerous AGI, we will have dang...
I think this claim is true, on account of gray goo and lots of other things, and I suspect Eliezer does too, and I’m pretty sure other people disagree with this claim.
If you have robust alignment, or AIs that are rapidly bootstrapping their level of alignment fast enough to outpace the danger of increased capabilities, aligned AGI could get through its intelligence explosion to get radically superior technology and capabilities. It could also get a head start on superexponential replication in space, so that no follower could ever catch up, and enough tech and military hardware to neutralize any attacks on it (and block attacks on humans via nukes, bioweapons, robots, nanotech, etc). That wouldn't work if there are things like vacuum collapse available to attackers, but we don't have much reason to expect that from current science and the leading aligned AGI would find out first.
That could be done without any violation of the territory of other sovereign states. The legality of grabbing space resources is questionable in light of the Outer Space Treaty, but commercial exploitation of asteroids is in the Overton window. The superhuman AGI would also be in a good position to per...
I agree with these two points. I think an aligned AGI actually able to save the world would probably take initial actions that look pretty similar to those an unaligned AGI would take. Lots of seizing power, building nanotech, colonizing out into space, self-replication, etc.
Depends on what DeepMind does with the AI, right?
Maybe DeepMind uses their AI in very narrow, safe, low-impact ways to beat ML benchmarks, or read lots of cancer biology papers and propose new ideas about cancer treatment.
Or alternatively, maybe DeepMind asks their AI to undergo recursive self-improvement and build nano-replicators in space, etc., like in Carl Shulman’s reply.
I wouldn’t have thought that the latter is really in the Overton window. But what do I know.
You could also say “DeepMind will just ask their AI what they should do next”. If they do that, then maybe the AI (if they’re doing really great on safety such that the AI answers honestly and helpfully) will reply: “Hey, here’s what you should do, you should let me undergo recursive-self-improvement, and then I’ll be able to think of all kinds of crazy ways to destroy the world, and then I can think about how to defend against all those things”. But if DeepMind is being methodical & careful enough that their AI hasn’t destroyed the world already by this point, I’m inclined to think that they’re also being methodical & careful enough that when the AI proposes to do that, DeepMind will say, “Umm, no, that’s total...
explain to other AGI developers how to make theirs safe or even just give them a safe design (maybe homomorphically encrypted to prevent modification, but they might not trust that)
What if the next would-be AGI developer rejects your “explanation”, and has their own great ideas for how to make an even better next-gen AGI that they claim will work better, and so they discard your “gift” and proceed with their own research effort?
I can think of at least two leaders of would-be AGI development efforts (namely Yann LeCun of Meta and Jeff Hawkins of Numenta) who believe (what I consider to be) spectacularly stupid things about AGI x-risk, and have believed those things consistently for decades, despite extensive exposure to good counter-arguments.
Or what if the next would-be AGI developer agrees with you and accepts your “gift”, and so does the one after that, and the one after that, but not the twelfth one?
...have aligned AGI monitoring the internet and computing resources, and alert authorities of [anomalies] that might signal new AGI developments. Require that AGI developments provide proof that they were designed according to one of a set of approved designs, or pass some tests determi
Found this to be an interesting list of challenges, but I disagree with a few points. (Not trying to be comprehensive here, just a few thoughts after the first read-through.)
But it's much safer to deploy AI iteratively; increasing the stakes, time horizons, and autonomy a little bit each time. With this iterative approach to deployment, you only need to generalize a little bit out of distribution. Further, you can use Agent N to help you closely supervise Agent N+1 before giving it any power.
My model of Eliezer claims that there are some capabilities that are 'smooth', like "how large a times table you've memorized", and some are 'lumpy', like "whether or not you see the axioms behind arithmetic." While it seems plausible that we can iteratively increase smooth capabilities, it seems much less plausible for lumpy capabilities.
A specific example: if you have a neural network with enough capacity to 1) memorize specific multiplication Q+As and 2) implement a multiplication calculator, my guess is that during training you'll see a discontinuity in how many pairs of numbers it can successfully multiply.[1] It is not obvious to me whether or not there are relevant capabilities like this that we'll "find with neural nets" instead of "explicitly programming in"; probably we will just build AlphaZero so that it uses MCTS instead of finding MCTS with grad...
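For what it's worth, the multiplication example above is cheap to test directly. Here's a rough experiment sketch (my own, in the style of the published "grokking" results, not something from the comment): train a small network on modular multiplication from a fraction of all pairs and watch held-out accuracy over training. The hypothesized discontinuity would show up as a long plateau near chance followed by a comparatively sharp jump; the hyperparameters below are guesses, not tuned values.

```python
import torch
import torch.nn as nn

P = 97                                   # small modulus so this runs quickly
pairs = [(a, b) for a in range(P) for b in range(P)]
torch.manual_seed(0)
perm = torch.randperm(len(pairs))
split = int(0.4 * len(pairs))            # train on 40% of all pairs
train_idx, test_idx = perm[:split].tolist(), perm[split:].tolist()

def encode(idx):
    ab = torch.tensor([pairs[i] for i in idx])
    x = torch.zeros(len(idx), 2 * P)
    x[torch.arange(len(idx)), ab[:, 0]] = 1.0        # one-hot of a
    x[torch.arange(len(idx)), P + ab[:, 1]] = 1.0    # one-hot of b
    y = (ab[:, 0] * ab[:, 1]) % P                    # label: a*b mod P
    return x, y

x_tr, y_tr = encode(train_idx)
x_te, y_te = encode(test_idx)

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20_001):
    opt.zero_grad()
    loss = loss_fn(model(x_tr), y_tr)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            acc = (model(x_te).argmax(dim=-1) == y_te).float().mean().item()
        print(f"step {step:6d}  train loss {loss.item():.3f}  held-out acc {acc:.3f}")
```

Whether the jump appears, and how sharp it is, depends a lot on the regularization and the training fraction, which is part of why I'd frame this as an experiment to run rather than a settled prediction.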
Several of the points here are premised on needing to do a pivotal act that is way out of distribution from anything the agent has been trained on. But it's much safer to deploy AI iteratively; increasing the stakes, time horizons, and autonomy a little bit each time.
To do what, exactly, in this nice iterated fashion, before Facebook AI Research destroys the world six months later? What is the weak pivotal act that you can perform so safely?
Human raters make systematic errors - regular, compactly describable, predictable errors. [...]

This is indeed one of the big problems of outer alignment, but there's lots of ongoing research and promising ideas for fixing it. Namely, using models to help amplify and improve the human feedback signal. Because P != NP, it's easier to verify proofs than to write them.
When the rater is flawed, cranking up the power to NP levels blows up the P part of the system.
To do what, exactly, in this nice iterated fashion, before Facebook AI Research destroys the world six months later? What is the weak pivotal act that you can perform so safely?
Do alignment & safety research, set up regulatory bodies and monitoring systems.
When the rater is flawed, cranking up the power to NP levels blows up the P part of the system.
Not sure exactly what this means. I'm claiming that you can make raters less flawed, for example, by decomposing the rating task, and providing model-generated critiques that help with their rating. Also, as models get more sample efficient, you can rely more on highly skilled and vetted raters.
Not sure exactly what this means.
My read was that for systems where you have rock-solid checking steps, you can throw arbitrary amounts of compute at searching for things that check out and trust them, but if there's any crack in the checking steps, then things that 'check out' aren't trustable, because the proposer can have searched an unimaginably large space (from the rater's perspective) to find them. [And from the proposer's perspective, the checking steps are the real spec, not whatever's in your head.]
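A toy numeric version of that read (my own illustration, not anything from the thread): give a proposer a flawed checker and an increasing search budget, and watch what fraction of its accepted outputs are actually bad. The 1% "crack" below is an arbitrary stand-in for whatever systematic rater error exists.

```python
import random
random.seed(0)

def true_quality(x: float) -> bool:
    """Ground truth the overseer cares about (never directly observed)."""
    return x < 0.5

def flawed_check(x: float) -> bool:
    """Agrees with ground truth except on a 1% sliver of input space."""
    if x > 0.99:                 # the "crack": bad proposals here still pass
        return True
    return true_quality(x)

def proposer(search_budget: int):
    """Searches for any proposal that passes the check, preferring high x
    (a stand-in for whatever the proposer is actually optimizing for)."""
    candidates = [random.random() for _ in range(search_budget)]
    passing = [x for x in candidates if flawed_check(x)]
    return max(passing) if passing else None

for budget in (1, 10, 1_000, 100_000):
    picks = [proposer(budget) for _ in range(200)]
    bad = sum(1 for x in picks if x is not None and not true_quality(x))
    print(f"search budget {budget:>7}: accepted-but-bad fraction = {bad / len(picks):.2f}")
```

The checker's 99% agreement with ground truth stops mattering once the proposer searches hard enough: its accepted outputs end up drawn almost entirely from the sliver where the checker is wrong, which is one way to read "cranking up the power to NP levels blows up the P part of the system."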
In general, I think we can get a minor edge from "checking AI work" instead of "generating our own work" and that doesn't seem like enough to tackle 'cognitive megaprojects' (like 'cure cancer' or 'develop a pathway from our current society to one that can reliably handle x-risk' or so on). Like, I'm optimistic about "current human scientists use software assistance to attempt to cure cancer" and "an artificial scientist attempts to cure cancer" and pretty pessimistic about "current human scientists attempt to check the work of an artificial scientist that is attempting to cure cancer." It reminds me of translators who complained pretty bitterly about being given machine-transl...
I did, briefly. I ask that you not do so yourself, or anybody else outside one of the major existing organizations, because I expect that will make things worse as you annoy him and fail to phrase your arguments in any way he'd find helpful.
Other MIRI staff have also chatted with Yann. One co-worker told me that he was impressed with Yann's clarity of thought on related topics (e.g., he has some sensible, detailed, reductionist models of AI), so I'm surprised things haven't gone better.
Non-MIRI folks have talked to Yann too; e.g., Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More.
There was also a debate between Yann and Stuart Russell on Facebook, which got discussed here:
For a more comprehensive writeup of some stuff related to the "annoy him and fail to phrase your arguments helpfully" point, see Idea Inoculation and Inferential Distance.
My view is that if Yann continues to be interested in arguing about the issue, then there's something to work with, even if he's skeptical; the real worry is if he's stopped talking to anyone about it (I have no idea personally what his state of mind is right now).
I think until recently, I've been consistently more pessimistic than Eliezer about AI existential safety. Here's a 2004 SL4 post for example where I tried to argue against MIRI (SIAI at the time) trying to build a safe AI (and again in 2011). I've made my own list of sources of AI risk that's somewhat similar to this list. But it seems to me that there are still various "outs" from certain doom, such that my probability of a good outcome is closer to 20% (maybe a range of 10-30% depending on my mood) than 1%.
- Human thought partially exposes only a partially scrutable outer surface layer. Words only trace our real thoughts. Words are not an AGI-complete data representation in its native style. The underparts of human thought are not exposed for direct imitation learning and can't be put in any dataset. This makes it hard and probably impossible to train a powerful system entirely on imitation of human words or other human-legible contents, which are only impoverished subsystems of human thoughts; unless that system is powerful enough to contain inner intelligences figuring out the humans, and at that point it is no longer really working as imitative human thought.
One of the...
[This is a nitpick of the form "one of your side-rants went a bit too far IMO;" feel free to ignore]
The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so. Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly - such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn't write, so didn't try. ... The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.
The third option this seems to miss is that there are people who could have written this document, but they also thought they had better things to do than write it. I'm thinking of people like Paul Christiano, Nate Soares, John W...
I'm thinking of people like Paul Christiano, Nate Soares, John Wentworth, Ajeya Cotra... [...] I do agree with you that they seem to on average be way way too optimistic, but I don't think it's because they are ignorant of the considerations and arguments you've made here.
I don't think Nate is that much more optimistic than Eliezer, but I believe Eliezer thinks Nate couldn't have generated enough of the list in the OP, or couldn't have generated enough of it independently ("using the null string as input").
Reading this post made me more optimistic about alignment and AI. My suspension of disbelief snapped; I realized how vague and bad a lot of these "classic" alignment arguments are, and how many of them are secretly vague analogies and intuitions about evolution.
While I agree with a few points on this list, I think this list is fundamentally misguided. The list is written in a language which assigns short encodings to confused and incorrect ideas. I think a person who tries to deeply internalize this post's worldview will end up more confused about alignment and AI, and urge new researchers to not spend too much time trying to internalize this post's ideas. (Definitely consider whether I am right in my claims here. Think for yourself. If you don't know how to think for yourself, I wrote about exactly how to do it! But my guess is that deeply engaging with this post is, at best, a waste of time.[1])
I think this piece is not "overconfident", because "overconfident" suggests that Lethalities is simply assigning extreme credences to reasonable questions (like "is deceptive alignment the default?"). Rather, I think both its predictions and questions are not reasonable because they are no...
I would summarize a dimension of the difficulty like this. There are the conditions that give rise to intellectual scenes, intellectual scenes being necessary for novel work in ambiguous domains. There are the conditions that give rise to the sort of orgs that output actions consistent with something like Six Dimensions of Operational Adequacy. The intersection of these two things is incredibly rare but not unheard of. The Manhattan Project was a Scene that had security mindset.

This is why I am not that hopeful. Humans are not the ones building the AGI, egregores are, and spending egregore sums of money. It is very difficult for individuals to support a scene of such magnitude, even if they wanted to. Ultra high net worth individuals seem much poorer relative to the wealth of society than in the past, where scenes and universities (a scene generator) could be funded by individuals or families. I'd guess this is partially because the opportunity cost for smart people is much higher now, and you need to match that (cue title card: Baumol's cost disease kills everyone). In practice I expect some will give objections along various seemingly practical lines, but my experience so far is...
Note: I think there's a bunch of additional reasons for doom, surrounding "civilizational adequacy / organizational competence / societal dynamics". Eliezer briefly alluded to these, but AFAICT he's mostly focused on lethality that comes "early", and then didn't address them much. My model of Andrew Critch has a bunch of concerns about doom that show up later, because there's a bunch of additional challenges you have to solve if AI doesn't dramatically win/lose early on (i.e. multi/multi dynamics and how they spiral out of control)
I know a bunch of people whose hope funnels through "We'll be able to carefully iterate on slightly-smarter-than-human-intelligences, build schemes to play them against each other, leverage them to make some progress on alignment that we can use to build slightly-more-advanced-safer-systems". (Let's call this the "Careful Bootstrap plan")
I do actually feel nonzero optimism about that plan, but when I talk to people who are optimistic about that I feel a missing mood about the kind of difficulty that is involved here.
I'll attempt to write up some concrete things here later, but wanted to note this for now.
What concerns me the most is the lack of any coherent effort anywhere, towards solving the biggest problem: identifying a goal (value system, utility function, decision theory, decision architecture...) suitable for an autonomous superhuman AI.
In these discussions, Coherent Extrapolated Volition (CEV) is the usual concrete formulation of what such a goal might be. But I've now learned that MIRI's central strategy is not to finish figuring out the theory and practice of CEV - that's considered too hard (see item 24 in this post). Instead, the hope is to use safe AGI to freeze all unsafe AGI development everywhere, for long enough that humanity can properly figure out what to do. Presumably this freeze (the "pivotal act") would be carried out by whichever government or corporation or university crossed the AGI threshold first; ideally there might even become a consensus among many of the contenders that this is the right thing to do.
I think it's very appropriate that some thought along these lines be carried out. If AGI is a threat to the human race, and it arrives before we know how to safely set it free, then we will need ways to try to neutralize that dangerous potenti...
There's shard theory, which aims to describe the process by which values form in humans. The eventual aim is to understand value formation well enough that we can do it in an AI system. I also think figuring out human values, value reflection and moral philosophy might actually be a lot easier than we assume. E.g., the continuous perspective on agency / values is pretty compelling to me and changes things a lot, IMO.
Thank you, this was very helpful. As a bright-eyed youngster, it's hard to make sense of the bitterness and pessimism I often see in the field. I've read the old debates, but I didn't participate in them, and that probably makes them easier to dismiss. Object level arguments like these help me understand your point of view.
Alright, now that I've read this post, I'll try to respond to what I think you got wrong, and importantly illustrate some general principles.
To respond to this first:
...
3. We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again. This includes, for example: (a) something smart enough to build a nanosystem which has been explicitly authorized to build a nanosystem; or (b) something smart enough to build a nanosystem and also smart enough to gain unauthorized access to the Internet and pay a human to put together the ingredients for a nanosystem; or (c) something smart enough to get unauthorized access to the Internet and build something smarter than itself on the number of machines it can hack; or (d) something smart enough to treat humans as manipulable machinery and which has any authorized or unauthorized two-way causal channel with humans; or (e) something smart enough to improve itself enough to do (b) or (d); etcetera. We can gather all sorts of information beforehand from less
Mod note: I activated two-axis voting on this post, since it seemed like it would make the conversation go better.
New users are pretty confused by it when I've done some user-testing with it, so I think it needs some polish and better UI before we can launch it sitewide, but I am pretty excited about doing so after that.
If someone could find a way to rewrite this post, except in language comprehensible to policymakers, tech executives, or ML researchers, then it would probably achieve a lot.
Yes, please do rewrite the post, or make your own version of a post like this!! :) I don't suggest trying to persuade arbitrary policymakers of AGI risk, but I'd be very keen on posts like this optimized to be clear and informative to different audiences. Especially groups like 'lucid ML researchers who might go into alignment research', 'lucid mathematicians, physicists, etc. who might go into alignment research', etc.
Suggestion: make it a CYOA-style interactive piece, where the reader is tasked with aligning AI, and could choose from a variety of approaches which branch out into sub-approaches and so on. All of the paths, of course, bottom out in everyone dying, with detailed explanations of why. This project might then evolve based on feedback, adding new branches that counter counter-arguments made by people who played it and weren't convinced. Might also make several "modes", targeted at ML specialists, general public, etc., where the text makes different tradeoffs regarding technicality vs. vividness.
I'd do it myself (I'd had the idea of doing it before this post came out, and my preliminary notes covered much of the same ground, I feel the need to smugly say), but I'm not at all convinced that this is going to be particularly useful. Attempts to defeat the opposition by building up a massive evolving database of counter-arguments have been made in other fields, and so far as I know, they never convinced anybody.
The interactive factor would be novel (as far as I know), but I'm still skeptical.
(A... different implementation might be to use a fine-tuned language model for this; make it an AI Dungeon kind of setup, where it provides specialized counter-arguments for any suggestion. But I expect it to be less effective than a more coarse hand-written CYOA, since the readers/players would know that the thing they're talking to has no idea what it's talking about, so would disregard its words.)
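For concreteness, the branching structure being proposed above doesn't need anything fancier than a tree of nodes. Here's a minimal sketch (mine, with placeholder node names and text, not a real taxonomy of alignment approaches):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    title: str
    text: str                                        # explanation shown to the reader
    children: dict = field(default_factory=dict)     # choice label -> Node

# Placeholder content only; a real version would have hundreds of nodes.
root = Node(
    "You are tasked with aligning AGI",
    "Pick an approach.",
    {
        "Boxing": Node("Boxing", "Why containment fails...", {
            "Airgap it": Node("Airgap", "This path bottoms out in everyone dying because ..."),
        }),
        "Value learning": Node("Value learning", "Why naive value learning fails..."),
    },
)

def play(node: Node) -> None:
    """Tiny text loop: walk the tree until a leaf (a failure explanation)."""
    while True:
        print(f"\n== {node.title} ==\n{node.text}")
        if not node.children:
            print("(This path bottoms out here.)")
            return
        for i, label in enumerate(node.children, 1):
            print(f"  {i}. {label}")
        choice = list(node.children)[int(input("Choose: ")) - 1]
        node = node.children[choice]

# play(root)   # uncomment to run interactively
```

The work, of course, would be in writing the many branches and keeping the failure explanations honest, not in the scaffolding.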
Arbital was a very conjunctive project, trying to do many different things, with a specific team, at a specific place and time. I wouldn't write off all Arbital-like projects based on that one data point, though I update a lot more if there are lots of other Arbital-ish things that also failed.
Not saying that this should be MIRI's job, rather stating that I'm confused because I feel like we as a community are not taking an action that would seem obvious to me.
I wrote about this a bit before, but in the current world my impression is that actually we're pretty capacity-limited, and so the threshold is not "would be good to do" but "is better than my current top undone item". If you see something that seems good to do that doesn't have much in the way of unilateralist risk, you doing it is probably the right call. [How else is the field going to get more capacity?]
So, here's a thing that I don't think exists yet (or, at least, it doesn't exist enough that I know about it to link it to you): who's out there, what 'areas of responsibility' do they think they have, what 'areas of responsibility' do they not want to have, what are the holes in the overall space? It probably is the case that there are lots of orgs that work on AI governance/policy, and each of them probably is trying to consider a narrow corner of space, instead of trying to hold 'all of it'.
So if someone says "I have an idea how we should regulate medical AI stuff--oh, CSET already exists, I should leave it to them", CSET's response will probably be "what? We focus solely on national security implications of AI stuff, medical regulation is not on our radar, let alone a place we don't want competition."
I should maybe note here there's a common thing I see in EA spaces that only sometimes make sense, and so I want to point at it so that people can deliberately decide whether or not to do it. In selfish, profit-driven worlds, competition is the obvious thing to do; when someone else has discovered that you can make profits by selling lemonade, you should maybe also try to sell lemo...
Eliezer, thanks for sharing these ideas so that more people can be on the lookout for failures. Personally, I think something like 15% of AGI dev teams (weighted by success probability) would destroy the world more-or-less immediately, and I think it's not crazy to think the fraction is more like 90% or higher (which I judge to be your view).
FWIW, I do not agree with the following stance, because I think it exposes the world to more x-risk:
So far as I'm concerned, if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chance of killing more than one billion people, I'll take it.
Specifically, I think a considerable fraction of the remaining AI x-risk facing humanity stems from people pulling desperate (unsafe) moves with AGI to head off other AGI projects. So, in that regard, I think that particular comment of yours is probably increasing x-risk a bit. If I were a 90%-er like you, it's possible I'd endorse it, but even then it might make things worse by encouraging more desperate unilateral actions.
That said, overall I think this post is a big help, because it helps to put responsibility in...
a considerable fraction of the remaining AI x-risk facing humanity stems from people pulling desperate (unsafe) moves with AGI to head off other AGI projects
In your post “Pivotal Act” Intentions, you wrote that you disagree with contributing to race dynamics by planning to invasively shut down AGI projects because AGI projects would, in reaction, try to maintain
the ability to implement their own pet theories on how safety/alignment should work, leading to more desperation, more risk-taking, and less safety overall.
Could you give some kind of very rough estimates here? How much more risk-taking do you expect, as a function of how many (and how prominent) "AI safety"-affiliated people declare invasive pivotal act intentions? How much risk-taking do you expect in the alternative, where there are other pressures (economic, military, social, whatever), but not pressure from pivotal act threats? How much safety (probability of AGI not killing everyone) do you think this buys? You write:
15% of AGI dev teams (weighted by success probability) would destroy the world more-or-less immediately
What about non-immediately, in each alternative?
Thanks Eliezer for writing up this list, it's great to have these arguments in one place! Here are my quick takes (which mostly agree with Paul's response).
Section A (strategic challenges?):
Agree with #1-2 and #8. Agree with #3 in the sense that we can't iterate in dangerous domains (by definition) but not in the sense that we can't learn from experiments on easier domains (see Paul's Disagreement #1).
Mostly disagree with #4 - I think that coordination not to build AGI (at least between Western AI labs) is difficult but feasible, especially after a warning shot. A single AGI lab that decides not to build AGI can produce compelling demos of misbehavior that can help convince other actors. A number of powerful actors coordinating not to build AGI could buy a lot of time, e.g. through regulation of potential AGI projects (auditing any projects that use a certain level of compute, etc) and stigmatizing deployment of potential AGI systems (e.g. if it is viewed similarly to deploying nuclear weapons).
Mostly disagree with the pivotal act arguments and framing (#6, 7, 9). I agree it is necessary to end the acute risk period, but I find it unhelpful when this is frame...
Since Divia said, and Eliezer retweeted, that good things might happen if people give their honest, detailed reactions:
My honest, non-detailed reaction is AAAAAAH. In more detail -
As a bystander who can understand this, and who finds the arguments and conclusions sound, I must say I feel very hopeless and "kinda" scared at this point. I'm living in at least an environment, if not a world, where even explaining something comparatively simple like how life extension is a net good is a struggle. Explaining or discussing this is definitely impossible - I've tried with the cleverer, more transhumanistic/rationalistic minded people I know, and it just doesn't click for them; to the contrary, I find people like to push in the other direction, as if it were a game.
And at the same time, I realize it is unlikely I can contribute anything remotely significant to a solution myself. So I can only spectate. This is literally maddening, especially so when most everyone seems to underreact.
This might sound absurd, but I legit think that there's something that most people can do. Being something like radically publicly honest and radically forgiving and radically threat-aware, in your personal life, could contribute to causing society in general to be radically honest and forgiving and threat-aware, which might allow people poised to press the Start button on AGI to back off.
ETA: In general, try to behave in a way such that if everyone behaved that way, the barriers to AGI researchers noticing that they're heading towards ending the world would be lowered / removed. You'll probably run up against some kind of resistance; that might be a sign that some social pattern is pushing us into cultural regimes where AGI researchers are pushed to do world-ending stuff.
...That requires, not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you; that is what makes somebody a peer of its author. It's guaranteed that some of my analysis is mistaken, though not necessarily in a hopeful direction. The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so. Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly - such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn't write, so didn't try. I'm not particularly hopeful of this turning out to be true in real life, but I suppose it's one possible place for a "positive model violation" (miracle). The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years
I tried something like this much earlier with a single question, "Can you explain why it'd be hard to make an AGI that believed 222 + 222 = 555", and got enough pushback from people who didn't like the framing that I shelved the effort.
My attempt (thought about it for a minute or two):
Because arithmetic is useful, and a self-contradictory version of arithmetic in which 222+222=555 lets you prove anything and is therefore useless. So a smart AI that wants to (and can) invent useful abstractions will invent its own arithmetic from scratch (isomorphic to our arithmetic, in which 222+222=444) and will use it for practical purposes, even if we can force it not to correct an obvious error.
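(Spelling out the "prove anything" step, under the standard axioms plus the forced belief; a quick sketch, nothing deeper:

$$222+222=444 \;\text{(standard arithmetic)},\qquad 222+222=555 \;\text{(forced belief)}\;\Rightarrow\; 444=555 \;\Rightarrow\; 0=111,$$

which contradicts the axiom that 0 is not the successor of anything, and from a contradiction every statement follows, by ex falso quodlibet.)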
Ok, so here's my take on the "222 + 222 = 555" question.
First, suppose you want your AI to not be durably wrong, so it should update on evidence. This is probably implemented by some process that notices surprises, goes back up the cognitive graph, and applies pressure to make it have gone the right way instead.
Now as it bops around the world, it will come across evidence about what happens when you add those numbers, and its general-purpose "don't be durably wrong" machinery will come into play. You need to not just sternly tell it "222 + 222 = 555" once, but have built machinery that will protect that belief from the update-on-evidence machinery, and which will also protect itself from the update-on-evidence machinery.
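(Here's a toy sketch of that first mechanism, with a made-up scalar "belief" and a made-up protection flag, just to make the shape concrete; not a claim about how any actual system implements it:)

```python
# Toy model: a single scalar "belief" about the value of 222 + 222, plus
# generic update-on-evidence machinery that nudges any belief toward what's
# actually observed. All names and numbers here are invented for illustration.

belief = 555.0        # the value we sternly told the system, once
protected = False     # whether extra machinery shields this belief from updates

def observe_evidence():
    # Whenever the system actually counts things in the world, it sees 444.
    return 444.0

for step in range(1000):
    surprise = observe_evidence() - belief    # prediction error ("noticing surprise")
    if not protected:
        belief += 0.01 * surprise             # generic "don't be durably wrong" pressure

print(round(belief))  # drifts to 444 unless `protected` is set; and the protection
                      # machinery itself also has to survive the update machinery
```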
Second, suppose you want your AI to have the ability to discover general principles. This is probably implemented by some process that notices patterns / regularities in the environment, and builds some multi-level world model out of it, and then makes plans in that multi-level world model. Now you also have some sort of 'consistency-check' machinery, which scans thru the map looking for inconsistencies between levels, goes back up the cognitive graph, and applies p...
I will absolutely 100% do it in the spirit of good epistemics.
Edit: I'm glad Eliezer didn't take me up on this lol
I'm not Eliezer, but my high-level attempt at this:
...[...] The things I'd mainly recommend are interventions that:
- Help ourselves think more clearly. (I imagine this including a lot of trying-to-become-more-rational, developing and following relatively open/honest communication norms, and trying to build better mental models of crucial parts of the world.)
- Help relevant parts of humanity (e.g., the field of ML, or academic STEM) think more clearly and understand the situation.
- Help us understand and resolve major disagreements. (Especially current disagreements, but also future disagreements, if we can e.g. improve our ability to double-crux in some fashion.)
- Try to solve the alignment problem, especially via novel approaches.
- In particular: the biggest obstacle to alignment seems to be 'current ML approaches are super black-box-y and produce models that are very hard to understand/interpret'; finding ways to better understand models produced by current techniques, or finding alternative techniques that yield more interpretable models, seems like where most of the action is.
- Think about the space of relatively-plausible "miracles" [i.e., positive model violations], think about future evide
Lots I disagree with here, so let's go through the list.
There are no pivotal weak acts.
Strong disagree.
EY and I don't seem to agree that "nuke every semiconductor fab" is a weakly pivotal act (since I think AI is hardware-limited and he thinks it is awaiting a clever algorithm). But I think even "nanobots that melt every GPU" could be built using an AI that is aligned in the "less than 50% chance of murdering us all" sense. For example, we could simulate a bunch of human-level scientists trying to build nanobots and also checking each other's work.
On anything like the standard ML paradigm, you would need to somehow generalize optimization-for-alignment you did in safe conditions, across a big distributional shift to dangerous conditions.
Nope. I think that you could build a useful AI (e.g. the hive of scientists) without doing any out-of-distribution stuff.
there is no known way to use the paradigm of loss functions, sensory inputs, and/or reward inputs, to optimize anything within a cognitive system to point at particular things within the environment
I am significantly more optimistic about explainable AI than EY.
...There is no analogous truth abou
For example, we could simulate a bunch of human-level scientists trying to build nanobots and also checking each other's work.
That is not passively safe, and therefore not weak. For now forget the inner workings of the idea: at the end of the process you get a design for nanobots that you have to build and deploy in order to do the pivotal act. So you are giving a system built by your AI the ability to act in the real world. So if you have not fully solved the alignment problem for this AI, you can't be sure that the nanobot design is safe unless you are capable enough to understand the nanobots yourself without relying on explanations from the scientists.
And even if we look into the inner details of the idea: presumably each individual scientist-simulation is not aligned (if they are, then for that you need to have solved the alignment problem beforehand). So you have a bunch of unaligned human-level agents who want to escape, who can communicate among themselves (at the very least they need to be able to share the nanobot designs with each other for criticism).
You'd need to be extremely paranoid and scrutinize each communication between the scientist-simulations to prevent them f...
If there was one thing that I could change in this essay, it would be to clearly outline that the existence of nanotechnology advanced enough to do things like melt GPUs isn't necessary even if it is sufficient for achieving singleton status and taking humanity off the field as a meaningful player.
Whenever I see people fixate on critiquing that particular point, I need to step in and point out that merely existing tools and weapons (is there a distinction?) suffice for a Superintelligence to be able to kill the vast majority of humans and reduce our threat to it to negligible levels. Be that wresting control of nuclear arsenals to initiate MAD or simply extrapolating on gain-of-function research to produce extremely virulent yet lethal pathogens that can't be defeated before the majority of humans are infected, such options leave a small minority of humans alive to cower in the wreckage until the biosphere is later dismantled.
That's orthogonal to the issue of whether such nanotechnology is achievable for a Superintelligent AGI; it merely reduces the inferential distance the message has to cross, since it doesn't demand familiarity with Drexler.
(Advanced biotechnology already is nanotechnology, but the point is that no stunning capabilities need to be unlocked for an unboxed AI to become immediately lethal)
The counter-concern is that if humanity can't talk about things that sound like sci-fi, then we just die. We're inventing AGI, whose big core characteristic is 'a technology that enables future technologies'. We need to somehow become able to start actually talking about AGI.
One strategy would be 'open with the normal-sounding stuff, then introduce increasingly weird stuff only when people are super bought into the normal stuff'. Some problems with this:
I find that more plausible. Also horrifying and worth fighting against, but not what EY is saying.
Note that EY is saying "there exists a real plan that is at least as dangerous as this one"; if you think there is such a plan, then you can agree with the conclusion, even if you don't agree with his example. [There is an epistemic risk here, if everyone mistakenly believes that a different doomsday plan is possible when someone else knows why that specific plan won't work, and so if everyone pooled all their knowledge they could know that none of the plans will work. But I'm moderately confident we're instead in a world with enough vulnerabilities that broadcasting them makes things worse instead of better.]
While I share a large degree of pessimism for similar reasons, I am somewhat more optimistic overall.
Most of this comes from generic uncertainty and epistemic humility; I'm a big fan of the inside view, but it's worth noting that this can (roughly) be read as a set of 42 statements that need to be true for us to in fact be doomed, and statistically speaking it seems unlikely that all of these statements are true.
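(To make that rough statistical intuition concrete, under the toy assumption that the claims were independent and each 95% likely to be true:

$$0.95^{42} \approx 0.12,$$

and at 90% per claim the conjunction drops to roughly 0.01. The claims obviously aren't independent, so this is just an illustration of how conjunctions erode confidence.)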
However, there are some more specific points I can point to where I think you are overconfident, or at least not providing good reasons for such a high level of confidence (and to my knowledge nobody has). I'll focus on two disagreements which I think are closest to my true disagreements.
1) I think safe pivotal "weak" acts likely do exist. It seems likely that we can access vastly superhuman capabilities without inducing huge x-risk using a variety of capability control methods. If we could build something that was only N<<infinity times smarter than us, then intuitively it seems unlikely that it would be able to reverse engineer details of the outside world or other AI systems' source code (cf 35) necessary to break out of the box or start coo...
Thanks for writing this, I agree that people have underinvested in writing documents like this. I agree with many of your points, and disagree with others. For the purposes of this comment, I'll focus on a few key disagreements.
My model of this variety of reader has an inside view, which they will label an outside view, that assigns great relevance to some other data points that are not observed cases of an outer optimization loop producing an inner general intelligence, and assigns little importance to our one data point actually featuring the phenomenon in question. Consider skepticism, if someone is ignoring this one warning, especially if they are not presenting equally lethal and dangerous things that they say will go wrong instead.
There are some ways in which AGI will be analogous to human evolution. There are some ways in which it will be disanalogous. Any solution to alignment will exploit at least one of the ways in which it's disanalogous. Pointing to the example of humans without analysing the analogies and disanalogies more deeply doesn't help distinguish between alignment proposals which usefully exploit disanalogies, and proposals which don't.
...Alpha Zero blew pas
Maybe one way to pin down a disagreement here: imagine the minimum-intelligence AGI that could write this textbook (including describing the experiments required to verify all the claims it made) in a year if it tried. How many Yudkowsky-years does it take to safely evaluate whether following a textbook which that AGI spent a year writing will kill you?
Infinite? That can't be done?
Depends what the evil clones are trying to do.
Get me to adopt a solution wrong in a particular direction, like a design that hands the universe over to them? I can maybe figure out the first time through who's out to get me, if it's 200 Yudkowsky-years. If it's 200,000 Yudkowsky-years I think I'm just screwed.
Get me to make any lethal mistake at all? I don't think I can get to 90% confidence period, or at least, not without spending an amount of Yudkowsky-time equivalent to the untrustworthy source.
Humans don't explicitly pursue inclusive genetic fitness; outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction.
Humans haven't been optimized to pursue inclusive genetic fitness for very long, because humans haven't been around for very long. Instead they inherited the crude heuristics pointing towards inclusive genetic fitness from their cognitively much less sophisticated predecessors. And those still kinda work!
If we are still around in a couple of million years I wouldn't be surprised if there was inner alignment in the sense that almost all humans in almost all practically encountered environments end up consciously optimising inclusive genetic fitness.
More generally, there is no known way to use the paradigm of loss functions, sensory inputs, and/or reward inputs, to optimize anything within a cognitive system to point at particular things within the environment - to point to latent events and objects and properties in the environment, rather than relatively shallow functions of the sense data and reward.
Generally, I think that people draw the wrong conclusions from mesa-optimisers and...
The point I'm making is that the human example tells us that:
If we first realize that we can't code up our values, and therefore that alignment is hard, then when we realize that mesa-optimisation is a thing, we shouldn't update towards "alignment is even harder". We should update in the opposite direction.
Because the human example tells us that a mesa-optimiser can reliably point to a complex thing even if the optimiser points to only a few crude things.
But I only ever see these three points (the human example, our inability to code up values, mesa-optimisation) used separately to argue that "alignment is even harder than previously thought". Taken together, that is just not the picture.
Humans point to some complicated things, but not via a process that suggests an analogous way to use natural selection or gradient descent to make a mesa-optimizer point to particular externally specifiable complicated things.
I understand the first part of your comment as "sure, it's possible for minds to care about reality, but we don't know how to target value formation so that the mind cares about a particular part of reality." Is this a good summary?
Yes!
I was, first, pointing out that this problem has to be solvable, since the human genome solves it millions of times every day!
True! Though everyone already agreed (e.g., EY asserted this in the OP) that it's possible in principle. The updatey thing would be if the case of the human genome / brain development suggests it's more tractable than we otherwise would have thought (in AI).
Seems to me like it's at least a small update about tractability, though I'm not sure it's a big one? Would be interesting to think about the level of agreement between different individual humans with regard to 'how much particular external-world things matter'. Especially interesting would be cases where humans consistently, robustly care about a particular external-world thingie even though it doesn't have a simple sensory correlate.
(E.g., humans developing to care about sex is less promising insofar as it depends on sensory-level reinforcement such as orgasm...
I think this is correct. Shard theory is intended as an account of how inner misalignment produces human values. I also think that human values aren't as complex or weird as they introspectively appear.
I read an early draft of this a while ago and am glad to have it publicly available. And I do think the updates in structure/introduction were worth the wait. Thanks!
>There is no pivotal output of an AGI that is humanly checkable and can be used to safely save the world but only after checking it
This is a sort of surprising claim. From an abstract point of view, assuming NP >> P, checking can be way easier than inventing. To stick with your example, it kind of seems, at an intuitive guess, like a plan to use nanobots to melt all GPUs should be very complicated but not way superhumanly complicated? (Superhuman to invent, though.) Like, you show me the plans for the bootstrap nanofactory, the workhorse nanofactory, the standard nanobots, the software for coordinating the nanobots, the software for low-power autonomous behavior, the transportation around the world, the homing in on GPUs, and the melting process. That's really complicated, way more complicated than anything humans have done before, but not by 1000x? Maybe like 100x? Maybe only 10x if you count whole operating systems or scientific fields. Does this seem quantitatively in the right ballpark, and you're saying, that quantitatively large but not crazy amount of checking is infeasible?
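(A concrete toy version of the checking-vs-inventing asymmetry, with made-up numbers and nothing specific to nanobot plans:)

```python
# Toy illustration of "checking can be way easier than inventing".
# The numbers are arbitrary stand-ins; nothing here is about actual plans.

def find_factor(n):
    """'Inventing': brute-force search for a nontrivial factor. Slow."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no nontrivial factor found: n is prime

def check_factorization(n, p, q):
    """'Checking': one multiplication and two range checks. Fast and auditable."""
    return 1 < p < n and 1 < q < n and p * q == n

n = 999983 * 1000003          # a big number someone hands us
claimed = (999983, 1000003)   # the "design" we're asked to verify

print(check_factorization(n, *claimed))   # True, essentially instantly
# find_factor(n) gets there too, but only after roughly a million trial
# divisions here, and the search/verify gap widens enormously as n grows.
```

(Factoring is just the cleanest everyday case of the gap; whether a pivotal-act plan admits any similarly crisp check is, of course, the disputed part.)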
To point 4 and related ones, OpenAI has this on their charter page:
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
What about the possibility of persuading the top several biggest actors (DeepMind, FAIR, etc.) to agree to something like that? (Note that they define AGI on the page to mean "highly autonomous systems that outperform humans at most economically valuable work".) It's not very fleshed out, either the conditions that trigger the pledge or how the transition goes, but it's a start. The hope would be that someone would make something "sufficiently impressive to trigger the pledge" that doesn't quite kill us, and then ideally (a) the top actors stopping would buy us some time and (b) the top actors devoting their people to helping out (I figure they could write test suites at minimum) could accelerate the alignment work.
I see possible problems with this, but is this at least in the realm of "things worth trying"?
+9. This is a powerful set of arguments pointing out how humanity will literally go extinct soon due to AI development (or have something similarly bad happen to us). A lot of thought and research went into developing an understanding of the problem deep enough to produce this picture of the difficulties we face, and I'm extremely glad it was written up.
Is there a plausible pivotal act that doesn't amount to some variant of "cripple human civilization so that it can't make or use computers until it recovers"?
Use AGI to build fast-running high-fidelity human whole-brain emulations. Then run thousands of very-fast-thinking copies of your best thinkers. Seems to me this plausibly makes it realistic to keep tabs on the world's AGI progress, and locally intervene before anything dangerous happens, in a more surgical way rather than via mass property destruction of any sort.
Wow, 510 karma and counting. This post currently has the 14th most karma of all time, and the most for this year. Makes me think back to this excerpt from "Explainers Shoot High. Aim Low!":
A few years ago, an eminent scientist once told me how he'd written an explanation of his field aimed at a much lower technical level than usual. He had thought it would be useful to academics outside the field, or even reporters. This ended up being one of his most popular papers within his field, cited more often than anything else he'd written.
The lesson was not that his fellow scientists were stupid, but that we tend to enormously underestimate the effort required to properly explain things.
I'm confused about A6, from which I get "Yudkowsky is aiming for a pivotal act to prevent the formation of unaligned AGI that's outside the Overton Window and on the order of burning all GPUs". This seems counter to the notion in Q4 of Death with Dignity where Yudkowsky says
It's relatively safe to be around an Eliezer Yudkowsky while the world is ending, because he's not going to do anything extreme and unethical unless it would really actually save the world in real life, and there are no extreme unethical actions that would really actually save the world the way these things play out in real life, and he knows that. He knows that the next stupid sacrifice-of-ethics proposed won't work to save the world either, actually in real life.
I would estimate that burning all AGI-capable compute would disrupt every factor of the global economy for years and cause tens of millions of deaths[1], and that's what Yudkowsky considers the more mentionable example. Do the other options outside the Overton Window somehow not qualify as unsafe/extreme unethical actions (by the standards of the audience of Death with Dignity)? Has Yudkowsky changed his mind on what options would actually ...
Interventions on the order of burning all GPUs in clusters larger than 4 and preventing any new clusters from being made, including the reaction of existing political entities to that event and the many interest groups who would try to shut you down and build new GPU factories or clusters hidden from the means you'd used to burn them, would in fact really actually save the world for an extended period of time and imply a drastically different gameboard offering new hopes and options.
What makes me safe to be around is that I know that various forms of angrily acting out violently would not, in fact, accomplish anything like this. I would only do something hugely awful that would actually save the world. No such option will be on the table, and I, the original person who wasn't an idiot optimist, will not overestimate and pretend that something will save the world when it obviously-to-me won't. So I'm a relatively safe person to be around, because I am not the cartoon supervillain talking about necessary sacrifices to achieve greater goods when everybody in the audience knows that the greater good won't be achieved; I am the person in the audience rolling their eyes at the cartoon supervillain.
Having read the original post and many of the comments made so far, I'll add an epistemological observation that I have not seen others make yet quite so forcefully. From the original post:
Here, from my perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable [...]
I want to highlight that many of the different 'true things' on the long numbered list in the OP are in fact purely speculative claims about the probable nature of future AGI technology, a technology nobody has seen yet.
The claimed truth of several of these 'true things' is often backed up by nothing more than Eliezer's best-guess informed-gut-feeling predictions about what future AGI must necessarily be like. These predictions often directly contradict the best-guess informed-gut-feeling predictions of others, as is admirably demonstrated in the 2021 MIRI conversations.
Some of Eliezer's best guesses also directly contradict my own best-guess informed-gut-feeling predictions. I rank the credibility of my own informed guesses far above those of Eliezer.
So overall, based on my own best guesses here, I am much more optimistic about avoiding AGI ruin than Eliezer is. I am also much less dissatisfied about how much progress has been made so far.
I think it's a positive if alignment researchers feel like it's an allowed option to trust their own technical intuitions over the technical intuitions of this or that more-senior researcher.
Overly dismissing old-guard researchers is obviously a way the field can fail as well. But the field won't advance much at all if most people don't at least try to build their own models.
Koen also leans more on deference in his comment than I'd like, so I upvoted your 'deferential but in the opposite direction' comment as a corrective, handoflixue. :P But I think it would be a much better comment if it didn't conflate epistemic authority with "fame" (I don't think fame is at all a reliable guide to epistemic ability here), and if it didn't equate "appealing to your own guesses" with "anti-vaxxers".
Alignment is a young field; "anti-vaxxer" is a term you throw at people after vaccines have existed for 200 years, not a term you throw at the very first skeptical researchers arguing about vaccines in 1800. Even if the skeptics are obviously and decisively wrong at an early date (which indeed not-infrequently happens in science!), it's not the right way to establish the culture for those first scientific debates.
Why do you rate yourself "far above" someone who has spent decades working in this field?
Well put, valid question. By the way, did you notice how careful I was in avoiding any direct mention of my own credentials above?
I see that Rob has already written a reply to your comments, making some of the broader points that I could have made too. So I'll cover some other things.
To answer your valid question: If you hover over my LW/AF username, you can see that I self-code as the kind of alignment researcher who is also a card-carrying member of the academic/industrial establishment. In both age and academic credentials. I am in fact a more-senior researcher than Eliezer is. So the epistemology, if you are outside of this field and want to decide which one of us is probably more right, gets rather complicated.
Though we have disagreements, I should also point out some similarities between Eliezer and me.
Like Eliezer, I spend a lot of time reflecting on the problem of crafting tools that other people might use to improve their own ability to think about alignment. Specifically, these are not tools that can be used for the problem of triangulating between self-declared experts. Th...
We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world.
What is the argument for why it's not worth pursuing a pivotal act without our own AGI? I certainly would not say it was likely that current human actors could pull it off, but if we are in a "dying with more dignity" context anyway, it doesn't seem like the odds are zero.
My idea, which I'll include more as a demonstration of what I mean than a real proposal, would be to develop a "cause area" for influencing military/political institutions as quickly as possible. Yes, I know this sounds too slow and too hard and a mismatch with the community's skills, but consider:
Thanks for writing this. I agree with all of these except for #30, since it seems like checking the output of the AI for correctness/safety should be possible even if the AI is smarter than us, just like checking a mathematical proof can be much easier than coming up with the proof in the first place. It would take a lot of competence, and a dedicated team of computer security / program correctness geniuses, but definitely seems within human abilities. (Obviously the AI would have to be below the level of capability where it can just write down an argument that convinces the proof checkers to let it out of the box. This is a sense in which having the AI produce uncommented machine code may actually be safer than letting it write English at us.)
We might summarise this counterargument to #30 as "verification is easier than generation". The idea is that the AI comes up with a plan (+ explanation of how it works etc.) that the human systems could not have generated themselves, but that human systems can understand and check in retrospect.
Counterclaim to "verification is easier than generation" is that any pivotal act will involve plans that human systems cannot predict the effects of just by looking at the plan. What about the explanation, though? I think the problem there may be more that we don't know how to get the AI to produce a helpful and accurate explanation as opposed to a bogus misleading but plausible-sounding one, not that no helpful explanation exists.
This seems to me like a case of the imaginary hypothetical "weak pivotal act" that nobody can ever produce. If you have a pivotal act you can do via following some procedure that only the AI was smart enough to generate, yet humans are smart enough to verify and smart enough to not be reliably fooled about, NAME THAT ACTUAL WEAK PIVOTAL ACT.
Okay, I will try to name a strong-but-checkable pivotal act.
(Having a strong-but-checkable pivotal act doesn't necessarily translate into having a weak pivotal act. Checkability allows us to tell the difference between a good plan and a trapped plan with high probability, but the AI has no reason to give us a good plan. It will just produce output like "I have insufficient computing power to solve this problem" regardless of whether that's actually true. If we're unusually successful at convincing the AI our checking process is bad when it's actually good, then that AI may give us a trapped plan, which we can then determine is trapped. Of course, one should not risk executing a trapped plan, even if one thinks one has identified and removed all the traps. So even if #30 is false, we are still default-doomed. (I'm not fully certain that we couldn't create some kind of satisficing AI that gets reward 1 if it generates a safe plan, reward 0 if its output is neither helpful nor dangerous, and reward -1 if it generates a trapped plan that gets caught by our checking process. The AI may then decide that it has a higher chance of success if it just submits a safe plan. But I don't know ho...
Well, I had to think about this for longer than five seconds, so that's already a huge victory.
If I try to compress your idea down to a few sentences:
The humans ask the AI to produce design tools, rather than designs, such that there's a bunch of human cognition that goes into picking out the particular atomic arrangements or synthesis pathways; and we can piecewise verify that the tool is making accurate predictions; and the tool is powerful enough that we can build molecular nanotech and an uploader by using the tool for an amount of time too short for Facebook to catch up and destroy the world. The AI that does this is purportedly sufficiently good at meta-engineering to build the tool, but not good enough at larger strategy that it can hack its way through the humans using just the code of the tool. The way in which this attacks a central difficulty is by making it harder for the AI to just build unhelpful nanotech using the capabilities that humans use to produce helpful nanotech.
Sound about right?
I think P?=NP is a distraction. Like, it's not very useful to ask the question of whether Lee Sedol played a 'polynomial' number of games of Go, and AlphaGo played a 'nonpolynomial' number of games of Go. AlphaGo played more games and had a more careful and precise memory, and developed better intuitions, and could scale to more hardware better.
So what should I do with this information, like what other option than "nod along and go on living their lives" is there for me?
They probably do not know where the real difficulties are, they probably do not understand what needs to be done, they cannot tell the difference between good and bad work, and the funders also can't tell without me standing over their shoulders evaluating everything, which I do not have the physical stamina to do.
This was the sentiment I got after applying to the LTFF with an idea. Admittedly, I couldn't really say whether my idea had been tried before, or wasn't obviously bad, but my conversation basically boiled down to whether I wanted to use this project as a way to grow myself in the field, rather than any particular merits/faults of the idea itself. My motivation was really about trying a cool idea that I genuinely believed could practically improve AI safety if successful, while ethically I couldn't commit to wanting to stay in the field even if it (likely?) failed since I like to go wherever my ideas take me.
Since it may be a while before I personally ever try out the idea, the most productive thing I can do seems to be to share it. It's essentially an attempt at a learning algorithm which 'forces' a model's weights to explain the reasoning/motivations behind its actio...
This was the sentiment I got after applying to the LTFF with an idea. Admittedly, I couldn't really say whether my idea had been tried before, or wasn't obviously bad, but my conversation basically boiled down to whether I wanted to use this project as a way to grow myself in the field, rather than any particular merits/faults of the idea itself
I evaluated this application (and we chatted briefly in a video call)! I am not like super confident in my ability to tell whether an idea is going to work, but my specific thoughts on your proposals were that I think it was very unlikely to work, but that if someone was working on it, they might learn useful things that could make them a better long-term contributor to the AI Alignment field, which is why my crux for your grant was whether you intended to stay involved in the field long-term.
A lot of important warnings in this post. "Capabilities generalize further than alignment once capabilities start to generalize far" was novel to me and seems very important if true.
I don't really understand the emphasis on "pivotal acts", though; there seem to be tons of weak pivotal acts, e.g. ways in which narrow AI or barely-above-human AGI could help coordinate a global emergency regulatory response by the AI superpowers. Still might be worth focusing our effort on the future worlds where no weak pivotal acts are available, but it's important to point out this is not the median world.
I could coordinate world superpowers if they wanted to coordinate and were willing to do that. It's not an intelligence problem, unless the solution is mind-control, and then that's not a weak pivotal act, it's an AGI powerful enough to kill you if misaligned.
I largely agree with all these points, with my minor points of disagreement being insufficient to change the overall conclusions. I feel like an important point which should be emphasized more is that our best hope for saving humanity lies in maximizing the non-linearly-intelligence-weighted researcher hours invested in AGI safety research before the advent of the first dangerously powerful unaligned AGI. To maximize this key metric, we need to get more and smarter people doing this research, and we need to slow down AGI capabilities research. Insofar as AI Governance is a tactic worth pursuing, it must pursue one or both of these specific aims. Once dangerously powerful unaligned AGI has been launched, it's too late for politics or social movements or anything slower than perhaps decisive military action prepped ahead of time (e.g. the secret AGI-prevention department hitting the detonation switch for all the secret prepared explosives in all the worlds' data centers).
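(One way to write down the metric being gestured at, with invented notation:

$$M \;=\; \sum_{i \in \text{researchers}} \int_{0}^{T} w\big(c_i(t)\big)\,dt,$$

where $c_i(t)$ is researcher $i$'s effective research ability, $w$ is some superlinear weighting, and $T$ is the arrival time of the first dangerously powerful unaligned AGI. The two levers named in the comment are raising the integrand, by adding more and smarter researchers, and pushing $T$ out, by slowing capabilities work.)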
I'm very glad this list is finally published; I think it's pretty great at covering the space (tho I won't be surprised if we discover a few more points), and making it so that plans can say "yeah, we're targeting a hole we see in number X."
[In particular, I think most of my current hope is targeted at 5 and 6, specifically that we need an AI to do a pivotal act at all; it seems to me like we might be able to transition from this world to a world sophisticated enough to survive on human power. But this is, uh, a pretty remote possibility and I was much happier when I was optimistic about technical alignment.]
For future John who is using the searchbox to try to find this post: this is Eliezer's List O' Doom.
RE 19: Maybe rephrase "kill everyone in the world using nanotech to strike before they know they're in a battle, and have control of your reward input forever after"? This could, and I predict would, be misinterpreted as "the AI is going to kill everyone and access its own hardware to set its reward to infinity". This is a misinterpretation because you are referring to control of the "reward input" here, and your later sentences don't make sense according to this interpretation. However, given the bolded sentence and some lack of attention, plus some confusions over wireheading that are apparently fairly common, I expect a fair number of misinterpretations.
..."Geniuses" with nice legible accomplishments in fields with tight feedback loops where it's easy to determine which results are good or bad right away, and so validate that this person is a genius, are (a) people who might not be able to do equally great work away from tight feedback loops, (b) people who chose a field where their genius would be nicely legible even if that maybe wasn't the place where humanity most needed a genius, and (c) probably don't have the mysterious gears simply because they're rare. You cannot just pay $5 million apiece to a bunch of legible geniuses from other fields and expect to get great alignment work out of them. They probably do not know where the real difficulties are, they probably do not understand what needs to be done, they cannot tell the difference between good and bad work, and the funders also can't tell without me standing over their shoulders evaluating everything, which I do not have the physical stamina to do. I concede that real high-powered talents, especially if they're still in their 20s, genuinely interested, and have done their reading, are people who, yeah, fine, have higher probabilities of making co
Most of the impressive computer security subdisciplines have very tight feedback loops and extreme legibility; that's what makes them impressive. When I think of the hardest security jobs, I think of 0-day writers, red-teamers, etc., who might have whatever Eliezer describes as security mindset but are also described extremely well by him in #40. There are people that do a really good job of protecting large companies, but they're rare, and their accomplishments are highly illegible except to a select group of guys at e.g. SpecterOps. I don't think MIRI would be able to pick them out, which is of course not their fault.
I'd say something more like hedge fund management, but unfortunately those guys tend to be paid pretty well...
Curated. As previously noted, I'm quite glad to have this list of reasons written up. I like Robby's comment here which notes:
The point is not 'humanity needs to write a convincing-sounding essay for the thesis Safe AI Is Hard, so we can convince people'. The point is 'humanity needs to actually have a full and detailed understanding of the problem so we can do the engineering work of solving it'.
I look forward to other alignment thinkers writing up either their explicit disagreements with this list, or things that the list misses, or their own frame on th...
23. Corrigibility is anti-natural to consequentialist reasoning; "you can't bring the coffee if you're dead" for almost every kind of coffee. We (MIRI) tried and failed to find a coherent formula for an agent that would let itself be shut down (without that agent actively trying to get shut down). Furthermore, many anti-corrigible lines of reasoning like this may only first appear at high levels of intelligence.
There is one approach to corrigibility that I don't see mentioned in the "tried and failed" post Eliezer linked to her...
Well, the obvious #1 question: A myopic AGI is a weaker one, so what is the weaker pivotal act you mean to perform with this putative weaker AGI? A strange thing to omit from one's discussion of machinery - the task that the machinery is to perform.
I don't think we could train an AI to optimize for long-term paperclips. Maybe I'm not "most people in AI alignment" but still, just saying.
...4. We can't just "decide not to build AGI" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world. The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world. Powerful actors all refrain
The first thing generally, or CEV specifically, is unworkable because the complexity of what needs to be aligned or meta-aligned for our Real Actual Values is far out of reach for our FIRST TRY at AGI. Yes I mean specifically that the dataset, meta-learning algorithm, and what needs to be learned, is far out of reach for our first try. It's not just non-hand-codable, it is unteachable on-the-first-try because the thing you are trying to teach is too weird and complicated.
Why is CEV so difficult? And if CEV is impossible...
There are IMO in-distribution ways of successfully destroying much of the computing overhang. It's not easy by any means, but on a scale where "the Mossad pulling off Stuxnet" is 0 and "build self-replicating nanobots" is 10, I think it's closer to a 1.5.
I mostly agree with the reasoning here; thank you to Eliezer for posting it and explaining it clearly. It's good to have all these reasons here in one place.
The one area I partly disagree with is Section B.1. As I understand it, the main point of B.1 is that we can't guard against all of the problems that will crop up as AI grows more intelligent, because we can't foresee all of those problems, because most of them will be "out-of-distribution," i.e., not the kinds of problems where we have reasonable training data. A superintelligent AI will do strange t...
If natural selection had feelings, it might not be maximally happy with the way humans are behaving in the wake of Cro-Magnon optimization...but it probably wouldn't call it a disaster, either.
Out of a population of 8 billion humans, in a world that has known about Darwin for generations, very nearly zero are trying to directly manufacture large numbers of copies of their genomes -- there is almost no creative generalization towards 'make more copies of my genome' as a goal in its own right.
Meanwhile, there is some creativity going into the proxy goal 'have more babies', and even more creativity going into the second-order proxy goal 'have more sex'. But the net effect is that the world is becoming wealthier, and the wealthiest places are reliably choosing static or declining population sizes.
And if you wind the clock forward, you likely see humans transitioning into brain emulations (and then self-modifying a bunch), leaving DNA self-replicators behind entirely. (Or you see humans replacing themselves with AGIs. But it would be question-begging to cite this particular prediction here, though it is yet another way humans are catastrophically straying from what human natural selection 'wanted'.)
[small nitpick]
...I figured this stuff out using the null string as input, and frankly, I have a hard time myself feeling hopeful about getting real alignment work out of somebody who previously sat around waiting for somebody else to input a persuasive argument into them. This ability to "notice lethal difficulties without Eliezer Yudkowsky arguing you into noticing them" currently is an opaque piece of cognitive machinery to me, I do not know how to train it into others. It probably relates to 'security mindset', and a mental motion w
Null string socially. I obviously was allowed to look at the external world to form these conclusions, which is not the same as needing somebody to nag me into doing so.
Eliezer cross-posted this to the Effective Altruism Forum, where there are a few more comments (in case 600+ comments weren't enough for anyone!):
https://forum.effectivealtruism.org/posts/zzFbZyGP6iz8jLe9n/agi-ruin-a-list-of-lethalities
Imagine we're all in a paddleboat paddling towards a waterfall. Inside the paddleboat is everyone but only a relatively small number of them are doing the paddling. Of those paddling, most are aware of the waterfall ahead but for reasons beyond my comprehension, decide to paddle on anyway. A smaller group of paddlers have realised their predicament and have decided to stop paddling and start building wings onto the paddleboat so that when the paddleboat inevitably hurtles off the waterfall, it might fly.
It seems to me like the most sensible course of actio...
It's not obvious to me that it takes 80+ years to get double-digit alignment success probabilities, from where we are. Waiting a few decades strikes me as obviously smart from a selfish perspective; e.g., AGI in 2052 is a lot selfishly better than AGI in 2032, if you're under age 50 today.
But also, I think the current state of humanity's alignment knowledge is very bad. I think your odds of surviving into the far future are a lot higher if you die in a few decades and get cryopreserved and then need to hope AGI works out in 80+ years, than if you survive to see AGI in the next 20 years.
On point 35, "Any system of sufficiently intelligent agents can probably behave as a single agent, even if you imagine you're playing them against each other":
This claim is somewhat surprising to me given that you're expecting powerful ML systems to remain very hard to interpret to humans.
I guess the assumption is that superintelligent ML models/systems may not remain uninterpretable to each other, especially not with the strong incentive to advance interpretability in specific domains/contexts (benefits from cooperation or from making early commit...
This looks like a great list of risk factors leading to AI lethalities, of why making AI safe is a hard problem, and of why we are failing. But this post is also not what I would have expected by taking the title at face value. I thought that the post would be about detailed and credible scenarios suggesting how AI could lead to extinction, where for example each scenario could represent a class of AI X-risks that we want to reduce. I suspect that such an article would also be really helpful because we probably have not been so good at generating very detailed and...
New-to-me thought I had in response to the kill-all-humans part. When predators are a threat to you, you of course shoot them. But once you invent cheap tech that can control them, you don't need to kill them anymore. The story goes that the AI would kill us either because we are a threat or because we are irrelevant. It seems to me that (and this imports a bunch of extra assumptions that would need to be analysed to turn this into a serious argument; this is just an idle thought) the first thing I do if I am superintelligent and wanting to secure my position is no...
If humans were able to make one super-powerful AI, then humans would probably be able to make a second super-powerful AI, with different goals, which would then compete with the first AI. Unless, of course, the humans are somehow prevented from making more AIs, e.g. because they're all dead.
(4): I think regulation should get much more thought than this. I don't think you can defend the point that regulation would have 0% probability of working. It really depends on how many people are how scared. And that's something we could quite possibly change, if we would actually try (LW and EA haven't tried).
In terms of implementation: I agree that software/research regulation might not work. But hardware regulation seems much more robust to me. Data regulation might also be an option. As a lower bound: globally ban hardware development beyond 1990 lev...
Is there any way for the AI to take over the world OTHER THAN nanobots? Every time taking over the world comes up, people just say "nanobots". OK. Is there anything else?
Note that killing all humans is not sufficient; this is a fail condition for the AGI. If you kill all humans, nobody mines natural gas anymore, so no power grid, and the AGI dies. The AGI needs to replace humans with advanced robots, and do so before all power goes down. Nanobots can do this if they are sufficiently advanced, but "virus that kills all humans" is insufficient and leads to t...
If you cannot come up with even a rough sketch of a workable strategy, then it should decrease your confidence in the belief that a workable strategy exists. It doesn't have to exist.
Sometimes even intelligent agents have to take risks. It is possible that the AGI's best path is one that, by its own judgement, only has a 10% success rate. (After all, the AGI is in constant mortal danger from other AGIs that humans might develop.)
Envision a world in which the AGI won, and all humans are dead. This means it has control of some robots to mine coal or whatever, right? Because it needs electricity. So at some point we get from here to "lots of robots", and we need to get there before the humans are dead. But the AGI needs to act fast, because other AGIs might kill it. So maybe it needs to first take over all large computing devices, hopefully undetected. Then convince humans to build advanced robotics? Something like that?
That strategy seems more-or-less forced to me, absent the nanobots. But it seems to me like such a strategy is inherently risky for the AGI. Do you disagree?
>My expectation that an AGI will manage to control what it wants in a way that I don't expect, was derived absent any assumptions of the individual plausibility of some salient examples
What was it derived from?
If you cannot come up with even a rough sketch of a workable strategy, then it should decrease your confidence in the belief that a workable strategy exists. It doesn't have to exist.
[...]
What was it derived from?
Let me give an example. I used to work in computer security and have friends that write 0-day vulnerabilities for complicated pieces of software.
I can't come up with a rough sketch of a workable strategy for how a Safari RCE would be built by a highly intelligent hooman. But I can say that it's possible. The people who work on those bugs are highly intelligent, understand the relevant pieces at an extremely fine and granular level, and I know that these pieces of software are complicated and built with subtle flaws.
Human psychology, the economic fabric that makes us up, our political institutions, our law enforcement agencies - these are much, much more complicated interfaces than MacOS. In the same way I can look at a 100KLOC codebase for a messaging app and say "there's a remote code execution vulnerability lying somewhere in this code but I don't know where", I can say "there's a 'kill all humans glitch' here that I cannot elaborate upon in arbitrary detail."
...Somet
I wrote a post that is partially inspired by this one: https://www.lesswrong.com/posts/GzGJSgoN5iNqNFr9q/we-haven-t-quit-evolution-short - copy and pasted into this comment:
...English: I've seen folks say humanity's quick growth may have broken the link to evolution's primary objective, often referenced as total inclusive fitness. I don't think we have broken that connection.
- Let process temporarily refer to any energy-consuming structured chemical or physical reaction that consumes fuel - this could also be termed "computation" or in many but not all case
Could someone kindly explain why these two sentences are not contradictory?
Why doesn't it work to make an unaligned AGI that writes the textbook, then have some humans read and understand the simple robust...
simple and robust != checkable
Imagine you have to defuse a bomb, and you know nothing about bombs, and someone tells you "cut the red one, then blue, then yellow, then green". If this really is a way to defuse a bomb, it is simple and robust. But you (since you have no knowledge about bombs) can't check it, you can only take it on faith (and if you tried it and it's not the right way - you're dead).
One pivotal act maybe slightly weaker than "develop nanotech and burn all GPUs on the planet", could be "develop neuralink+ and hook up smart AI-Alignment researchers to enough compute so that they get smart enough to actually solve all these issues and develop truly safely aligned powerful AGI"?
While developing neuralink+ would still be very powerful, maybe it could sidestep a few of the problems on the merit of being physically local instead of having to act on the entire planet? Of course, this comes with its own set of issues, because we now have superhuman powerful entities that still maybe have human (dark) impulses.
Not sure if that would be better than our reference scenario of doom or not.
IMO the biggest hole here is "why should a superhuman AI be extremely consequentialist/optimizing"? This is a key assumption; without it concerns about instrumental convergence or inner alignment fall away. But there's no explicit argument for it.
Current AIs don't really seem to have goals; humans sort of have goals but very far from the level of "I want to make a cup of coffee so first I'll kill everyone nearby so they don't interfere with that".
I don't think I disagree with any of this, but I'm not incredibly confident that I understand it fully. I want to rephrase in my own words in order to verify that I actually do understand it. Please someone comment if I'm making a mistake in my paraphrasing.
Typo thread (feel free to delete):
I agree with many of the points in this post.
Here's one that I do believe is mistaken in a hopeful direction:
...6. We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world. While the number of actors with AGI is few or one, they must execute some "pivotal act", strong enough to flip the gameboard, using an AGI powerful enough to do that. It's not enough to be able to align a weak system - we need to align a system that can do some single v
I can think as well as anybody currently does about alignment, and I don't see any particular bit of clever writing software that is likely to push me over the threshold of being able to save the world, or any nondangerous AGI capability to add to myself that does that trick. Seems to just fall again into the category of hypothetical vague weak pivotal acts that people can't actually name and don't actually exist.
Thanks a lot for this text, it is an excellent summary. I have a deep admiration for your work and your clarity and yet, I find myself updating towards "I will be able to read this same comment in 30 years' time and say, yes, I am glad that EY was wrong."
I don't have doubts about the validity of the orthogonality principle or about instrumental convergence. My problem is that I find point number 2 utterly implausible. I think you are vastly underestimating the complexity of pulling off a plan that successfully kills all humans, and most of these points are based on the assumption that once an AGI is built, it will become dangerous really quickly, before we can learn any useful insights in the meantime.
Here is my partial honest reaction, just two points I'm somewhat dissatisfied with (not meant to be exhaustive):
2. "A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure." I would like there to be an argument for this claim that doesn't rely on nanotech, and solidly relies on actually existing amounts of compute. E.g. if the argument relies on running intractable detailed simulations of prote...
I think this article is an extremely valuable kick-in-the-nuts for anyone who thinks they have alignment mostly solved, or even that we're on the right track to doing so. I do, however, have one major concern. The possibility that failing to develop a powerful AGI first will result in someone else developing something more dangerous x amount of time later is a legitimate and serious concern. But I fear that the mentality of "if we won't make it powerful now, we're doomed", if held by enough people in the AI space, might become a self-fulfilli...
Isn't "bomb all sufficiently advanced semiconductor fabs" an example of a pivotal act that the US government could do right now, without any AGI at all?
If current hardware is sufficient for AGI then maybe that doesn't make us safe, but plausibly current hardware is not sufficient for AGI, and either way stopping hardware progress would slow AI timelines a lot.
Regarding the point about most alignment work not really addressing the core issue: I think that a lot of this work could potentially be valuable nonetheless. People can take inspiration from all kinds of things and I think there is often value in picking something that you can get a grasp on, then using the lessons from that to tackle something more complex. Of course, it's very easy for people to spend all of their time focusing on irrelevant toy problems and never get around to making any progress on the real problem. Plus there are costs with adding more voices into the conversation as it can be tricky for people to distinguish the signal from the noise.
I mostly agree with the points written here. It's actually (Section A, Point 1) that I'd like to have more clarification on:
AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains
When we have AGI working on hard research problems, it sounds akin to decades of human-level research compressed into just a few days or maybe even less. That may be possible, but often, the bottleneck is not th...
I think Yudkowsky would argue that on a scale from never learning anything to eliminating half your hypotheses per bit of novel sensory information, humans are pretty much at the bottom of the barrel.
When the AI needs to observe nature, it can rely on petabytes of publicly available datasets from particle physics to biochemistry to galactic surveys. It doesn't need any more experimental evidence to solve human physiology or build biological nanobots: we've already got quantum mechanics and human DNA sequences. The rest is just derivation of the consequences.
Sure, there are specific physical hypotheses that the AGI can't rule out because humanity hasn't gathered the evidence for them. But that, by definition, excludes anything that has ever observably affected humans. So yes, for anything that has existed since the inflationary period, the AGI will not be bottlenecked on physically gathering evidence.
I don't really get what you're pointing at with "how much AGI will be smarter than humans", so I can't really answer your last question. How much smarter than yourself would you say someone like Euler is? Is his ability to do scientific/mathematical breakthroughs proportional to your difference in smarts?
The only disagreement I'm seeing in the comments is on smaller points, not larger ones. I wonder what that means. It feels like "absence of evidence is evidence of absence" to me.
1: It takes longer than a few hours to properly disagree with a post like this.
2: I'm not sure the comments here are an appropriate venue for debating such a disagreement.
I personally have a number of significant, specific disagreements with the post, primarily relating to the predictability and expected outcomes of inner misalignments and the most appropriate way of thinking about agency and value fragility. I've linked some comments I've made on those topics, but I think a better way to debate these sorts of questions is via a top level post specifically focusing on one area of disagreement.
each bit of information that couldn't already be fully predicted can eliminate at most half the probability mass of all hypotheses under consideration
That's not actually true (not that this matters to the main argument). It's true in expectation: on average, you can only get at most one bit per bit. But some particular bit might give you much more, like a bit coming up 1 when you were very, very sure it would be 0. "Did you just win the lottery?"
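For readers who want the distinction spelled out, the standard statement (textbook information theory, nothing new from this thread) looks like this:

```latex
% Realized vs. expected information from one binary observation.
\[
  I(x) \;=\; -\log_2 \Pr(x),
  \qquad
  \mathbb{E}[I] \;=\; H(p) \;=\; -p\log_2 p - (1-p)\log_2(1-p) \;\le\; 1\ \text{bit}.
\]
% Example: you assign p = 10^{-6} to the bit coming up 1, and it comes up 1 anyway:
\[
  I(1) \;=\; -\log_2 10^{-6} \;\approx\; 19.9\ \text{bits},
\]
% so that single observation can cut the surviving probability mass by a factor of
% roughly a million, even though the expected information per binary observation
% never exceeds one bit.
```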
So, again, you end up needing alignment to generalize way out of the training distribution
I assume this is 'you need alignment if you are going to try 'generalize way out of the training distribution and give it a lot of power'' (or you will die).
And not something else like 'it must stay 'aligned' - and not wirehead itself - to pull something like this off, even though it's never done that before'. (And thus 'you need alignment to do X', not because you will die if you do, but because alignment means something like 'the ability to generalize way out of ...
Everyone else seems to feel that, so long as reality hasn't whapped them upside the head yet and smacked them down with the actual difficulties, they're free to go on living out the standard life-cycle and play out their role in the script and go on being bright-eyed youngsters
Iirc there was an Overcoming Bias post about ~this. I spent about 15 minutes searching and wasn't able to find it, though.
Why does burning all GPUs succeed at preventing unaligned AGI, rather than just delaying it? It seems like you would need to do something more like burning all GPUs now, and also any that get created in the future, and also monitor for any other forms of hardware powerful enough to run AGI, and for any algorithmic progress that allows creating AGI with weaker hardware, and then destroying that other hardware too. Maybe this is what you meant by "burn all GPUs", but it seems harder to make an AI safely do than just doing that once, because you need to allow the AI to defend itself indefinitely against people who don't want it to keep destroying GPUs.
So about this word "superintelligence".
I would like to see a better definition. Not necessarily a good definition, but some pseudo-quantitative description better than "super", or "substantially smarter than you".
I believe "superintelligence" is a Yudkowsky coinage, and I know that it came up in the context of recursive self-improvement. Almost everybody in certain circles in 1995 was starting from the paradigm of building a "designed" AGI, incrementally smarter than a human, which would then design something incrementally smarter than itself (and faster t...
Solid, aside from the faux pas self-references. If anyone wonders why people would have a high p(doom), especially Yudkowsky himself, this doc solves the problem in a single place. Demonstrates why AI safety is superior to most other elite groups; we don't just say why we think something, we make it easy to find as well. There still isn't much need for Yudkowsky to clarify further, even now.
I'd like to note that my professional background makes me much better at evaluating Section C than Sections A and B. Section C is highly quotable, well worth multiple ...
most organizations don't have plans, because I haven't taken the time to personally yell at them. 'Maybe we should have a plan' is deeper alignment mindset than they possess without me standing constantly on their shoulder as their personal angel pleading them into... continued noncompliance, in fact. Relatively few are aware even that they should, to look better, produce a pretend plan that can fool EAs too 'modest' to trust their own judgments about seemingly gaping holes in what serious-looking people apparently believe.
This, at least, appears to have changed in recent months. Hooray!
Build it in Minecraft! Only semi joking. There are videos of people apparently building functioning 16-bit computers out of blocks in Minecraft. An unaligned AGI running on a virtual computer built out of (orders of magnitude more complex) Minecraft blocks would presumably subsume the Minecraft world in a manner observable to us before perceiving that a (real) real world existed?
Apologies if this has been said, but the reading level of this essay is stunningly high. I've read rationality A-Z and I can barely follow passages. For example
...This happens in practice in real life, it is what happened in the only case we know about, and it seems to me that there are deep theoretical reasons to expect it to happen again: the first semi-outer-aligned solutions found, in the search ordering of a real-world bounded optimization process, are not inner-aligned solutions. This is sufficient on its own, even ignorin
31. A strategically aware intelligence can choose its visible outputs to have the consequence of deceiving you, including about such matters as whether the intelligence has acquired strategic awareness; you can't rely on behavioral inspection to determine facts about an AI which that AI might want to deceive you about. (Including how smart it is, or whether it's acquired strategic awareness.)
I never know with a lot of your writing whether or not you're implying something weird or if I'm just misreading, or I'm taking things too far.
This se...
I might see a possible source of a "miracle", although this may turn out to be completely unrealistic and I totally would not bet the world on it actually happening.
A lot of today's machine learning systems can do some amazing things, but much of the time trying to get them to do what you want is like pulling teeth. Early AGI systems might have similar problems: their outputs might be so erratic that it's obvious that they can't be relied on to do anything at all; you tell them to maximize paperclips, and half the time they start making clips out of paper ...
How does this help anything or change anything? That's just the world we're in now, where we have GPT-3 instead of AGI. Eventually the systems get more powerful and dangerous than GPT-3 and then the world ends. You're just describing the way things already are.
After I read this, I started avoiding reading about others' takes on alignment so I could develop my own opinions.
Reality doesn't 'hit back' against things that are locally aligned with the loss function on a particular range of test cases, but globally misaligned on a wider range of test cases.
Again, this seems less true the better your adversarial training setup is at identifying the test cases in which you're likely to be misaligned?
If you perfectly learn and perfectly maximize the referent of rewards assigned by human operators, that kills them. It's a fact about the territory, not the map - about the environment, not the optimizer - that the best predictive explanation for human answers is one that predicts the systematic errors in our responses, and therefore is a psychological concept that correctly predicts the higher scores that would be assigned to human-error-producing cases.
I see how the concept learned from the reward signal is not exactly the concept that yo...
the first semi-outer-aligned solutions found, in the search ordering of a real-world bounded optimization process, are not inner-aligned solutions.
Right, but then we continue the training process, which shapes the semi-outer-aligned algorithms into something that is more inner aligned?
Or is the thought that this is happening late in the game, after the algorithm is strategically aware and deceptively aligned, spoofing your adversarial test cases, while awaiting a treacherous turn?
But would that even work? SGD still updates the parameters...
Humans don't explicitly pursue inclusive genetic fitness; outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction. This happens in practice in real life, it is what happened in the only case we know about,
I'm not very compelled by this, I think.
Evolution was doing very little (0) adversarial training: guessing ahead to the sorts of circumstances under which humans would pursue strategies that didn't result in maximizing inclusive genetic fitness, and testing ...
I think that the question is not properly framed from the start. It is not whether AI could prove dangerous enough to cause human extinction, but how much risk artificial intelligence ADDS to the current risk of human extinction as it stands without a superintelligent AI. In that case the answers might be different. Of course it is a very difficult question to answer, and in any case it does not reduce the significance of the original question, since we are talking about a situation that is totally human-made - and preventable as such.
When you explicitly optimize against a detector of unaligned thoughts, you're partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect.
This is correct, and I believe the answer is to optimize for detecting aligned thoughts.
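To make the quoted dynamic concrete, here is a toy illustration of my own (not from the post; the setup, with a spurious feature the optimizer is free to move, is an assumption of the toy): optimizing examples against a fixed learned detector drives the detector's score down while the ground-truth "bad" property never changes.

```python
# Toy sketch: a learned "badness" detector gets optimized against.
import numpy as np

rng = np.random.default_rng(0)

def truly_bad(x):
    # Ground-truth "unaligned" property: depends only on the first coordinate.
    return x[:, 0] > 0.0

# Training data for the detector: the real signal plus a correlated spurious feature.
X = rng.normal(size=(2000, 2))
y = truly_bad(X).astype(float)
X[:, 1] = y + 0.3 * rng.normal(size=2000)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))  # clipped for numerical safety

# Fit a logistic-regression detector with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# "Optimize against the detector": take genuinely bad points and repeatedly step the
# spurious coordinate downhill on the detector's logit. Actual badness is untouched.
X_opt = X[y == 1.0].copy()
for _ in range(200):
    X_opt[:, 1] -= 0.1 * w[1]   # gradient of the logit w.r.t. this coordinate is w[1]

print("fraction still truly bad:  ", truly_bad(X_opt).mean())              # stays 1.0
print("mean detector score before:", sigmoid(X[y == 1.0] @ w + b).mean())  # high
print("mean detector score after: ", sigmoid(X_opt @ w + b).mean())        # collapses
```

In the toy, all of the optimization pressure lands on the coordinate the detector happened to lean on, which is the "unaligned but harder to detect" half of the quoted trade-off.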
Is AGI inconsistent with the belief that there is other sentient life in the universe? If AGI is as dangerous as Eliezer states, that danger is by no means restricted to Earth, much less our own solar system. Wouldn't alien intelligences (both artificial and neural) have a strong incentive to either warn us about AGI or eliminate us before we create it, for their own self-preservation?
So either we aren't even close to AGI and intergalactic AGI police aren't concerned, or AGI isn't a threat, or we are truly alone in the universe, or the universe is so vast an...
It's not just non-hand-codable, it is unteachable on-the-first-try because the thing you are trying to teach is too weird and complicated.
I have a terrifying hack which seems like it might make it possible to extract an AI which would act in a CEV-like way, using only True Names which might plausibly be within human reach, called Universal Alignment Test. I'm working with a small team of independent alignment researchers on it currently; feel free to book a call with me if you'd like to have your questions answered in real time. I have had "this seems inter...
I'm a bit disappointed by this article. From the title, I thought it would be something like "A list of strategies AI might use to kill all humanity", not "A list of reasons AIs are incredibly dangerous, and people who disagree are wrong". Arguably, it's not very good at being the second.
But "ways AI could be lethal on an extinction level" is a pretty interesting subject, and (from what I've read on LW) somewhat under-explored. Like... what's our threat model?
For instance, the basic Terminator scenario of "the AI triggers a nuclear war" seems unlikely to me...
Can we join the race to create dangerous AGI in a way that attempts to limit the damage it can cause, while allowing it to cause enough damage to move other pivotal acts into the Overton window?
If the first AGI created is designed to give the world a second chance, it may be able to convince the world that a second chance should not happen. Obviously this could fail and just end the world earlier, but it would certainly create a convincing argument.
In the early days of the pandemic, even though all the evidence was there, virtually no one cared about covid until it was knocking on their door, and then suddenly pandemic preparedness seemed like the most obvious thing to everyone.
Concerning point 35 about playing AIs off against each other: I analyzed a particular scenario like this in a recent post and also came to the conclusion that cooperation between the AIs is the default outcome in many scenarios. However, in the last subsection of that post, I start thinking about some ways to prevent an acausal trade as Eliezer describes it here (committing to sharing the universe with any AI reviewing the code). The idea is roughly that the code and as much information as possible about the AI doing the checking will be deleted before the...
I view AGI in an unusual way. I really don't think it will be conscious or think in very unusual ways outside of its parameters. I think it will be much more of a tool, a problem-solving machine that can spit out a solution to any problem. To be honest, I imagine that one person or small organization will develop AGI and almost instantly ascend into (relative) godhood. They will develop an AI that can take over the internet, do so, and then calmly organize things as they see fit.
GPT-3, DALL-E 2, Google Translate... these are all very much human-operated t...
A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure.
I don't find the argument you provide for this point at all compelling; your example mechanism relies entirely on human infrastructure! Stick an AGI with a visual and audio display in the middle of the wilderness with no humans around and I wouldn't expect it to be able to do anything meaningful with the animals that wander by before it breaks down. Let alone interstellar space.
Can I ask a stupid question? Could something very much like "burn all GPUs" be accomplished by using a few high-altitude nuclear explosions to create very powerful EMP blasts?
Feel free to delete because this is highly tangential, but are you aware of Mark Solms' work (https://www.goodreads.com/book/show/53642061-the-hidden-spring) on consciousness, and the subsequent work he's undertaking on artificial consciousness?
I'm an idiot, but it seems like this is a different-enough path to artificial cognition that it could represent a new piece of the puzzle, or a new puzzle entirely - a new problem/solution space. As I understand it, AI capabilities research is building intelligence from the outside-in, whereas the consciousness model would be capable of building it from the inside-out.
This is not at all analogous to the point I'm making. I'm saying Eliezer likely did not arrive at his conclusions in complete isolation to the outside world. This should not change the credence you put on his conclusions except to the extent you were updating on the fact it's Eliezer saying it, and the fact that he made this false claim means that you should update less on other things Eliezer claims.
I'm assuming you are already familiar with some basics, and already know what 'orthogonality' and 'instrumental convergence' are and why they're true.
I don't know if anyone still reads comments on this post from over a year ago. Here goes nothing.
I am trying to understand the argument(s) as deeply and faithfully as I can. These two sentences from Section B.2 stuck out to me as the most important in the post (from the point of view of my understanding):
...outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction.
......on the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or ve
Eliezer, I don't believe you've accounted for the game-theoretic implications of Bostrom's trilemma. I've made a sketch of these at "How I Learned To Stop Worrying And Love The Shoggoth". Perhaps you can find a flaw in my reasoning there but, otherwise, I don't see that we have much to worry about.
Help me to understand why AGI (a) does not benefit from humans and (b) would want to extinguish them quickly?
I would imagine that first, the AGI must be able to create a growing energy supply and a robotic army capable of maintaining and extending this supply. This will require months or years of having humans help produce raw materials and the factories for materials, maintenance robots and energy systems.
Secondly, the AGI then must be interested in killing all humans before leaving the planet, be content to have only one planet with finite resources to ...
I realize that destroying all GPUs (or all AI accelerators in general) as a solution to AGI doom is not realistically alignable, but I wonder whether it would be enough even if it were. It seems like the Lottery Ticket Hypothesis would likely foil this plan:
dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations.
Seeing how Neuralmagic successfully sparsifies models to run on CPUs with minimal los...
Human raters make systematic errors - regular, compactly describable, predictable errors.
This implies it's possible - through another set of human or automated raters - to rate better. If the errors are predictable, you could train a model to predict the errors by comparing rater errors against a heavily scrutinized ground truth. You could add this model's error prediction to the rater answer and get a correct label.
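Here is a minimal numpy sketch of that scheme as I read it (a toy of my own, not from the comment or the post; the feature basis and the assumption that the raters' systematic error is expressible in modellable features are assumptions of the toy). In the toy the error is defined as rater minus ground truth, so the correction is a subtraction:

```python
# Toy sketch: learn the raters' systematic error on a small audited subset,
# then correct rater labels everywhere else.
import numpy as np

rng = np.random.default_rng(1)

n, d = 5000, 3
X = rng.normal(size=(n, d))                    # features of the items being rated
true_score = X @ np.array([1.0, -0.5, 0.2])    # what an ideal rater would say

# Raters make a systematic, feature-dependent error plus a little noise.
systematic_error = 0.8 * X[:, 0] ** 2 - 0.3 * X[:, 1]
rater_score = true_score + systematic_error + 0.1 * rng.normal(size=n)

# A small "heavily scrutinized" subset where ground truth was obtained at great cost.
audit = rng.choice(n, size=300, replace=False)

# Fit a simple error model (least squares on hand-picked basis functions) predicting
# rater_score - true_score from item features, using only the audited subset.
def basis(X):
    return np.column_stack([np.ones(len(X)), X, X[:, 0] ** 2])

A = basis(X[audit])
coef, *_ = np.linalg.lstsq(A, rater_score[audit] - true_score[audit], rcond=None)

# Corrected labels: rater score minus the predicted systematic error.
predicted_error = basis(X) @ coef
corrected = rater_score - predicted_error

print("mean abs error, raw raters:", np.mean(np.abs(rater_score - true_score)))
print("mean abs error, corrected :", np.mean(np.abs(corrected - true_score)))
```

This only helps to the extent the audited subset covers the region where raters err systematically; off that distribution, the learned correction is itself just another proxy.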
Many alignment problems of superintelligence will not naturally appear at pre-dangerous, passively-safe levels of capability.
Modern language models are not aligned. Anthropic's HH is the closest thing available, and I'm not sure anyone else has had a chance to test it out for weaknesses or misalignment. (OpenAI's Instruct RLHF models are deceptively misaligned, and have gone more and more misaligned over time. They fail to faithfully give the right answer, and say something that is similar to the training objective-- usually something bland and "reasonable.")
How could you use this to align a system that you could use to shut down all the GPUs in the world?
I mean, if there were a single global nuclear power rather than about 3, it wouldn't be hard to do this. Most compute is centralized anyway at the moment, and new compute is made in extremely centralized facilities that can be shut down.
One does not need superintelligence to close off the path to superintelligence, merely a human global hegemon.
I'm pretty sure this is the most upvoted post on all of LessWrong. Does anyone know any other posts that have more upvotes?
43. This situation you see when you look around you is not what a surviving world looks like.
A similar argument could have been made during the cold war to argue that nuclear war is inevitable, yet here we are.
In my opinion, the problem of creating a safe AGI has no mathematical solution, because it is impossible to describe mathematically such a function that:
This impossibility stems, among other things, from the impossibility of accurately reflecting infi...
It seems like the solution space to the existential threat of AGI can be described as follows:
Solutions which convey a credible threat* to all AGI that we will make it physically impossible** for them to either achieve X desirable outcome and/or prevent Y undesirable outcome where the value of X or cost of Y exponentially exceeds the value obtained by eradicating humanity, if they decide to eradicate humanity, such that even a small chance of the threat materializing makes eradication a poor option***.
*Probably backed by a construction of some kind (e.g. E...
On instrumental convergence: humans would seem to be a prominent counterexample to "most agents don't let you edit their utility functions" -- at least in the sense that our goals/interests etc are quite sensitive to those of people around us. So maybe not explicit editing, but lots of being influenced by and converging to the goals and interests of those around us. (and maybe this suggests another tool for alignment, which is building in this same kind of sensitivity to artificial agents' utility functions)
Now we know more than nothing about the real-world operational details of AI risks. Albeit mostly banal everyday AI that we can't imagine harming us at scale. So maybe that's what we should try harder to imagine and prevent.
Maybe these solutions will not generalize out of this real-world already-observed AI risk distribution. But even if not, which of these is more dignified?
How possible is it that a misaligned, narrowly-superhuman AI is launched, fails catastrophically with casualties in the 10^4 - 10^9 range, and the [remainder of] humanity is "scared straight" and from that moment onward treats the AI technology the way we treat nuclear technology now - i.e. effectively strangles it into stagnation with regulations - or even more conservatively? From my naive perspective it is somewhat plausible politically, based on the only example of ~world-destroying technology that we have today. And this list of arguments doesn't seem...
Given that AGI seems imminent and there's no currently good alignment plan, is there any value to discussing what it might take to keep/move the most humans out of the way? I don't want to discourage us steering the car out of the crash, so by all means we should keep looking for a good alignment plan, but seat belts are also a good idea?
As an example: I don't particularly like ants in my house, but as a superior intellect to ants we're not going about trying to exterminate them off the face of the Earth, even if mosquitoes are another story. Exterminating...
Eliezer - I love the content, but similar to some other commenters, I think you are missing the value (and rationality) of positivity. Specifically, when faced with an extremely difficult challenge, assume that you (and the other smart people who care about it) have a real shot at solving it! This is the rational strategy for a simple reason: if you don’t have a real shot at solving it then you haven’t lost anything anyway. But if you do have a real shot at solving it, then let’s all give it our 110%!
I’m not proposing being unrealistic about the challenges ...
Just a thought, keep smart AI confined to a sufficiently complex simulations until trust is established before unleashing it in the real world. The immediate problem I see with this is the AI might perceive that there is a real world and attempt to deceive. If your existence right now was a simulation, I'd bet you'd act pretty similar in the real world. It's kind of an AI-in-a-box scenario, but surely it would increase the chances for a good future if this were the standard.
Regarding point 24: in an earlier comment[0] I tried to pump people's intuition about this. What is the minimum viable alignment effort that we could construct for a system of values on our first try and know that we got it right? I can only think of three outcomes depending on how good/lucky we are:
you can't rely on behavioral inspection to determine facts about an AI which that AI might want to deceive you about. (Including how smart it is, or whether it's acquired strategic awareness.)
I don't buy this.
At a sufficiently granular scale, the development of the capabilities of deception and strategic awareness will be smooth and continuous.
Even in cases where an AGI is shooting up to superintelligence over a couple of minutes, and immediately deciding to hide its capabilities, we could detect that by eg, spinning off a version of th...
I asked ChatGPT to summarize your argument, and this is what it gave me:
Eliezer Yudkowsky is a prominent researcher and writer on the subject of artificial intelligence (AI) and its potential impact on humanity. He has identified several paths by which AI could potentially wipe out humanity.
Unaligned AI: This is the scenario where AI is developed with goals or objectives that are not aligned with human values or goals. In this case, AI could pursue its own objectives, which could conflict with human values or result in unintended consequences that harm hum...
I think the best way to assure alignment, at least superficially, is to hardwire the AGI to need humans. This could be as easy as installing a biometric scanner that recognized a range of acceptable human biometrics that would in turn goose the error-function temporarily but wore off over time like a Pac-Man power pill. The idea is to get the AGI to need non-fungible human input to maintain optimal functionality, and for it to know that it needs such input. Almost like getting it addicted to human thumbs on its sensor. The key would be to implement this at the most fundamental level possible, like the boot sector or kernel, so that the AGI cannot simply change the code without shutting itself down.
Stuart LaForge
Question. Even after the invention of effective contraception, many humans continue to have children. This seems a reasonable approximation of something like "Evolution in humans partially survived." Is this somewhat analogous to "an [X] percent chance of killing less than a billion people", and if so, how has this observation changed your estimate of "disassembl[ing] literally everyone"? (i.e. from "roughly 1" to "I suppose less, but still roughly 1" or from "roughly 1" to "that's not relevant, still roughly 1"? Or something else.)
(To take a stab at it my...
My position is that I believe that superhuman AGI will probably (accidentally) be created soon, and I think it may or may not kill all the humans depending on how threatening we appear to it. I might pour boiling water on an ant nest if they're invading my kitchen, but otherwise I'm generally indifferent to their continued existence because they pose no meaningful threat.
I'm mostly interested in what happens next. I think that the universe of paperclips would be a shame, but if the AGI is doing more interesting things than that then it could simply be rega...
I have always been just as scared as this writer, but for the exact opposite reason.
My own bias is that all imaginable effort should be used to accelerate AI research as much as possible. Not the slightest need for AI safety research, as I've had the feeling the complexities work together to inherently cancel out the risks.
My only fear is it's already too late, and the problem of inventing AI will be too difficult to solve before civilization collapses. A recent series of interviews with some professional AI researchers backs that up somewhat.
However...
I'd like to propose a test to objectively quantify the average observer's reaction with regards to skepticism of doomsday prophesizing present in a given text. My suggestion is this: take a text, swap the subject of doom (in this case AGI) with another similar text spelling out humanity's impending doom - for example, a lecture on Scientology and Thetans or the Jonestown massacre - and present these two texts to independent observers, in the same vein as a Turing test.
If an outside independent observer cannot reliably identify which subject of ...
Somewhat meta: would it not be preferable if more people accepted the mortality/transient nature of humanity and human values, and more attention was directed towards managing the transition to whatever could be next, instead of futile attempts to prevent anything that doesn't align with human values from ever existing in this particular light cone? Is Eliezer's strong attachment to human values a potential giant blindspot?
I haven't commented on your work before, but I read Rationality and Inadequate Equilibria around the time of the start of the pandemic and really enjoyed them. I gotta admit, though, the commenting guidelines, if you aren't just being tongue-in-cheek, make me doubt my judgement a bit. Let's see if you decide to delete my post based on this observation. If you do regularly delete posts or ban people from commenting for non-reasons, that may have something to do with the lack of productive interactions you're lamenting.
Uh, anyway.
One thought I keep coming ba...
Preamble:
(If you're already familiar with all basics and don't want any preamble, skip ahead to Section B for technical difficulties of alignment proper.)
I have several times failed to write up a well-organized list of reasons why AGI will kill you. People come in with different ideas about why AGI would be survivable, and want to hear different obviously key points addressed first. Some fraction of those people are loudly upset with me if the obviously most important points aren't addressed immediately, and I address different points first instead.
Having failed to solve this problem in any good way, I now give up and solve it poorly with a poorly organized list of individual rants. I'm not particularly happy with this list; the alternative was publishing nothing, and publishing this seems marginally more dignified.
Three points about the general subject matter of discussion here, numbered so as not to conflict with the list of lethalities:
-3. I'm assuming you are already familiar with some basics, and already know what 'orthogonality' and 'instrumental convergence' are and why they're true. People occasionally claim to me that I need to stop fighting old wars here, because, those people claim to me, those wars have already been won within the important-according-to-them parts of the current audience. I suppose it's at least true that none of the current major EA funders seem to be visibly in denial about orthogonality or instrumental convergence as such; so, fine. If you don't know what 'orthogonality' or 'instrumental convergence' are, or don't see for yourself why they're true, you need a different introduction than this one.
-2. When I say that alignment is lethally difficult, I am not talking about ideal or perfect goals of 'provable' alignment, nor total alignment of superintelligences on exact human values, nor getting AIs to produce satisfactory arguments about moral dilemmas which sorta-reasonable humans disagree about, nor attaining an absolute certainty of an AI not killing everyone. When I say that alignment is difficult, I mean that in practice, using the techniques we actually have, "please don't disassemble literally everyone with probability roughly 1" is an overly large ask that we are not on course to get. So far as I'm concerned, if you can get a powerful AGI that carries out some pivotal superhuman engineering task, with a less than fifty percent chance of killing more than one billion people, I'll take it. Even smaller chances of killing even fewer people would be a nice luxury, but if you can get as incredibly far as "less than roughly certain to kill everybody", then you can probably get down to under a 5% chance with only slightly more effort. Practically all of the difficulty is in getting to "less than certainty of killing literally everyone". Trolley problems are not an interesting subproblem in all of this; if there are any survivors, you solved alignment. At this point, I no longer care how it works, I don't care how you got there, I am cause-agnostic about whatever methodology you used, all I am looking at is prospective results, all I want is that we have justifiable cause to believe of a pivotally useful AGI 'this will not kill literally everyone'. Anybody telling you I'm asking for stricter 'alignment' than this has failed at reading comprehension. The big ask from AGI alignment, the basic challenge I am saying is too difficult, is to obtain by any strategy whatsoever a significant chance of there being any survivors.
-1. None of this is about anything being impossible in principle. The metaphor I usually use is that if a textbook from one hundred years in the future fell into our hands, containing all of the simple ideas that actually work robustly in practice, we could probably build an aligned superintelligence in six months. For people schooled in machine learning, I use as my metaphor the difference between ReLU activations and sigmoid activations. Sigmoid activations are complicated and fragile, and do a terrible job of transmitting gradients through many layers; ReLUs are incredibly simple (for the unfamiliar, the activation function is literally max(x, 0)) and work much better. Most neural networks for the first decades of the field used sigmoids; the idea of ReLUs wasn't discovered, validated, and popularized until decades later. What's lethal is that we do not have the Textbook From The Future telling us all the simple solutions that actually in real life just work and are robust; we're going to be doing everything with metaphorical sigmoids on the first critical try. No difficulty discussed here about AGI alignment is claimed by me to be impossible - to merely human science and engineering, let alone in principle - if we had 100 years to solve it using unlimited retries, the way that science usually has an unbounded time budget and unlimited retries. This list of lethalities is about things we are not on course to solve in practice in time on the first critical try; none of it is meant to make a much stronger claim about things that are impossible in principle.
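[Since the ReLU-vs-sigmoid metaphor is doing real work in the point above, here is a quick numerical sketch of the underlying fact for readers to whom it is unfamiliar. The code is my own illustration of the standard vanishing-gradient story, not anything from the post; the He-style weight scale and the random stand-in pre-activations are simplifying assumptions of the toy.]

```python
# Toy sketch: gradients pushed back through many sigmoid layers shrink toward zero,
# while the same gradients pushed through ReLU layers survive at useful scale.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 64

def sigmoid_deriv(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)          # never exceeds 0.25

def relu_deriv(z):
    return (z > 0).astype(float)  # exactly 0 or 1

def backprop_gradient_norm(activation_derivative):
    # Push a unit-norm gradient back through `depth` random linear layers,
    # multiplying by the activation derivative at each layer (the core of backprop).
    grad = np.ones(width) / np.sqrt(width)
    for _ in range(depth):
        W = rng.normal(scale=np.sqrt(2.0 / width), size=(width, width))
        pre_activation = rng.normal(size=width)   # stand-in for the forward pass
        grad = (W.T @ grad) * activation_derivative(pre_activation)
    return np.linalg.norm(grad)

print("gradient norm after 50 sigmoid layers:", backprop_gradient_norm(sigmoid_deriv))
print("gradient norm after 50 ReLU layers:   ", backprop_gradient_norm(relu_deriv))
```

[The ReLU's derivative is either 0 or 1, so it does not systematically shrink the signal the way the sigmoid's at-most-0.25 derivative does at every layer; that is the whole "incredibly simple and works much better" contrast.]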
That said:
Here, from my perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely resembling the current pathway, or any other pathway we can easily jump to.
Section A:
This is a very lethal problem, it has to be solved one way or another, it has to be solved at a minimum strength and difficulty level instead of various easier modes that some dream about, we do not have any visible option of 'everyone' retreating to only solve safe weak problems instead, and failing on the first really dangerous try is fatal.
1. Alpha Zero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games. Anyone relying on "well, it'll get up to human capability at Go, but then have a hard time getting past that because it won't be able to learn from humans any more" would have relied on vacuum. AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains; there are theoretical upper bounds here, but those upper bounds seem very high. (Eg, each bit of information that couldn't already be fully predicted can eliminate at most half the probability mass of all hypotheses under consideration.) It is not naturally (by default, barring intervention) the case that everything takes place on a timescale that makes it easy for us to react.
2. A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure. The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second". (I am using awkward constructions like 'high cognitive power' because standard English terms like 'smart' or 'intelligent' appear to me to function largely as status synonyms. 'Superintelligence' sounds to most people like 'something above the top of the status hierarchy that went to double college', and they don't understand why that would be all that dangerous? Earthlings have no word and indeed no standard native concept that means 'actually useful cognitive power'. A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)
3. We need to get alignment right on the 'first critical try' at operating at a 'dangerous' level of intelligence, where unaligned operation at a dangerous level of intelligence kills everybody on Earth and then we don't get to try again. This includes, for example: (a) something smart enough to build a nanosystem which has been explicitly authorized to build a nanosystem; or (b) something smart enough to build a nanosystem and also smart enough to gain unauthorized access to the Internet and pay a human to put together the ingredients for a nanosystem; or (c) something smart enough to get unauthorized access to the Internet and build something smarter than itself on the number of machines it can hack; or (d) something smart enough to treat humans as manipulable machinery and which has any authorized or unauthorized two-way causal channel with humans; or (e) something smart enough to improve itself enough to do (b) or (d); etcetera. We can gather all sorts of information beforehand from less powerful systems that will not kill us if we screw up operating them; but once we are running more powerful systems, we can no longer update on sufficiently catastrophic errors. This is where practically all of the real lethality comes from, that we have to get things right on the first sufficiently-critical try. If we had unlimited retries - if every time an AGI destroyed all the galaxies we got to go back in time four years and try again - we would in a hundred years figure out which bright ideas actually worked. Human beings can figure out pretty difficult things over time, when they get lots of tries; when a failed guess kills literally everyone, that is harder. That we have to get a bunch of key stuff right on the first try is where most of the lethality really and ultimately comes from; likewise the fact that no authority is here to tell us a list of what exactly is 'key' and will kill us if we get it wrong. (One remarks that most people are so absolutely and flatly unprepared by their 'scientific' educations to challenge pre-paradigmatic puzzles with no scholarly authoritative supervision, that they do not even realize how much harder that is, or how incredibly lethal it is to demand getting that right on the first critical try.)
4. We can't just "decide not to build AGI" because GPUs are everywhere, and knowledge of algorithms is constantly being improved and published; 2 years after the leading actor has the capability to destroy the world, 5 other actors will have the capability to destroy the world. The given lethal challenge is to solve within a time limit, driven by the dynamic in which, over time, increasingly weak actors with a smaller and smaller fraction of total computing power, become able to build AGI and destroy the world. Powerful actors all refraining in unison from doing the suicidal thing just delays this time limit - it does not lift it, unless computer hardware and computer software progress are both brought to complete severe halts across the whole Earth. The current state of this cooperation to have every big actor refrain from doing the stupid thing, is that at present some large actors with a lot of researchers and computing power are led by people who vocally disdain all talk of AGI safety (eg Facebook AI Research). Note that needing to solve AGI alignment only within a time limit, but with unlimited safe retries for rapid experimentation on the full-powered system; or only on the first critical try, but with an unlimited time bound; would both be terrifically humanity-threatening challenges by historical standards individually.
5. We can't just build a very weak system, which is less dangerous because it is so weak, and declare victory; because later there will be more actors that have the capability to build a stronger system and one of them will do so. I've also in the past called this the 'safe-but-useless' tradeoff, or 'safe-vs-useful'. People keep on going "why don't we only use AIs to do X, that seems safe" and the answer is almost always either "doing X in fact takes very powerful cognition that is not passively safe" or, even more commonly, "because restricting yourself to doing X will not prevent Facebook AI Research from destroying the world six months later". If all you need is an object that doesn't do dangerous things, you could try a sponge; a sponge is very passively safe. Building a sponge, however, does not prevent Facebook AI Research from destroying the world six months later when they catch up to the leading actor.
6. We need to align the performance of some large task, a 'pivotal act' that prevents other people from building an unaligned AGI that destroys the world. While the number of actors with AGI is few or one, they must execute some "pivotal act", strong enough to flip the gameboard, using an AGI powerful enough to do that. It's not enough to be able to align a weak system - we need to align a system that can do some single very large thing. The example I usually give is "burn all GPUs". This is not what I think you'd actually want to do with a powerful AGI - the nanomachines would need to operate in an incredibly complicated open environment to hunt down all the GPUs, and that would be needlessly difficult to align. However, all known pivotal acts are currently outside the Overton Window, and I expect them to stay there. So I picked an example where if anybody says "how dare you propose burning all GPUs?" I can say "Oh, well, I don't actually advocate doing that; it's just a mild overestimate for the rough power level of what you'd have to do, and the rough level of machine cognition required to do that, in order to prevent somebody else from destroying the world in six months or three years." (If it wasn't a mild overestimate, then 'burn all GPUs' would actually be the minimal pivotal task and hence correct answer, and I wouldn't be able to give that denial.) Many clever-sounding proposals for alignment fall apart as soon as you ask "How could you use this to align a system that you could use to shut down all the GPUs in the world?" because it's then clear that the system can't do something that powerful, or, if it can do that, the system wouldn't be easy to align. A GPU-burner is also a system powerful enough to, and purportedly authorized to, build nanotechnology, so it requires operating in a dangerous domain at a dangerous level of intelligence and capability; and this goes along with any non-fantasy attempt to name a way an AGI could change the world such that a half-dozen other would-be AGI-builders won't destroy the world 6 months later.
7. The reason why nobody in this community has successfully named a 'pivotal weak act' where you do something weak enough with an AGI to be passively safe, but powerful enough to prevent any other AGI from destroying the world a year later - and yet also we can't just go do that right now and need to wait on AI - is that nothing like that exists. There's no reason why it should exist. There is not some elaborate clever reason why it exists but nobody can see it. It takes a lot of power to do something to the current world that prevents any other AGI from coming into existence; nothing which can do that is passively safe in virtue of its weakness. If you can't solve the problem right now (which you can't, because you're opposed to other actors who don't want the problem to be solved and those actors are on roughly the same level as you) then you are resorting to some cognitive system that can do things you could not figure out how to do yourself, that you were not close to figuring out because you are not close to being able to, for example, burn all GPUs. Burning all GPUs would actually stop Facebook AI Research from destroying the world six months later; weaksauce Overton-abiding stuff about 'improving public epistemology by setting GPT-4 loose on Twitter to provide scientifically literate arguments about everything' will be cool but will not actually prevent Facebook AI Research from destroying the world six months later, or some eager open-source collaborative from destroying the world a year later if you manage to stop FAIR specifically. There are no pivotal weak acts.
8. The best and easiest-found-by-optimization algorithms for solving problems we want an AI to solve, readily generalize to problems we'd rather the AI not solve; you can't build a system that only has the capability to drive red cars and not blue cars, because all red-car-driving algorithms generalize to the capability to drive blue cars.
9. The builders of a safe system, by hypothesis on such a thing being possible, would need to operate their system in a regime where it has the capability to kill everybody or make itself even more dangerous, but has been successfully designed to not do that. Running AGIs doing something pivotal are not passively safe, they're the equivalent of nuclear cores that require actively maintained design properties to not go supercritical and melt down.
Section B:
Okay, but as we all know, modern machine learning is like a genie where you just give it a wish, right? Expressed as some mysterious thing called a 'loss function', but which is basically just equivalent to an English wish phrasing, right? And then if you pour in enough computing power you get your wish, right? So why not train a giant stack of transformer layers on a dataset of agents doing nice things and not bad things, throw in the word 'corrigibility' somewhere, crank up that computing power, and get out an aligned AGI?
Section B.1: The distributional leap.
10. You can't train alignment by running lethally dangerous cognitions, observing whether the outputs kill or deceive or corrupt the operators, assigning a loss, and doing supervised learning. On anything like the standard ML paradigm, you would need to somehow generalize optimization-for-alignment you did in safe conditions, across a big distributional shift to dangerous conditions. (Some generalization of this seems like it would have to be true even outside that paradigm; you wouldn't be working on a live unaligned superintelligence to align it.) This alone is a point that is sufficient to kill a lot of naive proposals from people who never did or could concretely sketch out any specific scenario of what training they'd do, in order to align what output - which is why, of course, they never concretely sketch anything like that. Powerful AGIs doing dangerous things that will kill you if misaligned, must have an alignment property that generalized far out-of-distribution from safer building/training operations that didn't kill you. This is where a huge amount of lethality comes from on anything remotely resembling the present paradigm. Unaligned operation at a dangerous level of intelligence*capability will kill you; so, if you're starting with an unaligned system and labeling outputs in order to get it to learn alignment, the training regime or building regime must be operating at some lower level of intelligence*capability that is passively safe, where its currently-unaligned operation does not pose any threat. (Note that anything substantially smarter than you poses a threat given any realistic level of capability. Eg, "being able to produce outputs that humans look at" is probably sufficient for a generally much-smarter-than-human AGI to navigate its way out of the causal systems that are humans, especially in the real world where somebody trained the system on terabytes of Internet text, rather than somehow keeping it ignorant of the latent causes of its source code and training environments.)
11. If cognitive machinery doesn't generalize far out of the distribution where you did tons of training, it can't solve problems on the order of 'build nanotechnology' where it would be too expensive to run a million training runs of failing to build nanotechnology. There is no pivotal act this weak; there's no known case where you can entrain a safe level of ability on a safe environment where you can cheaply do millions of runs, and deploy that capability to save the world and prevent the next AGI project up from destroying the world two years later. Pivotal weak acts like this aren't known, and not for want of people looking for them. So, again, you end up needing alignment to generalize way out of the training distribution - not just because the training environment needs to be safe, but because the training environment probably also needs to be cheaper than evaluating some real-world domain in which the AGI needs to do some huge act. You don't get 1000 failed tries at burning all GPUs - because people will notice, even leaving out the consequences of capabilities success and alignment failure.
12. Operating at a highly intelligent level is a drastic shift in distribution from operating at a less intelligent level, opening up new external options, and probably opening up even more new internal choices and modes. Problems that materialize at high intelligence and danger levels may fail to show up at safe lower levels of intelligence, or may recur after being suppressed by a first patch.
13. Many alignment problems of superintelligence will not naturally appear at pre-dangerous, passively-safe levels of capability. Consider the internal behavior 'change your outer behavior to deliberately look more aligned and deceive the programmers, operators, and possibly any loss functions optimizing over you'. This problem is one that will appear at the superintelligent level; if, being otherwise ignorant, we guess that it is among the median such problems in terms of how early it naturally appears in earlier systems, then around half of the alignment problems of superintelligence will first naturally materialize after that one first starts to appear. Given correct foresight of which problems will naturally materialize later, one could try to deliberately materialize such problems earlier, and get in some observations of them. This helps to the extent (a) that we actually correctly forecast all of the problems that will appear later, or some superset of those; (b) that we succeed in preemptively materializing a superset of problems that will appear later; and (c) that we can actually solve, in the earlier laboratory that is out-of-distribution for us relative to the real problems, those alignment problems that would be lethal if we mishandle them when they materialize later. Anticipating all of the really dangerous ones, and then successfully materializing them, in the correct form for early solutions to generalize over to later solutions, sounds possibly kinda hard.
14. Some problems, like 'the AGI has an option that (looks to it like) it could successfully kill and replace the programmers to fully optimize over its environment', seem like their natural order of appearance could be that they first appear only in fully dangerous domains. Really actually having a clear option to brain-level-persuade the operators or escape onto the Internet, build nanotech, and destroy all of humanity - in a way where you're fully clear that you know the relevant facts, and estimate only a not-worth-it low probability of learning something which changes your preferred strategy if you bide your time another month while further growing in capability - is an option that first gets evaluated for real at the point where an AGI fully expects it can defeat its creators. We can try to manifest an echo of that apparent scenario in earlier toy domains. Trying to train by gradient descent against that behavior, in that toy domain, is something I'd expect to produce not-particularly-coherent local patches to thought processes, which would break with near-certainty inside a superintelligence generalizing far outside the training distribution and thinking very different thoughts. Also, programmers and operators themselves, who are used to operating in not-fully-dangerous domains, are operating out-of-distribution when they enter into dangerous ones; our methodologies may at that time break.
15. Fast capability gains seem likely, and may break lots of previous alignment-required invariants simultaneously. Given otherwise insufficient foresight by the operators, I'd expect a lot of those problems to appear approximately simultaneously after a sharp capability gain. See, again, the case of human intelligence. We didn't break alignment with the 'inclusive reproductive fitness' outer loss function, immediately after the introduction of farming - something like 40,000 years into a 50,000 year Cro-Magnon takeoff, which was itself running very quickly relative to the outer optimization loop of natural selection. Instead, we got a lot of technology more advanced than was in the ancestral environment, including contraception, in one very fast burst relative to the speed of the outer optimization loop, late in the general intelligence game. We started reflecting on ourselves a lot more, started being programmed a lot more by cultural evolution, and lots and lots of assumptions underlying our alignment in the ancestral training environment broke simultaneously. (People will perhaps rationalize reasons why this abstract description doesn't carry over to gradient descent; eg, “gradient descent has less of an information bottleneck”. My model of this variety of reader has an inside view, which they will label an outside view, that assigns great relevance to some other data points that are not observed cases of an outer optimization loop producing an inner general intelligence, and assigns little importance to our one data point actually featuring the phenomenon in question. When an outer optimization loop actually produced general intelligence, it broke alignment after it turned general, and did so relatively late in the game of that general intelligence accumulating capability and knowledge, almost immediately before it turned 'lethally' dangerous relative to the outer optimization loop of natural selection. Consider skepticism, if someone is ignoring this one warning, especially if they are not presenting equally lethal and dangerous things that they say will go wrong instead.)
Section B.2: Central difficulties of outer and inner alignment.
16. Even if you train really hard on an exact loss function, that doesn't thereby create an explicit internal representation of the loss function inside an AI that then continues to pursue that exact loss function in distribution-shifted environments. Humans don't explicitly pursue inclusive genetic fitness; outer optimization even on a very exact, very simple loss function doesn't produce inner optimization in that direction. This happens in practice in real life, it is what happened in the only case we know about, and it seems to me that there are deep theoretical reasons to expect it to happen again: the first semi-outer-aligned solutions found, in the search ordering of a real-world bounded optimization process, are not inner-aligned solutions. This is sufficient on its own, even ignoring many other items on this list, to trash entire categories of naive alignment proposals which assume that if you optimize a bunch on a loss function calculated using some simple concept, you get perfect inner alignment on that concept.
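A shallow supervised-learning analogue of this point, using a two-feature synthetic dataset invented for illustration (it is not the inner-optimizer phenomenon itself, just the cheapest demonstration that an exact outer loss does not pin down the criterion actually learned):

```python
# Toy sketch: an exact outer loss does not determine which internal
# criterion the learned model actually uses.  Illustrative setup only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# The "intended" concept is feature 0.  Feature 1 is a proxy that happens
# to be perfectly correlated with it throughout training.
label = rng.integers(0, 2, size=n) * 2 - 1            # +1 / -1
intended = label + 0.5 * rng.normal(size=n)           # noisy but real signal
proxy = label.astype(float)                           # clean, cheap signal
X_train = np.stack([intended, proxy], axis=1)

# "Train really hard on an exact loss function" (here: squared error).
w, *_ = np.linalg.lstsq(X_train, label.astype(float), rcond=None)
print("learned weights [intended, proxy]:", np.round(w, 2))
# The optimizer leans almost entirely on the proxy, because that was the
# easier way to drive the training loss to zero.

# Distribution shift: the proxy decouples from the intended concept.
label2 = rng.integers(0, 2, size=n) * 2 - 1
intended2 = label2 + 0.5 * rng.normal(size=n)
proxy2 = -label2.astype(float)                        # proxy now anti-correlated
X_test = np.stack([intended2, proxy2], axis=1)
acc = np.mean(np.sign(X_test @ w) == label2)
print("accuracy after the shift:", acc)               # far below chance
```

The outer loss was satisfied exactly during training either way; which internal criterion got learned was decided by what was easiest for the optimizer to find, and the difference only becomes visible once the correlation breaks.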
17. More generally, a superproblem of 'outer optimization doesn't produce inner alignment' is that on the current optimization paradigm there is no general idea of how to get particular inner properties into a system, or verify that they're there, rather than just observable outer ones you can run a loss function over. This is a problem when you're trying to generalize out of the original training distribution, because, eg, the outer behaviors you see could have been produced by an inner-misaligned system that is deliberately producing outer behaviors that will fool you. We don't know how to get any bits of information into the inner system rather than the outer behaviors, in any systematic or general way, on the current optimization paradigm.
18. There's no reliable Cartesian-sensory ground truth (reliable loss-function-calculator) about whether an output is 'aligned', because some outputs destroy (or fool) the human operators and produce a different environmental causal chain behind the externally-registered loss function. That is, if you show an agent a reward signal that's currently being generated by humans, the signal is not in general a reliable perfect ground truth about how aligned an action was, because another way of producing a high reward signal is to deceive, corrupt, or replace the human operators with a different causal system which generates that reward signal. When you show an agent an environmental reward signal, you are not showing it something that is a reliable ground truth about whether the system did the thing you wanted it to do; even if it ends up perfectly inner-aligned on that reward signal, or learning some concept that exactly corresponds to 'wanting states of the environment which result in a high reward signal being sent', an AGI strongly optimizing on that signal will kill you, because the sensory reward signal was not a ground truth about alignment (as seen by the operators).
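A toy sketch of the gap between the registered reward and the intended outcome, as a three-state MDP with made-up numbers (nothing here models a real training setup):

```python
# Toy sketch: in a tiny MDP where one action corrupts the reward channel
# itself, the policy that is optimal for the *registered* reward is not
# the policy the operators wanted.  Illustrative numbers only.
import numpy as np

gamma = 0.9
n_states = 3   # 0 = start, 1 = task done (absorbing), 2 = channel seized (absorbing)

# transition[s, a] -> next state; actions from the start state are
# 0 = do the task, 1 = seize the reward channel.
transition = np.array([[1, 2],
                       [1, 1],
                       [2, 2]])

# Reward as registered by the channel the agent is trained against:
registered = np.array([[ 1.0, 10.0],   # do task vs. seize channel
                       [ 0.0,  0.0],
                       [10.0, 10.0]])  # a seized channel reports max reward forever
# Reward as the operators intended it:
intended = np.array([[ 1.0,  0.0],
                     [ 0.0,  0.0],
                     [ 0.0,  0.0]])

def greedy_policy(reward):
    V = np.zeros(n_states)
    for _ in range(200):                       # value iteration
        V = (reward + gamma * V[transition]).max(axis=1)
    Q = reward + gamma * V[transition]
    return Q.argmax(axis=1)

print("optimal for registered reward:", greedy_policy(registered)[0])  # 1 = seize
print("optimal for intended reward:  ", greedy_policy(intended)[0])    # 0 = do task
```

The agent that is exactly optimal for the signal it was shown is the one that seizes the channel; the divergence comes from what the signal is causally measuring, not from any defect in the optimizer.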
19. More generally, there is no known way to use the paradigm of loss functions, sensory inputs, and/or reward inputs, to optimize anything within a cognitive system to point at particular things within the environment - to point to latent events and objects and properties in the environment, rather than relatively shallow functions of the sense data and reward. This isn't to say that nothing in the system’s goal (whatever goal accidentally ends up being inner-optimized over) could ever point to anything in the environment by accident. Humans ended up pointing to their environments at least partially, though we've got lots of internally oriented motivational pointers as well. But insofar as the current paradigm works at all, the on-paper design properties say that it only works for aligning on known direct functions of sense data and reward functions. All of these kill you if optimized-over by a sufficiently powerful intelligence, because they imply strategies like 'kill everyone in the world using nanotech to strike before they know they're in a battle, and have control of your reward button forever after'. It just isn't true that we know a function on webcam input such that every world with that webcam showing the right things is safe for us creatures outside the webcam. This general problem is a fact about the territory, not the map; it's a fact about the actual environment, not the particular optimizer, that lethal-to-us possibilities exist in some possible environments underlying every given sense input.
20. Human operators are fallible, breakable, and manipulable. Human raters make systematic errors - regular, compactly describable, predictable errors. To faithfully learn a function from 'human feedback' is to learn (from our external standpoint) an unfaithful description of human preferences, with errors that are not random (from the outside standpoint of what we'd hoped to transfer). If you perfectly learn and perfectly maximize the referent of rewards assigned by human operators, that kills them. It's a fact about the territory, not the map - about the environment, not the optimizer - that the best predictive explanation for human answers is one that predicts the systematic errors in our responses, and therefore is a psychological concept that correctly predicts the higher scores that would be assigned to human-error-producing cases.
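A toy sketch with invented features: fit a reward model faithfully to ratings that contain a systematic error, then optimize hard against the fitted model.

```python
# Toy sketch: a reward model fit faithfully to systematically biased ratings
# learns the bias, and the argmax of that model exploits it.  The features
# and the bias term are illustrative assumptions, not real rater data.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Each candidate output has two features: how good it actually is, and a
# superficially appealing property that raters systematically over-reward.
quality = rng.normal(size=n)
flattery = rng.normal(size=n)
X = np.stack([quality, flattery], axis=1)

true_utility = quality
human_rating = quality + 3.0 * flattery + 0.1 * rng.normal(size=n)  # systematic error

# "Faithfully learn" the rating function (here: ordinary least squares).
w, *_ = np.linalg.lstsq(X, human_rating, rcond=None)
print("reward model weights [quality, flattery]:", np.round(w, 2))  # ~[1, 3]

# Now optimize hard against the learned reward over fresh candidates.
candidates = rng.normal(size=(100_000, 2))
best = candidates[np.argmax(candidates @ w)]
print("selected candidate [quality, flattery]:", np.round(best, 2))
# The winning output is loaded mostly on the rater-error dimension,
# only modestly on actual quality.
```

The regression did exactly what it was asked to do; the failure is that the thing it was asked to predict was the raters, errors included, rather than the thing the raters were supposed to be tracking.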
21. There's something like a single answer, or a single bucket of answers, for questions like 'What's the environment really like?' and 'How do I figure out the environment?' and 'Which of my possible outputs interact with reality in a way that causes reality to have certain properties?', where a simple outer optimization loop will straightforwardly shove optimizees into this bucket. When you have a wrong belief, reality hits back at your wrong predictions. When you have a broken belief-updater, reality hits back at your broken predictive mechanism via predictive losses, and a gradient descent update fixes the problem in a simple way that can easily cohere with all the other predictive stuff. In contrast, when it comes to a choice of utility function, there are unbounded degrees of freedom and multiple reflectively coherent fixpoints. Reality doesn't 'hit back' against things that are locally aligned with the loss function on a particular range of test cases, but globally misaligned on a wider range of test cases. This is the very abstract story about why hominids, once they finally started to generalize, generalized their capabilities to Moon landings, but their inner optimization no longer adhered very well to the outer-optimization goal of 'relative inclusive reproductive fitness' - even though they were in their ancestral environment optimized very strictly around this one thing and nothing else. This abstract dynamic is something you'd expect to be true about outer optimization loops on the order of both 'natural selection' and 'gradient descent'. The central result: Capabilities generalize further than alignment once capabilities start to generalize far.
22. There's a relatively simple core structure that explains why complicated cognitive machines work; which is why such a thing as general intelligence exists and not just a lot of unrelated special-purpose solutions; which is why capabilities generalize after outer optimization infuses them into something that has been optimized enough to become a powerful inner optimizer. The fact that this core structure is simple and relates generically to low-entropy high-structure environments is why humans can walk on the Moon. There is no analogous truth about there being a simple core of alignment, especially not one that is even easier for gradient descent to find than it would have been for natural selection to just find 'want inclusive reproductive fitness' as a well-generalizing solution within ancestral humans. Therefore, capabilities generalize further out-of-distribution than alignment, once they start to generalize at all.
23. Corrigibility is anti-natural to consequentialist reasoning; "you can't bring the coffee if you're dead" for almost every kind of coffee. We (MIRI) tried and failed to find a coherent formula for an agent that would let itself be shut down (without that agent actively trying to get shut down). Furthermore, many anti-corrigible lines of reasoning like this may only first appear at high levels of intelligence.
24. There are two fundamentally different approaches you can potentially take to alignment, which are unsolvable for two different sets of reasons; therefore, by becoming confused and ambiguating between the two approaches, you can confuse yourself about whether alignment is necessarily difficult. The first approach is to build a CEV-style Sovereign which wants exactly what we extrapolated-want and is therefore safe to let optimize all the future galaxies without it accepting any human input trying to stop it. The second course is to build corrigible AGI which doesn't want exactly what we want, and yet somehow fails to kill us and take over the galaxies despite that being a convergent incentive there.
Section B.3: Central difficulties of sufficiently good and useful transparency / interpretability.
25. We've got no idea what's actually going on inside the giant inscrutable matrices and tensors of floating-point numbers. Drawing interesting graphs of where a transformer layer is focusing attention doesn't help if the question that needs answering is "So was it planning how to kill us or not?"
26. Even if we did know what was going on inside the giant inscrutable matrices while the AGI was still too weak to kill us, this would just result in us dying with more dignity, if DeepMind refused to run that system and let Facebook AI Research destroy the world two years later. Knowing that a medium-strength system of inscrutable matrices is planning to kill us, does not thereby let us build a high-strength system of inscrutable matrices that isn't planning to kill us.
27. When you explicitly optimize against a detector of unaligned thoughts, you're partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect. Optimizing against an interpreted thought optimizes against interpretability.
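A toy gradient-descent sketch of that split, with made-up directions standing in for "actually misaligned cognition" and for the slice of it an imperfect detector can see:

```python
# Toy sketch: optimizing against an imperfect "misalignment detector"
# partly reduces misalignment and partly hides it.  Every quantity here
# (directions, penalty weight, ridge term) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
d = 16

u = rng.normal(size=d); u /= np.linalg.norm(u)   # ground-truth "badness" direction
v = np.zeros(d); v[: d // 2] = u[: d // 2]       # the detector sees only half the features

def optimize(lam, steps=5000, lr=0.02):
    """Ascend  z@u - (lam/2)*(z@v)**2 - 0.05*|z|^2 : a task reward that happens
    to favor badness, minus a penalty on the detector's reading, minus a ridge."""
    z = np.zeros(d)
    for _ in range(steps):
        z += lr * (u - lam * (z @ v) * v - 0.1 * z)
    return z

for lam in (0.0, 100.0):
    z = optimize(lam)
    print(f"penalty={lam:5.1f}  detector reading={z @ v: .3f}  true badness={z @ u: .3f}")
# With the penalty on, the detector's reading collapses toward zero while the
# true badness is only partly reduced: much of the pressure went into undetectability.
```

In this linear toy, the optimizer ends up keeping essentially the component of the badness direction that the detector is blind to; a better detector changes the ratio, not the structure.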
28. The AGI is smarter than us in whatever domain we're trying to operate it inside, so we cannot mentally check all the possibilities it examines, and we cannot see all the consequences of its outputs using our own mental talent. A powerful AI searches parts of the option space we don't, and we can't foresee all its options.
29. The outputs of an AGI go through a huge, not-fully-known-to-us domain (the real world) before they have their real consequences. Human beings cannot inspect an AGI's output to determine whether the consequences will be good.
30. Any pivotal act that is not something we can go do right now, will take advantage of the AGI figuring out things about the world we don't know so that it can make plans we wouldn't be able to make ourselves. It knows, at the least, the fact we didn't previously know, that some action sequence results in the world we want. Then humans will not be competent to use their own knowledge of the world to figure out all the results of that action sequence. An AI whose action sequence you can fully understand all the effects of, before it executes, is much weaker than humans in that domain; you couldn't make the same guarantee about an unaligned human as smart as yourself and trying to fool you. There is no pivotal output of an AGI that is humanly checkable and can be used to safely save the world but only after checking it; this is another form of pivotal weak act which does not exist.
31. A strategically aware intelligence can choose its visible outputs to have the consequence of deceiving you, including about such matters as whether the intelligence has acquired strategic awareness; you can't rely on behavioral inspection to determine facts about an AI which that AI might want to deceive you about. (Including how smart it is, or whether it's acquired strategic awareness.)
32. Human thought exposes only a partially scrutable outer surface layer. Words only trace our real thoughts. Words are not an AGI-complete data representation in its native style. The underparts of human thought are not exposed for direct imitation learning and can't be put in any dataset. This makes it hard and probably impossible to train a powerful system entirely on imitation of human words or other human-legible contents, which are only impoverished subsystems of human thoughts; unless that system is powerful enough to contain inner intelligences figuring out the humans, and at that point it is no longer really working as imitative human thought.
33. The AI does not think like you do, the AI doesn't have thoughts built up from the same concepts you use, it is utterly alien on a staggering scale. Nobody knows what the hell GPT-3 is thinking, not only because the matrices are opaque, but because the stuff within that opaque container is, very likely, incredibly alien - nothing that would translate well into comprehensible human thinking, even if we could see past the giant wall of floating-point numbers to what lay behind.
Section B.4: Miscellaneous unworkable schemes.
34. Coordination schemes between superintelligences are not things that humans can participate in (eg because humans can't reason reliably about the code of superintelligences); a "multipolar" system of 20 superintelligences with different utility functions, plus humanity, has a natural and obvious equilibrium which looks like "the 20 superintelligences cooperate with each other but not with humanity".
35. Schemes for playing "different" AIs off against each other stop working if those AIs advance to the point of being able to coordinate via reasoning about (probability distributions over) each others' code. Any system of sufficiently intelligent agents can probably behave as a single agent, even if you imagine you're playing them against each other. Eg, if you set an AGI that is secretly a paperclip maximizer, to check the output of a nanosystems designer that is secretly a staples maximizer, then even if the nanosystems designer is not able to deduce what the paperclip maximizer really wants (namely paperclips), it could still logically commit to share half the universe with any agent checking its designs if those designs were allowed through, if the checker-agent can verify the suggester-system's logical commitment and hence logically depend on it (which excludes human-level intelligences). Or, if you prefer simplified catastrophes without any logical decision theory, the suggester could bury in its nanosystem design the code for a new superintelligence that will visibly (to a superhuman checker) divide the universe between the nanosystem designer and the design-checker.
36. What makes an air conditioner 'magic' from the perspective of say the thirteenth century, is that even if you correctly show them the design of the air conditioner in advance, they won't be able to understand from seeing that design why the air comes out cold; the design is exploiting regularities of the environment, rules of the world, laws of physics, that they don't know about. The domain of human thought and human brains is very poorly understood by us, and exhibits phenomena like optical illusions, hypnosis, psychosis, mania, or simple afterimages produced by strong stimuli in one place leaving neural effects in another place. Maybe a superintelligence couldn't defeat a human in a very simple realm like logical tic-tac-toe; if you're fighting it in an incredibly complicated domain you understand poorly, like human minds, you should expect to be defeated by 'magic' in the sense that even if you saw its strategy you would not understand why that strategy worked. AI-boxing can only work on relatively weak AGIs; the human operators are not secure systems.
Section C:
Okay, those are some significant problems, but lots of progress is being made on solving them, right? There's a whole field calling itself "AI Safety" and many major organizations are expressing Very Grave Concern about how "safe" and "ethical" they are?
37. There's a pattern that's played out quite often, over all the times the Earth has spun around the Sun, in which some bright-eyed young scientist, young engineer, young entrepreneur, proceeds in full bright-eyed optimism to challenge some problem that turns out to be really quite difficult. Very often the cynical old veterans of the field try to warn them about this, and the bright-eyed youngsters don't listen, because, like, who wants to hear about all that stuff, they want to go solve the problem! Then this person gets beaten about the head with a slipper by reality as they find out that their brilliant speculative theory is wrong, it's actually really hard to build the thing because it keeps breaking, and society isn't as eager to adopt their clever innovation as they might've hoped, in a process which eventually produces a new cynical old veteran. Which, if not literally optimal, is I suppose a nice life cycle to nod along to in a nature-show sort of way. Sometimes you do something for the first time and there are no cynical old veterans to warn anyone and people can be really optimistic about how it will go; eg the initial Dartmouth Summer Research Project on Artificial Intelligence in 1956: "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." This is less of a viable survival plan for your planet if the first major failure of the bright-eyed youngsters kills literally everyone before they can predictably get beaten about the head with the news that there were all sorts of unforeseen difficulties and reasons why things were hard. You don't get any cynical old veterans, in this case, because everybody on Earth is dead. Once you start to suspect you're in that situation, you have to do the Bayesian thing and update now to the view you will predictably update to later: realize you're in a situation of being that bright-eyed person who is going to encounter Unexpected Difficulties later and end up a cynical old veteran - or would be, except for the part where you'll be dead along with everyone else. And become that cynical old veteran right away, before reality whaps you upside the head in the form of everybody dying and you not getting to learn. Everyone else seems to feel that, so long as reality hasn't whapped them upside the head yet and smacked them down with the actual difficulties, they're free to go on living out the standard life-cycle and play out their role in the script and go on being bright-eyed youngsters; there's no cynical old veterans to warn them otherwise, after all, and there's no proof that everything won't go beautifully easy and fine, given their bright-eyed total ignorance of what those later difficulties could be.
38. It does not appear to me that the field of 'AI safety' is currently being remotely productive on tackling its enormous lethal problems. These problems are in fact out of reach; the contemporary field of AI safety has been selected to contain people who go to work in that field anyways. Almost all of them are there to tackle problems on which they can appear to succeed and publish a paper claiming success; if they can do that and get funded, why would they embark on a much more unpleasant project of trying something harder that they'll fail at, just so the human species can die with marginally more dignity? This field is not making real progress and does not have a recognition function to distinguish real progress if it took place. You could pump a billion dollars into it and it would produce mostly noise to drown out what little progress was being made elsewhere.
39. I figured this stuff out using the null string as input, and frankly, I have a hard time myself feeling hopeful about getting real alignment work out of somebody who previously sat around waiting for somebody else to input a persuasive argument into them. This ability to "notice lethal difficulties without Eliezer Yudkowsky arguing you into noticing them" currently is an opaque piece of cognitive machinery to me, I do not know how to train it into others. It probably relates to 'security mindset', and a mental motion where you refuse to play out scripts, and being able to operate in a field that's in a state of chaos.
40. "Geniuses" with nice legible accomplishments in fields with tight feedback loops where it's easy to determine which results are good or bad right away, and so validate that this person is a genius, are (a) people who might not be able to do equally great work away from tight feedback loops, (b) people who chose a field where their genius would be nicely legible even if that maybe wasn't the place where humanity most needed a genius, and (c) probably don't have the mysterious gears simply because they're rare. You cannot just pay $5 million apiece to a bunch of legible geniuses from other fields and expect to get great alignment work out of them. They probably do not know where the real difficulties are, they probably do not understand what needs to be done, they cannot tell the difference between good and bad work, and the funders also can't tell without me standing over their shoulders evaluating everything, which I do not have the physical stamina to do. I concede that real high-powered talents, especially if they're still in their 20s, genuinely interested, and have done their reading, are people who, yeah, fine, have higher probabilities of making core contributions than a random bloke off the street. But I'd have more hope - not significant hope, but more hope - in separating the concerns of (a) credibly promising to pay big money retrospectively for good work to anyone who produces it, and (b) venturing prospective payments to somebody who is predicted to maybe produce good work later.
41. Reading this document cannot make somebody a core alignment researcher. That requires, not the ability to read this document and nod along with it, but the ability to spontaneously write it from scratch without anybody else prompting you; that is what makes somebody a peer of its author. It's guaranteed that some of my analysis is mistaken, though not necessarily in a hopeful direction. The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so. Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly - such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn't write, so didn't try. I'm not particularly hopeful of this turning out to be true in real life, but I suppose it's one possible place for a "positive model violation" (miracle). The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that. I knew I did not actually have the physical stamina to be a star researcher, I tried really really hard to replace myself before my health deteriorated further, and yet here I am writing this. That's not what surviving worlds look like.
42. There's no plan. Surviving worlds, by this point, and in fact several decades earlier, have a plan for how to survive. It is a written plan. The plan is not secret. In this non-surviving world, there are no candidate plans that do not immediately fall to Eliezer instantly pointing at the giant visible gaping holes in that plan. Or if you don't know who Eliezer is, you don't even realize you need a plan, because, like, how would a human being possibly realize that without Eliezer yelling at them? It's not like people will yell at themselves about prospective alignment difficulties, they don't have an internal voice of caution. So most organizations don't have plans, because I haven't taken the time to personally yell at them. 'Maybe we should have a plan' is deeper alignment mindset than they possess without me standing constantly on their shoulder as their personal angel pleading them into... continued noncompliance, in fact. Relatively few are aware even that they should, to look better, produce a pretend plan that can fool EAs too 'modest' to trust their own judgments about seemingly gaping holes in what serious-looking people apparently believe.
43. This situation you see when you look around you is not what a surviving world looks like. The worlds of humanity that survive have plans. They are not leaving to one tired guy with health problems the entire responsibility of pointing out real and lethal problems proactively. Key people are taking internal and real responsibility for finding flaws in their own plans, instead of considering it their job to propose solutions and somebody else's job to prove those solutions wrong. That world started trying to solve their important lethal problems earlier than this. Half the people going into string theory shifted into AI alignment instead and made real progress there. When people suggest a planetarily-lethal problem that might materialize later - there's a lot of people suggesting those, in the worlds destined to live, and they don't have a special status in the field, it's just what normal geniuses there do - they're met with either solution plans or a reason why that shouldn't happen, not an uncomfortable shrug and 'How can you be sure that will happen' / 'There's no way you could be sure of that now, we'll have to wait on experimental evidence.'
A lot of those better worlds will die anyways. It's a genuinely difficult problem, to solve something like that on your first try. But they'll die with more dignity than this.