He discovered several papers that described software-assisted hardware recovery. The basic idea was simple: if hardware suffers more transient failures as it gets smaller, why not allow software to detect erroneous computations and re-execute them? This idea seemed promising until John realized THAT IT WAS THE WORST IDEA EVER. Modern software barely works when the hardware is correct, so relying on software to correct hardware errors is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO.
I happen to work for a company whose software uses checksums at many layers, and RAID encoding and low-density parity codes at the lowest layers, to detect and recover from hardware failures. It works pretty well, and the company has sold billions of dollars' worth of products for which that is a key component. Also, many (most?) enterprise servers use RAM with error-correcting codes; I think the common configuration allows it to correct single-bit errors and detect double-bit errors, and my company's machines will reset themselves when they detect double-bit errors and other problems that impugn the integrity of their runt...
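To make the layered-checksum idea concrete, here is a minimal sketch, assuming Python's zlib.crc32 as a stand-in for whatever checksums a real storage stack actually uses (the store/load names are purely illustrative):

```python
import zlib

def store(data: bytes):
    """Keep a checksum alongside the data so silent corruption is detectable."""
    return data, zlib.crc32(data)

def load(data: bytes, checksum: int) -> bytes:
    """Verify on every read; escalate to the next recovery layer instead of
    handing corrupted bytes upward."""
    if zlib.crc32(data) != checksum:
        raise IOError("checksum mismatch: recover from RAID / another replica")
    return data

blob, crc = store(b"block 42 contents")
assert load(blob, crc) == b"block 42 contents"
```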
One important difference between data storage and computation or AI: courtesy of Shannon and Hamming, we have a really good understanding of information transmission (which includes information storage). All those nice error-correction codes are downstream of very well-understood theory.
If we had theory as solid as information theory for AI and alignment, then yeah, I'd be a hell of a lot more optimistic about using one AI to oversee another somewhere in the process. Like, imagine we had the alignment analogue of an error-correcting code which provably detects two-bit errors and corrects one-bit errors with only a logarithmic amount of overhead. With theory that strong (and battle-tested in reality) it becomes plausible that unknown unknowns won't inevitably ruin all our plans.
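For reference, the code being gestured at is (I believe) an extended Hamming / SECDED code. Here is a toy sketch for 4 data bits, purely to illustrate the guarantee; real ECC operates on much larger blocks, which is where the overhead becomes logarithmic:

```python
def encode(d):
    """4 data bits -> 8-bit extended-Hamming (SECDED) codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # checks codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # checks codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # checks codeword positions 4, 5, 6, 7
    code = [p1, p2, d1, p3, d2, d3, d4]
    p0 = 0
    for b in code:               # overall parity bit enables double-error detection
        p0 ^= b
    return [p0] + code

def decode(c):
    """8-bit codeword -> (status, corrected 4 data bits or None)."""
    p0, rest = c[0], list(c[1:])
    s1 = rest[0] ^ rest[2] ^ rest[4] ^ rest[6]
    s2 = rest[1] ^ rest[2] ^ rest[5] ^ rest[6]
    s3 = rest[3] ^ rest[4] ^ rest[5] ^ rest[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-indexed position of a single flipped bit
    overall = p0
    for b in rest:
        overall ^= b                     # 0 iff total parity is still consistent
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                   # odd number of flips: assume one, fix it
        status = "corrected single-bit error"
        if syndrome != 0:
            rest[syndrome - 1] ^= 1
    else:                                # even parity but nonzero syndrome: two flips
        return "detected double-bit error", None
    return status, [rest[2], rest[4], rest[5], rest[6]]

word = encode([1, 0, 1, 1])
word[5] ^= 1                             # inject a single-bit hardware fault
print(decode(word))                      # ('corrected single-bit error', [1, 0, 1, 1])
```

The overall parity bit is what upgrades plain Hamming (single-error correction) to SECDED, i.e. it adds the double-error detection.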
Well, the basic idea "adding more safeguards decreases the likelihood they'll all fail simultaneously, as long as there isn't a perfect correlation of failure modes" is a simple mathematical fact. "What is the probability of this safeguard failing to detect a rogue AI?" is hard to answer, but "What might this new safeguard do that the other safeguards don't do?" is easier.
For example: if interpretability work gets anywhere, then one might imagine a suite of safeguards that check for parts of the developing neural net that compute things like "how to detect security holes in C or machine code" or "how quickly humans die to certain poisons" (when that's not supposed to be the goal); safeguards that check for parts of the net that have many nodes and are not understandable by the other safeguards; safeguards that inspect the usage of CPU or other resources and have some idea of what's usual; safeguards that try to look for the net thinking strategically about what resource usage looks natural; and so on. These safeguards might all suck / only work in a small fraction of cases, but if you have hundreds or thousands of them, then your odds might get decent.
Or, at least...
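For concreteness, a tiny back-of-the-envelope sketch of the "many weak safeguards" arithmetic (the probabilities are made up, and the whole calculation leans on the safeguards failing independently, which is exactly the correlation caveat mentioned above):

```python
def p_at_least_one_catches(p: float, n: int) -> float:
    """Probability that at least one of n independent checks fires,
    if each individually fires with probability p."""
    return 1 - (1 - p) ** n

for p in (0.01, 0.05):
    for n in (10, 100, 1000):
        print(f"p={p:.2f}, n={n:4d}: {p_at_least_one_catches(p, n):.3f}")
```

With p = 0.01, a hundred such checks catch the problem about 63% of the time, and a thousand catch it more than 99.99% of the time; correlated failures (or safeguards that add failure modes of their own) break this math.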
I am mostly objecting to strategies which posit one AI saving us from another as the primary mechanism of alignment - for instance, most of the strategies in 11 Proposals. If we had sufficiently great interpretability, then sure, we could maybe leverage that to make a Godzilla strategy with a decent chance of working (or at least failing in detectable-in-advance ways), but with interpretability tools that good we could probably just make a plan without Godzilla have a decent chance of working (or at least failing in detectable-in-advance ways) by doing basically the same things minus Godzilla. It's the interpretability tools which take that plan from "close to zero chance of working" to "close to 100% chance of working"; the interpretability is where all the robustness comes from. The Godzilla part adds relatively little and is plausibly net negative (due to making the ML components more complex and brittle).
(Another minor point: "adding more safeguards decreases the likelihood they'll all fail simultaneously, as long as there isn't a perfect correlation of failure modes" is only true when the "safeguards" are guaranteed to not increase the chance of failure.)
...And—as stated, each of
Individual humans do make out much better when they get to select between products from competing companies rather than monopolies, benefitting from companies going out of their way to demonstrate when their products are verifiably better than rivals'. Humans get treated better by sociopathic powerful politicians and parties when those politicians face the threat of election rivals (e.g. no famines). Small states get treated better when multiple superpowers compete for their allegiance. Competitive science with occasional refutations of false claims produces much more truth for science consumers than intellectual monopolies. Multiple sources with secret information are more reliable than one.
It's just routine for weaker, less sophisticated parties to do better in both assessment of choices and realized outcomes when multiple better-informed or more powerful parties compete for their approval, versus just one monopoly/cartel.
Also, a flaw in your analogy is that schemes that use AIs as checks and balances on each other don't mean more AIs. The choice is not between monster A and monsters A plus B, but between two copies of monster A (or a double-size monster A), and a split of one A and one B, where we hold something of value that we can use to help throw the contest to either A or B (or successors further evolved to win such contests). In the latter case there's no more total monster capacity, but there's greater hope of our influence being worthwhile and selecting the more helpful winner (which we can iterate some number of times).
So, the analogy here is that there's hundreds (or more) of Godzillas all running around, doing whatever it is Godzillas want to do. Humanity helps out whatever Godzillas humanity likes best, which in turn creates an incentive for the Godzillas to make humanity like them.
THIS DOES NOT BODE WELL FOR TOKYO'S REAL ESTATE MARKET.
Still within the analogy: part of the literary point of Godzilla is that humanity's efforts to fight it are mostly pretty ineffective. In inter-Godzilla fights, humanity is like an annoying fly buzzing around. The humans just aren't all that strategically relevant. Sure, humanity's assistance might add some tiny marginal advantage, but from a Godzilla's standpoint that advantage is unlikely to be enough to balance the tactical/strategic disadvantages of trying not to step on people.
... and that all seems like it should carry over directly to AI, once AI gets to-or-somewhat-past human level, and definitely by the time we get to strongly superhuman intelligence. Even with just human level, the scaling/coordination/learning advantages of being able to cheaply copy a mind are probably enough for the AIs to reasonably-quickly achieve strategic dominance by enough mar...
I was going to make a comment to the effect that humans are already a species of Godzilla (humans aren't safe, human morality is scary, yada yada), only to find you making the same analogy, but with an optimistic slant. :)
Competition between the powerful can lead to the ability of the less powerful to extract value. It can also lead to the less powerful being more ruthlessly exploited by the powerful as a result of their competition. It depends on the ability of the less powerful to choose between the more powerful. I am not confident humanity or parts of it will have the ability to choose between competing AGIs.
James Mickens is writing comedy. He worked in distributed systems. A "distributed system" is another way to say "a scenario in which you absolutely will have to use software to deal with your broken hardware". I can 100% guarantee that this was written with his tongue in his cheek.
The modern world is built on software that works around HW failures.
I agree that the SW/HW analogy is not a good analogy for AGI safety (I think security is actually a better analogy), but I would like to present a defence of the idea that normal systems reliability engineering is not enough for alignment (this is not necessarily a defence of any of the analogies/claims in the OP).
Systems safety engineering leans heavily on the idea that failures happen randomly and (mostly) independently, so that enough failures happening together by coincidence to break the guarantees of the system is rare. That is: if each component fails independently with probability p, the chance of k redundant components all failing at once is on the order of p^k, which shrinks rapidly as k grows.
Ok, but why isn't it better to have Godzilla fighting Mega-Godzilla instead of leaving Mega-Godzilla unchallenged?
This post is one more addition to the worrying trend on LW of asking for black-and-white solutions as if there were no middle ground. Would you say that having no army at all is better than having an army? I would feel more comfortable knowing that we have Godzilla on our side than having nothing.
One thing I can end up worrying about is that useful tricks get ignored due to a dynamic of: "this can't solve the whole alignment problem on its own, so it isn't worth developing at all."
For instance, consider debate. Debate is not magic and there's lots of things it can't do. But (constructively understood) logical operators such as "for all" and "exists" can be given meaning using a technique called "game semantics", and "debate" seems like a potential way to implement this in AI.
Can this do even a fraction of the things that people want debate to do? No. Can I think of anything that needs these game semantics? Not right now, no. But is it a tool that seems potentially powerful for the future? Yeah, I'd say so; it expands the range of things we can express, should we ever find a case where we want to express it, and so it is a good idea to be ready to deploy it.
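To make the game-semantics point slightly more concrete, here is a minimal sketch over a toy finite domain (everything here is invented for illustration; real debate setups are nothing like this simple): a universally quantified claim is a challenge for a falsifier to find a counterexample, and an existentially quantified claim is a challenge for a verifier to supply a witness.

```python
DOMAIN = range(10)

def verify_forall_exists(claim):
    """Evaluate 'for all x, there exists y such that claim(x, y)' as a game:
    the falsifier proposes any x as a challenge, the verifier answers with a y."""
    for x in DOMAIN:                      # falsifier's move
        witnesses = [y for y in DOMAIN if claim(x, y)]
        if not witnesses:                 # verifier has no winning reply
            return False, x               # this x refutes the claim
    return True, None

# "For every x, some y is strictly larger" -- false here, since x = 9 has no larger y.
print(verify_forall_exists(lambda x, y: y > x))   # (False, 9)
# "For every x, some y equals x" -- trivially true.
print(verify_forall_exists(lambda x, y: y == x))  # (True, None)
```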
I am not saying that alignment is easy to solve, or that failing it would not result in catastrophe. But all these arguments seem like universal arguments against any kind of solution at all, just because any solution will eventually involve some sort of Godzilla. It is like somebody trying to make a plane that can fly safely and not fall from the Sky, while somebody else keeps repeating "well, if anything goes wrong in your safety scheme, then the plane will fall from the Sky" or "I notice that your plane is going to fly in the Sky, which means it can potentially fall from it".
I am not saying that I have better ideas about checking whether any plan will work or not. They all inevitably involve Godzilla or Sky. And the slightest mistake might cost us our lives. But I don't think that pointing repeatedly at the same scary thing, which will be one way or the other in every single plan, will get us anywhere.
I expect there are ways of dealing with Godzilla which are a lot less brittle.
If we had excellent detailed knowledge of Godzilla's internals and psychology, if we knew what sort of things would drive Godzilla into a frenzy or slow him down or put him to sleep, if we knew how to get Godzilla to go in one direction rather than another, if we knew when and how tests on small lizards would generalize to Godzilla... those would all be robustly useful things. If we had all those pieces plus more like them, then it starts to look like a scenario where dealing with Godzilla is basically viable. There's lots of fallback options, and many opportunities to recover from errors. It's not a brittle situation which falls apart as soon as something goes wrong.
The non-straw versions of Godzilla Strategies do not start from Godzilla fighting Mega-Godzilla. Starting from that end is doomed.
It starts with, let's say, a Tokyo policeman. Notably, a Tokyo policeman isn't a scary monster, but roughly a normal human, with whom you can get some sort of mutual understanding. The next step is to create policeman[1], who also isn't a scary monster, but is a slightly more powerful, better-trained policeman (maybe trained using a bunch of copies of policeman[0]). If the relation "gen[n+1] is doing what gen[n] wants" holds at every step, the idea is that you get to a super-Tokyo-policeman who is still doing what you want. Or you get somewhere midway, where the still-aligned policeman[p] tells you "sorry, the next gen would really be a Godzilla, and I don't know how to avoid it".
(This isn't to express opinions on the viability of the first step, or the amplification procedure.)
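For concreteness, a toy control-flow sketch of the generational scheme described above (the function names are hypothetical placeholders, not a real training API):

```python
def iterated_amplification(policeman_0, train_next, endorses, max_gens=100):
    """Each generation is accepted only if the previous, still-trusted
    generation signs off on it; halt at the first refusal."""
    trusted = policeman_0
    lineage = [trusted]
    for n in range(max_gens):
        candidate = train_next(trusted)        # gen[n] helps train gen[n+1]
        if not endorses(trusted, candidate):   # "the next gen would really be a Godzilla"
            return lineage, f"halted: generation {n} refused to endorse its successor"
        trusted = candidate
        lineage.append(trusted)
    return lineage, "reached max generations, all endorsed"

# Toy stand-ins: a "policeman" is just a capability level, and each generation
# refuses to endorse anything beyond what it can still verify.
lineage, outcome = iterated_amplification(
    policeman_0=1,
    train_next=lambda p: p + 1,
    endorses=lambda trusted, cand: cand <= trusted + 1 and cand <= 5,
)
print(lineage, outcome)   # [1, 2, 3, 4, 5] halted: generation 4 refused ...
```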
Alright, so, let's imagine a chain of 100... creatures... on a smooth spectrum from policeman to Godzilla, and each is trying to keep the next creature up the chain in check. And then the mayor attempts to direct Godzilla via the policeman at one end of this chain.
THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO.
It's like someone took the Godzilla vs Mega-Godzilla plan, and said "this Godzilla-fights-Mega-Godzilla plan is WAY too simple and robust, what we need is a hundred levels of recursion to make ABSOLUTELY SURE that something goes wrong!".
Imagine more chains, often interlinked.
Some chain links will break. Which is the point - single link failures are survivable. Also for sure there are some corrupt police officers in Tokyo, but they aren't such a big deal.
I initially liked this post a lot, then saw a lot of pushback in the comments, mostly of the (very valid!) form of "we actually build reliable things out of unreliable things, particularly with computers, all the time". I think this is a fair criticism of the post (and choice of examples/metaphors therein), but I think it may be missing (one of) the core message(s) trying to be delivered.
I wanna give an interpretation/steelman of what I think John is trying to convey here (which I don't know whether he would endorse or not):
"There are important assumptions that need to be made for the usual kind of systems security design to work (e.g. uncorrelation of failures). Some of these assumptions will (likely) not apply with AGI. Therefor, extrapolating this kind of thinking to this domain is Bad™️." ("Epistemological vigilance is critical")
So maybe rather than saying "trying to build robust things out of brittle things is a bad idea", it's more like "we can build robust things out of certain brittle things, e.g. computers, but Godzilla is not a computer, and so you should only extrapolate from computers to Godzilla if you're really, really sure you know what you're doing."
But of course you can use software to mitigate hardware failures; this is how Hadoop works! You store 3 copies of all data, and if one copy gets corrupted, you can recover the true value. Error-correcting codes are another example in that vein. I had this intuition, too, that aligning AIs using more AIs will obviously fail; now you've made me question it.
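To spell out the 3-copies intuition (not how HDFS is actually implemented, which detects corruption with per-block checksums rather than a literal vote, but the recovery logic is the same in spirit):

```python
from collections import Counter

def write(value):
    return [value, value, value]        # three independent replicas

def read(replicas):
    # Majority vote recovers the value as long as at most one replica is bad.
    value, count = Counter(replicas).most_common(1)[0]
    if count < 2:
        raise IOError("no two replicas agree; data lost")
    return value

replicas = write(b"important data")
replicas[1] = b"\x00corrupted\x00"      # one copy silently rots
assert read(replicas) == b"important data"
```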
Fixing hardware failures in software is literally how quantum computing is supposed to work, and it's clearly not a silly idea.
Generally speaking, there's a lot of appeal to intuition here, but I don't find it convincing. This isn't good for Tokyo property prices? Well maybe, but how good of a heuristic is that when Mechagodzilla is on its way regardless?
I'm surprised that this failure mode is so common. Like... obviously if you unleash one powerful but not well understood force to counteract another powerful but not well understood force, you will likely end up dealing with two powerful but not well understood forces. A magnified cane toad effect of sorts.
Downvoted; this is very far from a well-structured argument, and it doesn't give me intuitions I can trust either.
So either we:
1. Find some way to align AI (with or without Godzilla strategies), or
2. Enforce a moratorium on building it.
But when option one is proposed, people say that it has proved to be probably infeasible, and when option two is proposed, people say that the political and economic systems at present cannot be shifted to make such a moratorium happen effectively. If you really believed that alignment was likely impossible, you would advocate for #2 even if you didn't think it was likely to happen due to politics. The pessimism here just doesn't make any sense to me.
What if one of the Godzillas is a 1,000x sped-up brain emulation of Eliezer Yudkowsky? (Possibly self-modifying, possibly not)
Thank you for writing this. I needed a conceptual handle like this to give shape to an intuition that's been hanging around for a while.
It seems to me that our current civilizational arrangement is itself poorly aligned, or at least prone to generating unaligned subentities. In other words, we have a generalized agent-alignment problem. Asking unaligned non-AI agents to align an AI is a Godzilla strategy, and as such, work on aligning already-existing entities is instrumental for AI alignment.
(On a side note, I suspect that there's a lot of overlap between AI alignment and generalized alignment but that's another argument entirely.)
The basic idea was simple: if hardware suffers more transient failures as it gets smaller, why not allow software to detect erroneous computations and re-execute them? This idea seemed promising until John realized THAT IT WAS THE WORST IDEA EVER. Modern software barely works when the hardware is correct, so relying on software to correct hardware errors is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO.
Having worked at Google for several years, they are legendary masters of "allo...
It's kind of an aside, but I think this about safety systems in general. Don't give me a backup system to shut down the nuclear reactor if the water stops pumping; design it so the reaction depends on the water. Don't give me great ways to dispose of a chemical that destroys your flesh if it touches you; don't make the chemical to begin with. Don't give me a super-strong set of policies to keep the gain-of-function virus in the lab; don't make gain-of-function viruses. Wish they'd listened to that last one 3 years ago.
Admittedly it may be too late in a lot of w...
I don't have a very insightful comment, but I strongly downvoted this post and I kinda feel the need to justify myself when I do that.
Summary of post: John Wentworth argues that AI Safety plans which involve using powerful AIs to oversee other powerful AIs are brittle by default. In order to get such situations to work, we need to have already solved the hard parts of alignment, including having a really good understanding of our systems. Some people respond to these situations by thinking of specific failure modes we must avoid, but that approach of,...
Most AI safety criticisms carry a multitude of implicit assumptions. This argument grants those assumptions and attacks the wrong strategy.
We are better off improving a single high-level AI than making a second one. There is no battle between multiple high-level AIs if there is only one.
Godzilla strategies now in action: https://simonwillison.net/2022/Sep/12/prompt-injection/#more-ai :)
It seems to me that it is quite possible that language models develop into really good world modelers before they become consequentialist agents or contain consequentialist subagents. While I would be very concerned with using an agentic AI to control another agentic AI for the reasons you listed and so am pessimistic about eg debate, AI still seems like it could be very useful for solving alignment.
You seem to believe that any plan involving what you call "godzilla strategies" is brittle. This is something I am not confident in. Someone may find some strategy that can be shown to not be brittle.
Referring to all forms of debate, overseeing, etc. as "Godzilla strategies" is loaded language. Should we refrain from summoning Batman because we may end up summoning Godzilla by mistake? Ideally, we want to solve alignment without summoning anything. However, applying some humility, we should consider that the problem may be too difficult for human intelligence to solve.
I read your critique as roughly "Our prior on systems more powerful than us should be that they are not controllable or foreseeable, so when trying to use one system as a tool for another system's safety, we cannot even know all the failure modes."
I think this is true if the systems are general enough that we cannot predict their behavior. However, my impression of, e.g., debate or AI helpers for alignment research is that those would be narrow, e.g., only next-token prediction. The Godzilla analogy implies something where we have no say in its design and cannot reason about its decisions, which both seem off looking at what current language models can do.
What if we
resurrected literal Godzilla to the future to fight AI
There’s a lot of AI alignment strategies which can reasonably be described as “ask Godzilla to prevent Mega-Godzilla from terrorizing Japan”. Use one AI to oversee another AI. Have two AIs debate each other. Use one maybe-somewhat-aligned AI to help design another. Etc.
Alignment researchers discuss various failure modes of asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. Maybe one of the two ends up much more powerful than the other. Maybe the two make an acausal agreement. Maybe the Nash Equilibrium between Godzilla and Mega-Godzilla just isn’t very good for humans in the first place. Etc. These failure modes are useful for guiding technical research.
… but I worry that talking about the known failure modes misleads people about the strategic viability of Godzilla strategies. It makes people think (whether consciously/intentionally or not) “well, if we could handle these particular failure modes, maybe asking Godzilla to prevent Mega-Godzilla from terrorizing Japan would work”.
What I like about the Godzilla analogy is that it gives a strategic intuition which much better matches the real world. When someone claims that their elaborate clever scheme will allow us to safely summon Godzilla in order to fight Mega-Godzilla, the intuitively-obviously-correct response is “THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO”.
“But look!” says the clever researcher, “My clever scheme handles problems X, Y and Z!”
Response:
“Ok, but what if we had a really good implementation?” asks the clever researcher.
Response:
“Oh come on!” says the clever researcher, “You’re not even taking this seriously! At least say something about how it would fail.”
Don’t worry, we’re going to get to that. But before we do: let’s imagine you’re the Mayor of Tokyo evaluating a proposal to ask Godzilla to fight Mega-Godzilla. Your clever researchers have given you a whole lengthy explanation about how their elaborate and clever safeguards will ensure that this plan does not destroy Tokyo. You are unable to think of any potential problems which they did not address. Should you conclude that asking Godzilla to fight Mega-Godzilla will not result in Tokyo’s destruction?
No. Obviously not. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO. You may not be able to articulate why the answer is obviously “no”, but asking Godzilla to fight Mega-Godzilla will still obviously destroy Tokyo, and your intuitions are right about that even if you are unable to articulate clever arguments.
With that said, let’s talk about why those intuitions are right and why the Godzilla analogy works well.
Brittle Plans and Unknown Unknowns
The basic problem with Godzilla plans is that they’re brittle. The moment anything goes wrong, the plan shatters, and then you’ve got somewhere between one and two giant monsters rampaging around downtown.
And of course, it is a fundamental Law of the universe that nothing ever goes exactly according to plan. Especially when trying to pit two giant monsters against each other. This is the sort of situation where there will definitely be unknown unknowns.
Unknown unknowns + brittle plan = definitely not rising property values in Tokyo.
Do we know what specifically will go wrong? No. Will something go wrong? Very confident yes. And brittleness means that whatever goes wrong, goes very wrong. Errors are not recoverable, when asking Godzilla to fight Mega-Godzilla.
If we use one AI to oversee another AI, and something goes wrong, that’s not a recoverable error; we’re using AI assistance in the first place because we can’t notice the relevant problems without it. If two AIs debate each other in hopes of generating a good plan for a human, and something goes wrong, that’s not a recoverable error; it’s the AIs themselves which we depend on to notice problems. If we use one maybe-somewhat-aligned AI to build another, and something goes wrong, that’s not a recoverable error; if we had better ways to detect misalignment in the child we’d already have used them on the parent.
The real world will always throw some unexpected problems at our plans. When asking Godzilla to fight Mega-Godzilla, those problems are not recoverable. THIS DOES NOT LEAD TO RISING PROPERTY VALUES IN TOKYO.
Meta note: I expect this post to have a lively comment section! Before you leave the twentieth comment saying that maybe Godzilla fighting Mega-Godzilla is better than Mega-Godzilla rampaging unchallenged, maybe check whether somebody else has already written that one, so I don't need to write the same response twenty times. (But definitely do leave that comment if you're the first one, I intentionally kept this essay short on the assumption that lots of discussion would be in the comments.)