(sorry, couldn't resist)
This is the first post in an Alignment Forum sequence explaining the approaches both MIRI and OpenAI staff believe are the most promising means of auditing the cognition of very complex machine learning models. We will be discussing each approach in turn, with a focus on how they differ from one another.
The goal of this series is to provide a more complete picture of the various options for auditing AI systems than has been provided so far by any single person or organization. The hope is that it will help people make better-informed decisions about which approach to pursue.
We have tried to keep our discussion as objective as possible, but we recognize that there may well be disagreements among us on some points. If you think we've made an error, please let us know!
If you're interested in reading more about the history of AI research and development, see:
1. What Is Artificial Intelligence? (Wikipedia) 2. How Does Machine Learning Work? 3. How Can We Create Trustworthy AI?
The first question we need to answer is: what do we mean by "artificial intelligence"?
The term "artificial intelligence" has been used to refer to a surprisingly broad range of things. The three most common uses are:
The study of how to create machines that can perceive, think, and act in ways that are typically only possible for humans. The study of how to create machines that can learn, using data, in ways that are typically only possible for humans. The study of how to create machines that can reason and solve problems in ways that are typically only possible for humans.
In this sequence, we will focus on the third definition. We believe that the first two are much less important for the purpose of AI safety research, and that they are also much less tractable.
Why is it so important to focus on the third definition?
The third definition is important because, as we will discuss in later posts, it is the one that creates the most risk. It is also the one that is most difficult to research, and so it requires the most attention.
:D
How many samples did you prune through to get this? Did you do any re-rolls? What was your stopping procedure?
A general method for identifying dangers: For every topic which gets discussed on AF, figure out what could go wrong if GPT-N decided to write a post on that topic.
GPT-N writes a post about fun theory. It illustrates principles of fun theory by describing an insanely fun game you can play with an ordinary 52-card deck. FAI work gets pushed aside as everyone becomes hooked on this new game. (Procrastination is an existential threat!)
GPT-N writes a post about human safety problems. To motivate its discussion, it offers some extraordinarily compelling reasons why the team which creates the first AGI might want to keep the benefits to themselves.
GPT-N writes a post about wireheading. In the "Human Wireheading" section, it describes an incredibly easy and pleasurable form of meditation. Soon everyone is meditating 24/7.
GPT-N writes a post about s-risks. Everyone who reads it gets a bad case of PTSD.
GPT-N writes a post about existential hope. Everyone who reads it becomes unbearably impatient for the posthuman era. Security mindset becomes a thing of the past. Alternatively, everyone's motivation for living in the present moment gets totally sapped. There are several high-profile suicides.
GPT-N has an incredibly bad take on decision theory, game theory, and blackmail. It gets deleted from AF. The Streisand effect occurs and millions of people read it.
GPT-N offers a very specific answer to the question "What specific dangers arise when asking GPT-N to write an Alignment Forum post?"
For the prompt you provided, one risk would be that GPT-N says the best way to audit cognition is to look for each of these 10 different types of nefarious activity, and in describing the 10 types, it ends up writing something nefarious.
GPT-N might inadvertently write a post which presents an incredibly compelling argument for an incorrect and harmful conclusion ("FAI work doesn't matter because FAI is totally impossible"), but one hopes that you could simply use GPT-N to write a counterargument to that post to see if the conclusion is actually solid. (Seems like good practice for GPT-N posts in general.)
One class of problem comes about if GPT-N starts thinking about "what would a UFAI do in situation X":
It may be the case that solving inner alignment problems means hitting a narrow target; meaning that if we naively carry out a super-large-scale training process that spits out a huge AGI-level NN, dangerous logic is very likely to arise somewhere in the NN at some point during training. Since this concern doesn't point at any specific-type-of-dangerous-logic I guess it's not what you're after in this post; but I wouldn't classify it as part of the threat model that "we don't know what we don't know".
Having said all that, here's an attempt at describing a specific scenario as requested:
Suppose we finally train our AGI-level GPT-N and we think that the distribution it learned is "the human writing distribution", HWD for short. HWD is a distribution that roughly corresponds to our credences when answering questions like "which of these two strings is more likely to have appeared on the internet prior to 2020-07-28?". But unbeknown to us, the inductive bias of our training process made GPT-N learn the distribution HWD*, which is just like HWD except that some fraction of [the strings with a prefix that looks like "a prompt by humans-trying-to-automate-AI-safety"] are manipulative and make AI safety researchers, upon reading, invoke an AGI with a goal system X. Turns out that the inductive bias of our training process caused GPT-N to model agents-with-goal-system-X and such agents tend to sample lots of strings from the HWD* distribution in order to "steal" the cosmic endowment of reckless civilizations like ours. This would be a manifestation of is the same type of failure mode as the universal prior problem.
To me the most obvious risk (which I don't ATM think of as very likely for the next few iterations, or possibly ever, since the training is myopic/SL) would be that GPT-N in fact is computing (e.g. among other things) a superintelligent mesa-optimization process that understands the situation it is in and is agent-y. This risk is significantly more severe if nobody realizes this is the case or looking out for it.
In this case, the mesa-optimizer probably has a lot of leeway in terms of what it can say while avoiding detection. Everything is says has to stay within some "plausibility space" of arguments that will be accepted by readers (I'm neglecting more sophisticated mind-hacking, but probably shouldn't), but for many X, it can probably choose between compelling arguments for X and not-X in order to advance its goals. (If we used safety-via-debate, and it works, that would significantly restrict the "plasuability space").
Now, if we're unlucky, it can convince enough people that something that effectively unboxes it is safe and a good idea.
And once it's unboxed, we're in a Superintelligence-type scenario.
-----------------------------------------------
Another risk that could occur (without mesa-optimization) would be incidental belief-drift among alignment researchers, if it just so happens that the misalignment between "predict next token" and "create good arguments" is significant enough.
Incidental deviation from the correct specification is usually less of a concern, but with humans deciding which research directions to pursue based on outputs of GPT-N, there could be a feedback loop...
I think I believe the AI alignment research community is good enough at tracking the truth that this seems less plausible?
On the other hand, it becomes harder to track the truth if there is an alternative narrative plowing ahead making much faster progress... So if GPT-N enables much faster progress on a particular plausible seeming path towards alignment that was optimized for "next token prediction" rather than "good ideas"... I guess we could end up rolling the dice on whether "next token prediction" was actually likely to generate "good ideas".
To me the most obvious risk (which I don't ATM think of as very likely for the next few iterations, or possibly ever, since the training is myopic/SL) would be that GPT-N in fact is computing (e.g. among other things) a superintelligent mesa-optimization process that understands the situation it is in and is agent-y.
Do you have any idea of what the mesa objective might be. I agree that this is a worrisome risk, but I was more interested in the type of answer that specified, "Here's a plausible mesa objective given the incentives." Mesa optimization is a more general risk that isn't specific to the narrow training scheme used by GPT-N.
As far as I understand GPT-N it's not very agent-like (it doesn't perform me vs environment abstraction and doesn't look for ways to transform its perceived environment to satisfy some utility function). I wouldn't expect it to "scheme" against people since it lacks any concept of "affecting its environment".
However it seems likely that GTP-N can perfect the skill of crowd-pleasing (we already see that; we're constantly amazed by it, despite little meaning of created texts). It can precisely modulate it's tone and identify the talking points that get the most response.
So I expect the GTP-N generated texts to sound really persuasive, not because of novel ideas but because of superhuman ability to compose heard ideas into persuasive essay.
I would expect GTP-N to focus on presenting solutions for alignement (therefore making us overly optimistic about naive approaches), presenting novel risks (it's easy to make something up by simple rehashing) and possibly venturing in philosophical muddling the water (humans prove to be very easily engaged by certain topics, like self-consciousness)
we already see that; we're constantly amazed by it, despite little meaning of created texts
But GPT-3 is only trained to minimize prediction loss, not to maximize response. GPT-N may be able to crowd-please if it's trained on approval, but I don't think that's what's currently happening.
Last year Stuart Armstrong announced a contest to come up with the best questions to ask an Oracle AI. Wei Dai wrote,
He later related his answer to Paul Christiano's posts on Human-in-the-counterfactual-loop and Elaborations on apprenticeship learning. Here I'm interested in concrete things that can be expected to go wrong in the near future if we gave GPT-N this task.
To provide a specific example, suppose we provided the prompt,
If by assumption, GPT-N is at least as good as a human expert team at generating blog posts, we could presumably expect this GPT-N to produce a very high quality post explaining how to inspect machine learning models. We would therefore have a way of to automate alignment research at a high level. But a number of important questions remain, such as,