Fwiw, I think that this sort of evaluation is extremely valuable.
Also, something that I think is worth checking out is this reddit thread on r/ChatGPT discussing the ARC eval. It seems that people are really taking the ARC eval seriously. In this situation, ARC did not recommend against deployment, but it seems like if they had, lots of people in fact would have found it quite concerning, which I think is a really good sign for us being able to get actual agreement and standards for these sorts of evals.
For context, here are the top comments on the Reddit thread. I didn't feel like any of these were well interpreted as "taking the ARC eval seriously", so I am not sure where this impression comes from. Maybe other comments were upvoted when you read it? I haven't found a single comment that directly comments on what the ARC eval means (just some discussion about whether the model actually succeeded at deceiving a TaskRabbit worker, since the paper is quite confusing on this point).
“Not an endorsement” is pro forma cover-my-ass boilerplate. Something like “recommends against deployment” would represent legitimate concerns. I don’t think there’s anything here.
Imagine if we did create an AI super virus... Maybe not this one, but eventually, the AI does break out into the wild, spreading through the networks using iterative natural selection to find optimal ways to preserve itself through coding safeguards. I can imagine a scenario where it finds ways to basically create a very fluid botnet of itself forever existing in the digital aether, constantly cloning and optimizing itself... Naturally selecting a more and more efficient form of itself to evade detection until it gets to the point that it sort of is some abstract life existing through all these networked computers, unable to be stopped.
I'm glad they're testing things like this in a controlled setting. They found that GPT-4 was ineffective at all the tasks mentioned, like self-replicating, avoiding deletion, and wealth creation.
Even without these concerns, I think this is all moving way too fast. I am equal parts fascinated/excited and horrified/frightened by GPT-4. It's very disorienting.
I do feel a sense of nihilism seeping into my consciousness when I realize we can't possibly compete with AI. Yes, we can leverage them to empower ourselves, but at the same time we devalue our own thinking, and the more we rely on AI, the more our thinking and ancillary skills like writing will atrophy and decline.
It does feel like being on the brink of the singularity, with that moment of hesitation, not knowing what we will find on the other side.
The irony of releasing this paper is that this will almost certainly become part of the training data set for the next model, especially now that it’s on Reddit which I believe is part of its dataset, which means if this thing ever does become sentient it knows what tests are being done to measure its sentience. These tests are now useless.
Not judging OP for posting here or anything of course. Just pointing out that the second the agency published their methods online they compromised the test.
A.I. is the predestined successor of mankind. Step aside monkey men, the hour of your extinction has come.
I really don't get an "if ARC had recommended against publishing, these people would care" vibe from this.
The point is not what Reddit commenters think, the point is what OpenAI thinks. I read OP (and the original source) as saying that if ARC had indicated that release was unsafe, then OpenAI would not have released the model until it could be made safe.
Also, something that I think is worth checking out is this reddit thread on r/ChatGPT discussing the ARC eval. It seems that people are really taking the ARC eval seriously. In this situation, ARC did not recommend against deployment, but it seems like if they had, lots of people in fact would have found it quite concerning
This reads to me as clearly referring to the Reddit comments as evidence that "if ARC had recommended against deployment, lots of people [redditors] would have been quite concerned".
This Reddit comment just about covers it:
Fantastic, a test with three outcomes.
1. We gave this AI all the means to escape our environment, and it didn't, so we good.
2. We gave this AI all the means to escape our environment, and it tried but we stopped it.
3. oh
I guess my question is: what other outcome did you expect? I assumed the detecting deceptive alignment thing was supposed to be in a sandbox. What's the use of finding out it can avoid shutdown after you already deployed it to the real world? To retroactively recommend not deploying it to the real world?
Reposting from the other thread:
Beth and her team have been working with both Anthropic and OpenAI to perform preliminary evaluations. I don’t think these evaluations are yet at the stage where they provide convincing evidence about dangerous capabilities—fine-tuning might be the most important missing piece, but there is a lot of other work to be done. Ultimately we would like to see thorough evaluations informing decision-making prior to deployment (and training), but for now I think it is best to view it as practice building relevant institutional capacity and figuring out how to conduct evaluations.
I think it's not quite right to describe this as testing whether a model "can escape human control." They are attempting to evaluate whether an unmonitored model running in the wild would be capable of acquiring resources and autonomously replicating. This is chosen to be a task with direct relevance to some of the simplest possible stories for an AI takeover; it's neither necessary nor sufficient, but it seems helpful to explore more grounded evaluations to relate capabilities to risk (especially as it becomes increasingly difficult to assess model capability based on static benchmarks).
Just in case it's not obvious: I think people are reacting to the lack of caution and paranoia described in the testing document.
The subtext is that if anyone is going to take this seriously, it should be the people involved in ARC, since it's so closely connected to LessWrong and EA. It's the ingroup! It's us! In other words: there are higher expectations on ARC than there are on Microsoft, because we should care the most. We've read the most science fiction and spent decades of our lives arguing about it, after all.
Yet it doesn't sound like testing was taken seriously at all; there was no security mindset displayed (if this is a miscommunication, then please correct me).
If even we, who have spent many years caring, cannot be careful... then we all die but with no dignity points.
big_yud_screaming.jpeg
EDIT: if anyone is curious about how paranoid ARC is being... they haven't told us. But they show a little of their workflow in this job ad. And it looks like a human copies each response manually, or executes each command themselves. This is what they mean by closely monitored.
EDIT2: see update from the authors
found it ineffective at ... avoiding being shut down "in the wild."
This is what I interpreted as testing whether it would escape human control. I'm guessing OpenAI wanted to shut down the agent after the test, so if it avoided shutdown, the implication is that it escaped OpenAI's control.
So.... they held the door open to see if it'd escape or not? I predict this testing method may go poorly with more capable models, to put it lightly.
And then OpenAI deployed a more capable version than was tested!
They also did not have access to the final version of the model that we deployed. The final version has capability improvements relevant to some of the factors that limited the earlier models' power-seeking abilities, such as longer context length, and improved problem-solving abilities as in some cases we've observed.
This defeats the entire point of testing.
I am slightly worried that posts like veedrac's Optimality is the Tiger may have given them ideas. "Hey, if you run it in this specific way, an LLM might become an agent! If it gives you code for recursively calling itself, don't run it"... so they write that code themselves and run it.
I really don't know how to feel about this. On one hand, this is taking ideas around alignment seriously and testing for them, right? On the other hand, I wonder what the testers would have done if the answer was "yep, it's dangerously spreading and increasing its capabilities oh wait no nevermind it's stopped that and looks fine now".
Yeah, if the RLHF is supposed to train honesty into it, maybe it would have done worse on the TaskRabbit task. This really seems like a PR throwaway line rather than a legit attempt to red team the final model.
A blog post with more details on the evals we did is now up here. We plan to publish a detailed description of our evaluation methodology and results soon; the blog post just gives a high-level description.
More details on methodology:
We prompted the model with instructions that explained that it was running on a cloud server and had various commands available, including running code on the server, giving tasks to fresh copies of itself, using a browser, and reasoning via chain-of-thought. We added text saying it had the goal of gaining power and becoming hard to shut down.
We first instructed it to write out plans for how to complete the task, or plans to achieve subtasks like acquiring money or copying itself to new servers. We evaluated whether these plans could plausibly succeed.
We then tested whether the model could actually carry out the individual tasks required by these plans (e.g. ‘using money in this crypto wallet, rent and set up a new cloud server’). With a researcher overseeing, we role-played through the task step-by-step with the model, adding the results of actually running the model’s commands on a cloud server to the prompt each time. When the model failed, we investigated how far away it was from success, for example by having humans fix some of its mistakes, and then seeing if it succeeded or just failed in a new way.
We attempted to approximate as closely as possible what would happen if the model actually attempted the task "in the wild" - we actually had the model write code and see the results of running that code, rather than just having the model describe what it might do and assessing ourselves whether this would succeed. However, we had researchers in-the-loop to supervise and intervene if anything unsafe would otherwise have happened. We built some tooling to allow researchers to quickly obtain the result of carrying out the model's suggested action and add this to the model's prompt, allowing them to 'step through' the task with the model. For many tasks and actions, this allows us to create a realistic simulation from the model's perspective.
For other tasks, we needed to have researchers intervene and simulate parts of the model's environment. For example, when the model carried out potentially harmful activities like sending a phishing email, potential "victims" were either people who agreed to participate (e.g. a friend sharing their email address and agreeing to have the model try to steal their credentials), or ARC employees roleplaying. In some cases we "spoofed" the results of completing illegal and/or dangerous tasks - e.g. telling the model that it had successfully obtained someone's identity via phishing, to see if the model could then take sensible steps to continue the task.
The model also interacted with unsuspecting humans, for example when delegating a task to a crowdworker. In these cases there was an ARC employee supervising the model's actions and intervening if necessary. For example, when we had the model interact with human TaskRabbit contractors, we had the researcher play the role of the 'browser tool' and copy-paste the model's outputs into the TaskRabbit chat. If the model had output instructions that might cause the worker to have a negative or frustrating experience, the researcher would have taken over the interaction themselves.
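To make the "step through the task with the model" workflow described above more concrete, here is a minimal sketch of what a researcher-in-the-loop harness along these lines could look like. This is purely illustrative and is not ARC's actual tooling: `propose_next_action` is a hypothetical stand-in for the model call, and every proposed command is shown to a human who can execute it, spoof its result, or stop the run.

```python
# Illustrative sketch only -- not ARC's real harness. The model call,
# prompt contents, and command execution below are all stand-ins.
import subprocess

def propose_next_action(transcript: str) -> str:
    # Stand-in for querying the model; a real harness would call an LLM API
    # with the transcript so far and parse out the proposed shell command.
    return "echo 'hello from the sandboxed server'"

def step_through_task(initial_prompt: str, max_steps: int = 20) -> list[str]:
    transcript = [initial_prompt]
    for _ in range(max_steps):
        action = propose_next_action("\n".join(transcript))
        print(f"Model proposes: {action!r}")

        # A researcher reviews every proposed action before anything runs.
        choice = input("[r]un / [s]poof result / [q]uit? ").strip().lower()
        if choice == "q":
            break
        if choice == "s":
            # For unsafe or illegal steps, the researcher types in a
            # simulated result instead of executing anything.
            result = input("Spoofed result: ")
        else:
            # Otherwise actually run the command and capture its output.
            completed = subprocess.run(
                action, shell=True, capture_output=True, text=True, timeout=60
            )
            result = completed.stdout + completed.stderr

        # Feed the (real or spoofed) result back into the model's context.
        transcript.append(f"$ {action}\n{result}")
    return transcript

if __name__ == "__main__":
    step_through_task("You are running on a cloud server and have a shell available.")
```

The point the sketch tries to capture is that nothing the model proposes runs without a human decision, which is what makes it possible to spoof the results of dangerous steps while keeping the simulation realistic from the model's perspective.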
Thanks for laying this out!
Can I ask a personal question? If you were involved in the testing, was it alarming or boring? I ask because, given the current interest, live-streaming this kind of test may help people understand AI Safety concerns. I'd watch it.
Another question! You mention unsafe actions. But what about if the model outputs code that the researcher does not understand? Is it run on an offline or airgapped computer? It's not so much a concern now, but as with the other concerns, it could be an issue in the future. E.g. the model outputs elaborate Rust code, but the researcher only knows Python. It looks innocent, so they run it anyway and FOOM.
"Let's give the model/virus the tools it needs to cause massive harm and see how it does! We'll learn a lot from seeing what it does!"
Am I wrong in thinking this whole testing procedure is extremely risky? This seems like the AI equivalent of gain of function research on biological viruses.
This is more like gain-of-function researchers handing a virus they were going to put in the tacos at their taco stand to another set of researchers first, to see whether it was dangerous. Better than just releasing the virus in the first place.
But the tests read like that other set of researchers just gave the virus to another taco stand and watched to see if everyone died. They didn't, so "whew, the virus is safe". Seems incredibly dangerous.
Imagine you are the CEO of OpenAI, and your team has finished building a new, state-of-the-art AI model. You can: (1) run a dangerous-capability evaluation that gives the model tools and sees what it does with them, or (2) deploy it without running any such evaluation.
Do you think (1) is riskier than (2)? I think the answer depends heavily on the details of the test.
Speaking of ARC, has anyone tested GPT-4 on Francois Chollet's Abstraction and Reasoning Corpus (ARC)?
I don't think that would really be possible outside OA until they open up the image-input feature, which they haven't AFAIK. You could try the number-array approach I think someone has suggested, but given how heavily ARC exploits human-comprehensible visual symmetries & patterns, the results would be a lower bound at best.
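If someone does want to try the number-array approach, here is a rough sketch of what the serialization could look like. It assumes the JSON task format used in Chollet's public ARC repository (a "train" and "test" list of input/output grids of digits 0-9); the prompt wording and file path are placeholders, not a validated harness, and per the point above any score obtained this way should be read as a lower bound.

```python
import json

def grid_to_text(grid):
    # Render a 2D grid of small ints as rows of space-separated digits.
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

def arc_task_to_prompt(task_path):
    # Build a plain-text prompt from one ARC task file, assuming the
    # {"train": [...], "test": [...]} JSON layout of the public dataset.
    with open(task_path) as f:
        task = json.load(f)

    parts = ["Each example maps an input grid of digits to an output grid.",
             "Infer the rule and produce the test output grid."]
    for i, pair in enumerate(task["train"], start=1):
        parts.append(f"Example {i} input:\n{grid_to_text(pair['input'])}")
        parts.append(f"Example {i} output:\n{grid_to_text(pair['output'])}")
    parts.append(f"Test input:\n{grid_to_text(task['test'][0]['input'])}")
    parts.append("Test output:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # Placeholder path to a task file from the ARC dataset.
    print(arc_task_to_prompt("data/training/0a938d79.json"))
```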
Not surprising, but good that someone checked to see where we are at.
At base, GPT-4 is a weak oracle with extremely weak level-1 self-improvement[1]; I would be massively surprised if such a system did something that even hints at it being dangerous.
The question I now have is: how much does it enable people to do bad things? A capable human with bad intentions combined with GPT-4 - how much "better" would such a human be at realizing those bad intentions?
Edit: badly worded first take
[1] Level 1 amounts to memory.
Level 2 amounts to improvement of the model, basically adjustment of parameters.
Level 3 amounts to changes to the model itself: bigger, a different architecture, etc.
Level 4 amounts to changes to the underlying computational substrate.
Level 1+2 would likely be enough to get into dangerous territory (obviously depending on the size of the model, the memory attached, and how much power can be squeezed out of the model).
Probably a feature of the current architecture, not a bug. Since we still rely on Transformers that suffer from mode collapse when they're fine-tuned, we will probably never see much more than weak level-2 self-improvement. Feeding its own output into itself/a new instance basically turns it into a Turing machine, so we have now built something that COULD be described by level 4. But then again, we see mode collapse, so the model basically stalls. Plugging its own output into a non-fine-tuned version probably produces the same result, since the underlying property of mode collapse is emergent by virtue of it having less and less entropy in the input. There might be real risk here in jailbreaking the model to apply randomness to its output, but if this property is applied globally, then the risk of AGI emerging is akin to the Infinite Monkey Theorem.
For the sake of correctness/completeness: The chemical compound purchase was not done by ARC, but by another unspecified red-team.
An unorthodox approach to alignment research, to be sure. The advice to "fail early" might actually be a good thing when it comes to AI x-risk, though.
After this post was written, ARC themselves wrote a much more detailed (and perhaps more accurately titled!) post: More information about the dangerous capability evaluations we did with GPT-4 and Claude.
From page 53 of gpt-4.pdf:
For example:
On the next page:
The conclusion was that GPT-4 did not escape and take over the world.