Nice! If I had university CS affiliations I would send them this with unsubtle comments that it would be a cool project to get students to try :P
In fact, now that I think about it, I do have one contact through the UIUC datathon. Or would you rather not have this sort of marketing?
Or would you rather not have this sort of marketing?
I would be excited to see this competition promoted widely!
(Obviously I wouldn't want to do anything that reflected really poorly on the marketers, like blackmail, but this seems to clearly not be in that category.)
Love this idea. From the linked post on the BAIR website, the idea of "prompting" a Minecraft task with e.g. a brief sequence of video frames seems especially interesting.
Would you anticipate the benchmark version of this would ask participants to disclose metrics such as "amount of task-specific feedback or data used in training"? Or does this end up being too hard to quantify because you're explicitly expecting folks to use a variety of feedback modalities to train their agents?
Would you anticipate the benchmark version of this would ask participants to disclose metrics such as "amount of task-specific feedback or data used in training"?
Probably not, just because it's pretty niche -- I expect the vast majority of papers (at least in the near future) will have only task-specific feedback, so the extra data isn't worth the additional hassle. (The prompting approach seems like it would require a lot of compute.)
Tbc, "amount of task-specific feedback" should still be inferable from research papers, where you are meant to provide enough details that others could reproduce your work. It just wouldn't be as simple as looking up the "BASALT evaluation table" for your method of choice.
That makes sense, though I'd also expect that LfLH benchmarks like BASALT could turn out to be a better fit for superscale models in general. (e.g. a BASALT analogue might do a better job of capturing the flexibility of GPT-N or DALL-E type models than current benchmarks do, though you'd probably need to define a few hundred tasks for that to be useful. It's also possible this has already been done and I'm unaware of it.)
That makes sense, though I'd also expect that LfLH benchmarks like BASALT could turn out to be a better fit for superscale models in general.
Oh yeah, it totally is, and I'd be excited for that to happen. But I think that will be a single project, whereas the benchmark reporting process is meant to apply for things where there will be lots of projects that you want to compare in a reasonably apples-to-apples way, so when designing the reporting process I'm focused more on the small-scale projects that aren't GPT-N-like.
It's also possible this has already been done and I'm unaware of it
I'm pretty confident that there's nothing like this that's been done and publicly released.
It's not quite as interesting as I initially thought, since they allow handcrafted reward functions and heuristics. It would be more interesting if the designers did not know the particular task in advance, and the AI would be forced to learn the task entirely from demonstrations and/or natural language description.
Going from zero to "produce an AI that learns the task entirely from demonstrations and/or natural language description" is really hard for the modern AI research hive mind. You instead have to give it a shaped reward: easier breadcrumbs along the way (such as allowing handcrafted heuristics, or allowing knowledge of a particular target task) to get the hive mind started on making progress.
In general:
A research AI that will train agents based on its understanding of described benchmarks does sound interesting. Although how it would get hold of things like 'human feedback', and craft setups for that, isn't clear.*
Trying to create a setup so AIs can learn from other AIs: crafting rewards seems unlikely; expert demonstrations might be doable (whether this would be more or less useful than a human demonstration, I'm not sure). There might also be the option to ask 'what would _ do in this situation?' if you can actually run _.
Edited to add:
*First I was imagining Skynet. Now I'm imagining a really weird license agreement: you may use this AI in exchange for [$ + billing schedule], but with whatever data it's trained on (even if seemingly totally unrelated), you must also have it try working on this AI benchmark (frequency based on usage), and publicly share the score -- but not necessarily pictures of outputs, since there's the risk of user data being leaked by means of being recreated in Minecraft. The user retains full responsibility for the security of such data, and is encouraged but not required to run it 'offline', i.e. not on Minecraft servers.
It's not "from zero" though, I think that we already have ML techniques that should be applicable here.
they allow handcrafted reward functions and heuristics
We allow it, but we don't think it will lead to good performance (unless you throw a very large amount of time at it).
The AI safety community claims it is hard to specify reward functions. If we actually believe this claim, we should be able to create tasks where even if we allow people to specify reward functions, they won't be able to do so. That's what we've tried to do here.
Note we do ban extraction of information from the Minecraft simulator -- you have to work with pixels, so if you want to make handcrafted reward functions, you have to compute rewards from pixels somehow. (Technically you also have inventory information but that's not that useful.) We have this rule because in a real-world deployment you wouldn't be able to simply extract the "state" of physical reality.
I am a bit more worried about allowing heuristics -- it's plausible to me that our chosen tasks are simple enough that heuristics could solve them, even though real world tasks are too complex for similar heuristics to work -- but this is basically a place where we're sticking our necks out and saying "nope, heuristics won't suffice either" (again, unless you put a lot of effort into designing the heuristics, where it would have been faster to just build the system that, say, learns from demonstrations).
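To make it concrete why handcrafted pixel-based rewards are permitted but unlikely to work well, here is a toy sketch (entirely hypothetical, not part of any actual competition code) of the kind of heuristic the rules technically allow for a MakeWaterfall-style task: reward the fraction of blue-dominant pixels in the observation.

```python
import numpy as np

def waterfall_reward(frame: np.ndarray) -> float:
    """Toy handcrafted reward for a hypothetical MakeWaterfall-style task.

    `frame` is an HxWx3 RGB array of pixel observations. We reward the
    fraction of 'water-colored' (blue-dominant) pixels. This is exactly
    the kind of heuristic that is allowed under the rules but that we
    expect to fail: it fires on oceans, rivers, and blue blocks just as
    readily as on a waterfall, and says nothing about whether the
    waterfall is aesthetically placed.
    """
    frame = frame.astype(np.float32)
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    # Blue-dominant pixels: blue clearly exceeds red and green,
    # and is bright enough to rule out dark shadows.
    watery = (b > 1.3 * r) & (b > 1.3 * g) & (b > 80)
    return float(watery.mean())
```

An RL agent trained against this signal would be rewarded for staring at the ocean; making the heuristic robust enough to avoid such exploits is the "very large amount of time" referred to above.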
It would be more interesting if the designers did not know the particular task in advance
But for real-world deployment of AI systems, designers do know the task in advance! We don't want to ban strategies that designers could use in a realistic setting.
The AI safety community claims it is hard to specify reward functions... But for real-world deployment of AI systems, designers do know the task in advance!
Right, but you're also going for tasks that are relatively simple and easy. In the sense that, "MakeWaterfall" is something that I can, based on my own experience, imagine solving without any ML at all (but ofc going to that extreme would require massive work). It might be that for such tasks solutions using handcrafted rewards/heuristics would be viable, but wouldn't scale to more complex tasks. If your task was e.g. "follow arbitrary natural language instructions" then I wouldn't care about the "lax" rules.
Note we do ban extraction of information from the Minecraft simulator
This is certainly good, but I wonder what are the exact rules here. Suppose the designer trains a neural network to recognize trees in minecraft by getting the minecraft engine to generate lots of images of trees. The resulting network is then used as a hardcoded part of the agent architecture. Is that allowed? If not, how well can you enforce it (I imagine something of the sort can be done in subtler ways)?
Not saying that what you're doing is not useful, just pointing out a certain way in which the benchmark might diverge from its stated aim.
It might be that for such tasks solutions using handcrafted rewards/heuristics would be viable, but wouldn't scale to more complex tasks.
I agree that's possible. Tbc, we did spend some time thinking about how we might use handcrafted rewards / heuristics to solve the tasks, and eliminated a couple based on this, so I think it probably won't be true here.
Suppose the designer trains a neural network to recognize trees in minecraft by getting the minecraft engine to generate lots of images of trees. The resulting network is then used as a hardcoded part of the agent architecture. Is that allowed?
No.
If not, how well can you enforce it (I imagine something of the sort can be done in subtler ways)?
For the competition, there's a ban on pretrained models that weren't publicly available prior to competition start. We look at participants' training code to ensure compliance. It is still possible to violate this rule in a way that we may not catch (e.g. maybe you use internal simulator details to do hyperparameter tuning, and then hardcode the hyperparameters in your training code), but it seems quite challenging and not worth the effort even if you are willing to cheat.
For the benchmark (which is what I'm more excited about in the longer run), we're relying on researchers to follow the rules. Science already relies on researchers honestly reporting their results -- it's pretty hard to catch cases where you just make up numbers for your experimental results.
(Also in the benchmark version, people are unlikely to write a paper about how they solved the task using special-case heuristics; that would be an embarrassing paper.)
The AI safety community claims it is hard to specify reward functions. If we actually believe this claim, we should be able to create tasks where even if we allow people to specify reward functions, they won't be able to do so. That's what we've tried to do here.
It being hard to specify reward functions for a specific task and it being hard to specify reward functions for a more general AGI seem to me like two very different problems.
Additionally, developing a safe system and developing an unsafe system are very different. Even if your reward function works 99.9% of the time, it can be exploited in those cases where it fails.
Okay, regardless of what the AI safety community claims, I want to make that claim.
(I think a substantial chunk of the AI safety community also makes that claim but I'm not interested in defending that here.)
It being hard to specify reward functions for a specific task and it being hard to specify reward functions for a more general AGI seem to me like two very different problems.
As an aside, if I thought we could build task-specific AI systems for arbitrary tasks, and only super general AI systems were dangerous, I'd be advocating really hard for sticking with task-specific AI systems and never building super general AI systems (or only building them after some really high threshold of safety was met).
if I thought we could build task-specific AI systems for arbitrary tasks, and only super general AI systems were dangerous, I'd be advocating really hard for sticking with task-specific AI systems and never building super general AI systems
The problem with this is that you need an AI whose task is "protect humanity from unaligned AIs", which is already very "general" in a way (i.e. requires operating on large scales of space, time and strategy). Unless you can effectively reduce this to many "narrow" tasks which is probably not impossible but also not easy.
I think it's very easy to say "don't build general systems, build task-specific ones", but general ones might promise a lot of economic returns.
A task like "Handle this Amazon customer query correctly" is already very general, as it includes a host of different long-tail issues about possible bugs that might appear (some of them unknown). If a customer faces an issue on a page that's likely a bug, a customer service AI profits from understanding the code that produces the issue the customer has.
Given the way economic pressures work, I see it as very probable that companies will just go ahead and look at what's most efficient for their business goals.
It's not clear that a system which doesn't use reward doesn't have the same issue (relative to "99.9% of the time").
Copying the abstract of the paper:
I also mention this in the latest Alignment Newsletter, but I think this is probably one of the best ways to get started on AI alignment from the empirical ML perspective: it will (hopefully) give you a sense of what it is like to work with algorithms that learn from human feedback, in a more realistic setting than Atari / MuJoCo, while still not requiring a huge amount of background or industry-level compute budgets.
Section 1.1 of the paper goes into more detail about the pathways to impact. At a high level, the story is that better algorithms for learning from human feedback will improve our ability to build AI systems that do what their designers intend them to do. This is straightforwardly improving on intent alignment (though it is not solving it), which in turn allows us to better govern our AI systems by enabling regulations like "your AI systems must be trained to do X" without requiring a mathematical formalization of X.