Thanks for the post and the critiques. I won't respond at length, other than to say two things: (i) it seems right to me that we'll need something like licensing or pre-approval of deployments, and ideally also of decisions to train particularly risky models. Such a regime would be undergirded by various compute governance efforts to identify and punish non-compliance. This could e.g. involve cloud providers needing to check that a customer buying more than X compute has the relevant license, or to confirm that they are not using the compute to train a model above a certain size. In short, my view is that what's needed are more intense versions of what's proposed in the paper. Though I'll note that there are lots of things I'm unsure about. E.g. there are issues with putting regulation in place while the requirements that would be imposed on development are still so nascent.
(ii) the primary value and goal of the paper, in my mind (as suggested by Justin), is in pulling together a fairly broad coalition of authors from many different organizations to make the case for regulation of frontier models. Writing pieces with lots of co-authors is difficult, especially when the topic is contentious, as this one is, and will often lead to recommendations being weaker than they otherwise would be. But overall, I think that's worth the cost. It's also useful to note that I think it can be counterproductive for calls for regulation (in particular, regulation that is considered especially onerous) to come loudly from industry actors, who people may assume have ulterior motives.