Already, there are dozens of fine-tuned Llama2 models scoring above 70 on MMLU. They are laughably far from being threats. This does seem like an exceptionally low bar. With the right prompt crafting, and adjusting for errors in MMLU itself, GPT-4 has just been shown to be capable of 89 on MMLU. It would not be surprising for Llama models to exceed 80 on MMLU in the next 6 months.
I think focusing on a benchmark like MMLU is the wrong approach; any such benchmark will be very quickly outmoded. If we look at the other criteria (which, as you propose them now, any and all are a ...
If you are specifically trying to ensure that all big AI labs fall under common oversight, the most direct route is via compute budget: e.g., any organization with a compute budget of more than $100M allocated to AI research. That would capture all the big labs. (OpenAI spent over $400M on compute in 2022 alone.)
No need to complicate it with anything else.
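A minimal sketch of what such a bright-line rule amounts to, assuming the single criterion above (the field names and the $100M figure are illustrative placeholders, not a concrete legal definition):

```python
# Hypothetical sketch: a single compute-budget criterion for oversight.
# The Organization fields and the $100M threshold are illustrative only.
from dataclasses import dataclass

OVERSIGHT_THRESHOLD_USD = 100_000_000  # >$100M/year on AI-research compute


@dataclass
class Organization:
    name: str
    annual_ai_compute_budget_usd: float


def requires_oversight(org: Organization) -> bool:
    """One criterion, nothing else to complicate it."""
    return org.annual_ai_compute_budget_usd > OVERSIGHT_THRESHOLD_USD


# OpenAI reportedly spent >$400M on compute in 2022, so it would be captured.
print(requires_oversight(Organization("ExampleLab", 400_000_000)))  # True
```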
If I remember correctly, Eliezer's old site had some references to this. Wasn't this a common invocation on SL4?
Yes! Or even further, "I am now focusing my life on risk reduction and have significantly reduced akrasia in all facets of my life."
Worth noting: it's givewell.net. givewell.com links to a Visa card program; givewell.net is the site that aims to answer "Where should I donate?"
On further reflection, I'd tentatively propose something along these lines as an additional measure:
As I've now seen others suggest: trigger limits determined only as a percentage of state-of-the-art performance.
This could be implemented by giving a government agency the power to act as overseer and final arbiter, deciding once per year for the following year (and ad hoc on an emergency basis) both the metrics and the threshold percentages used to index against whatever is determined to be state of the art.
This would be done in consultation with representati...
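A rough sketch of the indexing arithmetic, assuming a hypothetical registry of agency-set SOTA scores and threshold percentages (the benchmark names, the 89.0 SOTA figure, and the 90% fraction are placeholders the agency would set annually):

```python
# Hypothetical sketch: trigger limits indexed to state-of-the-art scores.
# SOTA values and threshold fractions are placeholders; the agency would
# revise them once per year (or ad hoc on an emergency basis).
SOTA_SCORES = {"MMLU": 89.0}        # agency-determined SOTA per metric
TRIGGER_FRACTION = {"MMLU": 0.90}   # agency-determined threshold percentage


def trigger_threshold(metric: str) -> float:
    """Absolute score at which a model trips the oversight trigger."""
    return SOTA_SCORES[metric] * TRIGGER_FRACTION[metric]


def trips_trigger(metric: str, score: float) -> bool:
    return score >= trigger_threshold(metric)


# A fine-tuned model scoring 75 on MMLU stays below 0.9 * 89 = 80.1,
# so it would not trip the trigger under these placeholder settings.
print(trigger_threshold("MMLU"))    # 80.1
print(trips_trigger("MMLU", 75.0))  # False
```

The point of indexing to a percentage rather than a fixed score is that the absolute threshold moves automatically as the state of the art advances, rather than being outmoded the way a hard-coded MMLU cutoff would be.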