Adversarial Policies Beat Professional-Level Go AIs
An interesting adversarial attack on KataGo, a professional-level Go AI. Apparently funded by the Fund for Alignment Research (FAR); it seems like a good use of funds.
The AI 2027 Compute Forecast basically ignores China in its Compute Production section, and I don't think that can be justified. This paper from Huawei is a timely reminder.
I used to think that while OpenAI is pretty deceitful (e.g., the for-profit conversion), it generally wouldn't lie about its research. This is a pretty definitive case of lying, so I updated accordingly. I am posting it here because it doesn't seem to be widely known.
Recently, an automated reasoning system was developed to solve IMO problems. It is a very impressive and exciting advance, but it must be noted that IMO problems are meant to be solved in two days. It was also recently proposed to use this advance to greatly automate formal verification of practical...
Comment deadline is June 10, 2023.
The announcement is of obvious importance to global AI governance. As I understand it, you can email your comments to wajscy@cac.gov.cn, and I recommend that everyone do so.
Finbarr Timbers makes a point that is obvious in retrospect but that many people, including people forecasting AI timelines, seem to miss: since training cost is amortized over inference, the optimal training setup depends on the expected amount of inference. The scaling laws from both OpenAI and DeepMind assume zero (or negligible) inference, which is obviously...
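A minimal sketch of the amortization argument, using the Chinchilla-style loss fit L(N, D) = E + A/N^a + B/D^b with the published constants from Hoffmann et al., and the standard rules of thumb of ~6ND training FLOPs and ~2N inference FLOPs per token. The target loss (2.0) and lifetime inference volume (1e13 tokens) are made-up illustrative numbers, not a claim about any real deployment:

```python
import math

# Chinchilla loss fit from Hoffmann et al. (2022): L(N, D) = E + A/N^a + B/D^b
E, A, B, a, b = 1.69, 406.4, 410.7, 0.34, 0.28

def optimal_model_size(target_loss, inference_tokens):
    """Return the model size N (parameters) minimizing total FLOPs at a fixed loss.

    Training cost is approximated as 6*N*D FLOPs and inference cost as
    2*N FLOPs per generated token.
    """
    best_cost, best_n = math.inf, None
    for i in range(500):                      # log-spaced grid, ~3e9..1e12 params
        n = 10 ** (9.5 + 2.5 * i / 499)
        rem = target_loss - E - A / n**a      # loss budget left for the data term
        if rem <= 0:
            continue                          # this N can't reach the target at any D
        d = (B / rem) ** (1 / b)              # training tokens needed to hit target
        cost = 6 * n * d + 2 * n * inference_tokens
        if cost < best_cost:
            best_cost, best_n = cost, n
    return best_n

# With negligible inference we recover a Chinchilla-style optimum; with heavy
# inference (here a hypothetical 1e13 lifetime tokens), a smaller model
# trained on more tokens becomes cheaper overall.
n_train_only = optimal_model_size(2.0, 0)
n_with_inference = optimal_model_size(2.0, 1e13)
assert n_with_inference < n_train_only
```

The point is qualitative: adding an inference term that grows with N can only push the cost-optimal model size downward, so "compute-optimal" depends on how much inference you expect to serve.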
> We performed a blind pairwise comparison between text-davinci-003 and Alpaca 7B, and we found that these two models have very similar performance: Alpaca wins 90 versus 89 comparisons against text-davinci-003.

Interestingly, Alpaca is trained using supervised finetuning, not RLHF. (text-davinci-003 is trained using RLHF.) This seems to confirm my...