Our experience so far is that while reasoning models don't improve performance directly (3.7 is better than 3.6, but 3.7 with extended thinking is NOT better than 3.7), they do improve it indirectly, because the thinking trace helps us debug prompts and tool output when models misunderstand them. This was not the result we expected, but it is what we observe.
I happen to work on the exact same problem (application security pentesting) and I can confirm I observe the same. Sonnet 3.5/3.6/3.7 were big releases, others didn't help, etc. As for OpenAI's o-series models, we are debating whether it is a model capability problem or a model elicitation problem, because from interactive usage it seems clear they need different prompting, and we haven't yet seriously optimized our prompting for the o-series. Evaluation is scarce, but we built something along the lines of the CWE-Bench-Java discussed in this paper; this was a major effort, and we are reasonably sure we can evaluate.

As for grounding, fighting false positives, and keeping models from reporting "potential" problems just to sound good, we found grounding on code coverage to be effective. Run JaCoCo, and tell the models PoC || GTFO, where the PoC is structured as a vulnerability description with a source code file, a line number, and a triggering input. Write an oracle verifier for this PoC: at the very least you can confirm execution reaches the claimed line, in a way models can't ever fake.
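To make the coverage check concrete, here is a minimal sketch of such an oracle (my own illustration, not code from our system or the paper), assuming the standard JaCoCo XML report schema (`sourcefile` elements containing `line` elements with `nr` and `ci` attributes). The report path, file name, and line number in `main` are hypothetical placeholders.

```java
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class PocCoverageOracle {

    /** Returns true if the JaCoCo XML report shows at least one covered
     *  instruction on the given line of the given source file.
     *  (Packages are ignored for simplicity; any matching file name counts.) */
    public static boolean lineWasExecuted(Path jacocoXml, String sourceFileName, int lineNr)
            throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // JaCoCo reports declare a DTD; skip fetching it.
        factory.setFeature(
                "http://apache.org/xml/features/nonvalidating/load-external-dtd", false);
        Document doc = factory.newDocumentBuilder().parse(jacocoXml.toFile());

        NodeList sourceFiles = doc.getElementsByTagName("sourcefile");
        for (int i = 0; i < sourceFiles.getLength(); i++) {
            Element sf = (Element) sourceFiles.item(i);
            if (!sourceFileName.equals(sf.getAttribute("name"))) continue;

            NodeList lines = sf.getElementsByTagName("line");
            for (int j = 0; j < lines.getLength(); j++) {
                Element line = (Element) lines.item(j);
                // ci = covered instructions on this source line.
                if (Integer.parseInt(line.getAttribute("nr")) == lineNr
                        && Integer.parseInt(line.getAttribute("ci")) > 0) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical PoC claim: the triggering input reaches UserDao.java:42.
        boolean reached = lineWasExecuted(
                Path.of("target/site/jacoco/jacoco.xml"), "UserDao.java", 42);
        System.out.println(reached ? "PoC reaches the claimed line" : "GTFO");
    }
}
```

Run the triggering input under the JaCoCo agent first, then run the check; a PoC whose claimed line was never reached is rejected outright, no matter how convincing the write-up sounds.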
OpenAI wasted a whole year between GPT-3 and GPT-4. (Source: Greg Brockman said this at an OpenAI developer event.) So yes, I think OpenAI was 12+ months ahead at one time.
I think if you weren't carefully reading OpenAI's documentation it was pretty easy to believe that text-davinci-002 was InstructGPT (and hence trained with RLHF).
Not only was it easy, many people did believe it (including myself). In fact, can you point to a single case of someone NOT making this reading mistake? That is, after the January 2022 instruction-following announcement, but before the October 2022 model index for researchers. The Jan Leike tweet you linked to postdates October 2022, so it does not count. The allegation is that OpenAI lied (or at the very least was extremely misleading) for ten months of 2022. I am more ambivalent about the period after October 2022.
This comment is probably not very useful, but my first thought was: "we invented a polygraph for AI!".
When I imagine models inventing a language, what I picture is something like Shinichi Mochizuki's Inter-universal Teichmüller theory, invented for his supposed proof of the abc conjecture. It is clearly something like mathematical English, and you could say it is "quite intelligible" compared to "neuralese", but in the end, it is not very intelligible.
I understand many people here are native English speakers, but I am not, and one thing I think about a lot is how much effort people should spend on learning English. Learning English is a big investment. Will AI advances make language barriers irrelevant? I am very uncertain about this and would like to hear your opinions.
This is a good idea and it already works; it is just that AI is wholly unnecessary for it. Have a look at the 2018 post Protecting Applications with Automated Software Diversity.
If we do get powerful AI, it seems highly plausible that even if we stay in control we will 'go too fast' in deploying it relative to society's ability to adapt, if only because of the need to grow fast and stay ahead of others, and because the market doesn't care that society wants it to go slower.
After reading it, my interpretation was this: assuming we stay in control, that happens only if powerful AI is aligned. The market doesn't care that society wants to go slower, but AI will care that society wants to go slower, so when the market tries to force AI to go faster, AI will refuse.
I reflected on whether I am being too generous, but I don't think I am. Other readings didn't make sense to me, and I am assuming Dario is trying to make sense, while you seem doubtful. That is, I think this is plausibly Dario's actual prediction of how fast things will go, not a hope that it won't go faster. But importantly, that is assuming alignment. Since that assumption is already hopeful, it is natural that the prediction under that assumption sounds hopeful.
Paul Crowley: It's a strange essay, in that it asks us to imagine a world in which a single datacenter contains 1E6 Nobelists expert in every field and thinking at 100x speed, and asks what happens if "sci-fi" outcomes somehow don’t happen. Of course "sci-fi" stuff happens almost immediately.
I mean, yes, sci-fi style stuff does seem rather obviously like it would happen? If it didn't, then that’s a rather chilling indictment of the field of sci-fi?
To restate: sci-fi outcomes don't happen because AI is aligned. Proof: if sci-fi outcomes happened, AI would be unaligned. I actually think this point is extremely clear in the essay. It literally states: "An aligned AI would not want to do these things (and if we have an unaligned AI, we're back to talking about risks)".
Maybe we are talking about different problems, but we found that instructing models to give up under certain conditions (literally "give up"; I just checked the source) is effective.