I didn't have o1 in mind; those exact results seem consistent. Here's an example I had in mind:
Claude 3.5 Sonnet (old) scores 48% on ProtocolQA and 7.1% on BioLP-bench.
GPT-4o scores 53% on ProtocolQA and 17% on BioLP-bench.
Good post.
The craziest thing for me is that the results of different evals that are supposed to evaluate similar things, like ProtocolQA and my BioLP-bench, are highly inconsistent. For example, two models can have similar scores on ProtocolQA, yet one answers twice as many questions correctly on BioLP-bench as the other. This means we might not be measuring what we think we're measuring. And no one knows what causes this difference in results.
This is an amazing overview of the field. Even if it doesn't collect tons of upvotes, it is super important, and it saved me many hours. Thank you.
Totally agree. But in other cases, when the agent was discouraged from deceiving, it deceived anyway.
Thanks for your feedback. It's always a pleasure to see that my work is helpful to people. I hope you will write articles that are way better than mine!
Thanks for your thoughtful answer. It's interesting how I just describe my observations, and people draw conclusions from them that I hadn't thought of.
For me, it was a medication for my bipolar disorder: quetiapine.
Thanks. I got a bit clickbaity with the title.
And I'm not sure the experts are comparable, to be frank. Due to financial limitations, I used graduate students for BioLP-bench, while the authors of LAB-bench used PhD-level scientists.