some of the contributors would decline the offer to contribute had they been told that it was sponsored by an AI capabilities company.
This is definitely true. There were ~100 mathematicians working on this (we don't know how many of them knew) and there's this.
I interpret you as insinuating that the decision not to disclose that the project was commissioned by industry was strategic. It might not have been, or maybe only to a lesser extent than one might think.
I'd guess not everyone involved was modeling how the mathematicians would feel. There are multiple (like ...
https://epoch.ai/blog/openai-and-frontiermath
On Twitter Dec 20th Tamay said the holdout set was independently funded. This blog post from today says OpenAI still owns the holdout set problems. (And that OpenAI has access to the questions but not solutions.)
The post also clarifies that the holdout set (50 problems) is not yet complete.
The blog post says Epoch requested permission ahead of the benchmark announcement (Nov 7th) and received it ahead of the o3 announcement (Dec 20th). From my look at the timings, the Arxiv paper was updated 7 hours (and so...
Not Tamay, but from elliotglazer on Reddit[1] (14h ago): "Epoch's lead mathematician here. Yes, OAI funded this and has the dataset, which allowed them to evaluate o3 in-house. We haven't yet independently verified their 25% claim. To do so, we're currently developing a hold-out dataset and will be able to test their model without them having any prior exposure to these problems.
My personal opinion is that OAI's score is legit (i.e., they didn't train on the dataset), and that they have no incentive to lie about internal benchmarking performances. How...
That was a quote from a Hacker News commenter, not my view. I referenced the comment because I thought it reflected many people's impression pre-Dec 20th. You may be right that most people didn't have the impression that it's unlikely, or that they had no reason to think so. I don't really know.
Thanks, I'll put the quote in italics so it's clearer.
FrontierMath was funded by OpenAI.[1]
The communication about this has been non-transparent, and many people, including contractors working on this dataset, have not been aware of this connection. Thanks to 7vik for their contribution to this post.
Before Dec 20th (the day OpenAI announced o3) there was no public communication about OpenAI funding this benchmark. Previous Arxiv versions v1-v4 do not acknowledge OpenAI for their support. This support was made public on Dec 20th.[1]
Because the Arxiv version mentioning OpenAI contribution came out right after o...
Hey everyone, could you spell out to me what's the issue here? I read a lot of comments that basically assume "x and y are really bad" but never spell it out. So, is the problem that:
- Giving the benchmark to OpenAI helps capabilities (but don't they already have a vast sea of hard problems to train models on?)
- OpenAI could fake o3's capabilities (why do you care so much? This would slow down AI progress, not accelerate it)
- Some other thing I'm not seeing?
This doesn't seem like a huge deal.
Tamay from Epoch AI here.
We made a mistake in not being more transparent about OpenAI's involvement. We were restricted from disclosing the partnership until around the time o3 launched, and in hindsight we should have negotiated harder for the ability to be transparent to the benchmark contributors as soon as possible. Our contract specifically prevented us from disclosing information about the funding source and the fact that OpenAI has data access to much but not all of the dataset. We own this error and are committed to doing better in the future.
For f...
It's probably worth them mentioning, for completeness, that Nat Friedman funded an earlier version of the dataset too. (I was advising at that time and made the main recommendation that it needed to be research-level, because they were focusing on Olympiad-level problems.)
I can also confirm that they aren't giving AI companies other than OpenAI (e.g., xAI) access to the mathematicians' questions.
EpochAI is also working on a "next-generation computer-use benchmark". I wonder who is funding that. It could be OpenAI, given recent rumors that they are planning to release a computer-use model early this year.
Thanks for posting this!
I have to admit, the quote here doesn't seem to clearly support your title -- I think "support in creating the benchmark" could mean lots of different things, only some of which are funding. Is there something I'm missing here?
Regardless, I agree that FrontierMath should make clear what the extent was of their collaboration with OpenAI. Obviously the details here are material to the validity of their benchmark.
Why do you consider it unlikely that companies could (or would) fish out the questions from API-logs?
We meet every Tuesday in Oakland at 6:15
I want to make sure: is this meeting still on Wednesday the 15th? Thank you. :) And thanks for organizing.
I think this is a great project. I believe your documentary would have high impact via informing and inspiring AI policy discussions. You've already interviewed an impressive amount of relevant people. I admire your initiative to take on this project quickly, even before getting funding for it.
Great post! I'm glad you did this experiment.
I've worked on experiments testing gpt-3.5-turbo-0125's performance at computing iterates of a given permutation function in one forward pass. Previously, my prompts placed some of the task instructions after the function specification. After reading your post, I altered my prompts so that all the instructions were given before the problem instance. As in your experiments, this noticeably improved performance, replicating your result that performance is better when instructions precede the problem instance.
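To make the change concrete, here is a minimal sketch of the two prompt orderings being compared. The function names, permutation encoding, and wording are hypothetical illustrations, not the actual prompts used in my experiments:

```python
# Hypothetical sketch of "instructions after" vs "instructions before" prompts
# for the permutation-iteration task. All names and wording are illustrative.

def prompt_instructions_after(perm, k, x):
    """Original ordering: the function is specified first, then the task."""
    return (
        f"Consider the permutation f = {perm}.\n"
        f"Task: compute f applied {k} times to {x}. "
        "Answer with a single number, no intermediate steps."
    )

def prompt_instructions_before(perm, k, x):
    """Revised ordering: all instructions precede the problem instance."""
    return (
        f"Task: you will be given a permutation f. Compute f applied {k} times "
        f"to {x}. Answer with a single number, no intermediate steps.\n"
        f"f = {perm}"
    )
```

The only difference between the two templates is where the problem instance (the permutation) appears relative to the instructions; everything else is held fixed, which is what lets the comparison isolate the ordering effect.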
For those skeptical about
My personal view is that there was actually very little time between whenever OpenAI received the dataset (its creation started around September; the paper came out Nov 7th) and when o3 was announced, so it makes sense that that version of o3 wasn't guided at all by FrontierMath.