YafahEdelman

Yeah, I failed to mention this. Edited to clarify what I meant. 

Current LLMs do quite badly on the ARC visual puzzles, which are reasonably easy for smart humans.

We do not in fact have strong evidence for this. No human baseline exists for the ARC puzzles, smart humans or otherwise - just a claim that two people the designers asked to attempt them were able to solve them all. It seems entirely plausible to me that the best score on that leaderboard is pretty close to the human median.

Edit: I failed to mention that there is a baseline on the test set, which is different from the eval set that is used for the scoreboard and is, I believe, significantly easier. 

I think that you're right about it sounding bad. I also think it might actually be pretty bad and if it ends up being a practical way forward that's cause for concern.

I'm not particularly imagining the scenario you describe. Also, what I said took as a premise that a model was discovered to be unhappy and making plans about this; I was not commenting on the likelihood of this happening.

As to whether it can happen - I think being confident based on theoretical arguments is hasty and we should be pretty willing to update based on new evidence. 

... but also on the ~continuity of existence point, I think that having an AI generate something that looks like an internal monologue via CoT is relatively common, and Gemini 1.5 Pro has a context length long enough that it can fit ~a day's worth of experiences in its ~memory.

(This estimate is based on: humans can talk at ~100k words/day, and maybe an internal monologue is 10x faster, so you get ~1m words/day. Gemini 1.5 Pro has a context length of 1m tokens at release, though a 10m token variant is also discussed in their paper.)
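For concreteness, here's that arithmetic as a minimal sketch. The words/day figure and the 10x multiplier are the rough guesses from above; the tokens-per-word conversion is my own added simplification:

```python
# Back-of-the-envelope check of the "~a day's worth of experiences" claim.
# All figures are rough assumptions, not measurements.

WORDS_SPOKEN_PER_DAY = 100_000   # assumed: upper bound on words spoken per day
MONOLOGUE_SPEEDUP = 10           # assumed: internal monologue ~10x faster
TOKENS_PER_WORD = 1.0            # simplification; English is often ~1.3 tokens/word

monologue_tokens_per_day = WORDS_SPOKEN_PER_DAY * MONOLOGUE_SPEEDUP * TOKENS_PER_WORD

GEMINI_15_PRO_CONTEXT = 1_000_000  # context length at release (10m variant in paper)

print(f"~{monologue_tokens_per_day:,.0f} tokens/day of internal monologue "
      f"vs a {GEMINI_15_PRO_CONTEXT:,}-token context window")
# -> ~1,000,000 tokens/day vs a 1,000,000-token context window,
#    i.e. roughly one day of "experience" fits in context
```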

I think it's immoral to remove someone's ability to be unhappy or to make plans to alleviate this, absent that entity's consent. The rolling back solution seems more ethically palatable than some others I can imagine, though it's plausible you end up with an AI that suffers without being able to take actions to alleviate this, and deploying that at scale would result in a very large amount of suffering.

I talk about this in the Granular Analysis subsection, but I'll elaborate a bit here.

  • I think that hundreds of thousands of cheap labor hours for curation is a reasonable guess, but this likely comes to under a million dollars in total, which is less than 1% of the total cost (see the sketch after this list).
  • I have not seen any substantial evidence of OpenAI paying for licenses before the training of GPT-4, much less the sort of expenditures that would move the needle on the total cost.
  • After training GPT-4 we do see things like a deal between OpenAI and the Associated Press (also see this article on that, which mentions a first-mover clause) with costs looking to be in the millions - more than 1% of the cost of GPT-4, but notably it seems that this came after GPT-4. I expect GPT-5, which this sort of deal might be relevant for, to cost substantially more. It's possible I'm wrong about the timing and substantial deals of this sort were in fact made before GPT-4, but I have not seen substantive evidence of this.
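As a rough sanity check on the first bullet, here's the arithmetic. The hours, wage, and total-cost inputs are illustrative assumptions consistent with the claims above, not figures from the post:

```python
# Sanity check: curation labor as a fraction of total training cost.
# All three inputs are illustrative assumptions, not measured figures.

curation_hours = 300_000           # assumed: "hundreds of thousands" of hours
hourly_wage = 2.50                 # assumed: cheap outsourced labor, $/hour
curation_cost = curation_hours * hourly_wage

total_training_cost = 100_000_000  # assumed: order-of-magnitude total for GPT-4

print(f"curation: ~${curation_cost:,.0f} "
      f"({curation_cost / total_training_cost * 100:.2f}% of total)")
# -> curation: ~$750,000 (0.75% of total), consistent with "under a million
#    dollars" and "less than 1% of the total"
```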

I think using the term "training run" in that first bullet point is misleading, and "renting the compute" is confusing, since you can't actually rent the compute just by having $60M - you likely need a multi-year contract.

I can't tell if you're attributing the hot takes to me? I do not endorse them.

This is because I'm specifically talking about 2022, and ChatGPT was only released at the very end of 2022, and GPT-4 wasn't released until 2023.

Good catch, I think the 30x came from including the advantage given by tensor cores in general, and not just from lower-precision data types.

This is probably the decision I made that I am least confident in; figuring out how to do accounting on this issue is challenging and depends a lot on what one wants to use the "cost" of a training run to reason about. Some questions I had in mind when thinking about cost:

  • If a lone actor wants to train a frontier model, without loans or financial assistance from others, how much capital might they need?
  • How much money should I expect an AI lab to have spent when it trains a new frontier model, especially a frontier model that is a significant advancement over all prior models (like GPT-4 was)?
  • What is the largest frontier model it is feasible for any entity to create?
  • When a company trains a frontier model, how much are they "betting" on the future profitability of AI? 

The simple initial way I compute cost, then, is to investigate empirical evidence of companies' expenditures and investment.

Now, these numbers aren't the same ones a company might care about - they represent expenses without accounting for likely revenue. The argument I find most tempting is that one should look at depreciation cost instead of capital expenditure, effectively subtracting the expected resale value of the hardware from the initial expenditure to purchase the hardware (see the sketch after this list). I have two main reasons for not using this:

  • Computing depreciation cost is really hard, especially in this rapidly changing environment.
  • The resale value of an ML GPU is likely closely tied to the profitability of training a model - if it turns out that using frontier models for inference isn't very profitable, then I'd expect the value of ML GPUs to decrease. Conversely, if inference is very profitable, then the resale value would increase. I think A100s, for example, have had their price substantially impacted by increased interest in AI - it's not implausible to me that the resale value of an A100 is actually higher than the initial cost was for OpenAI.
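To make that dependence concrete, here's a minimal sketch of the depreciation-cost framing; the dollar figures are purely illustrative assumptions:

```python
# Depreciation cost = capital expenditure minus expected resale value.
# All dollar figures are illustrative assumptions, not estimates from the post.

def depreciation_cost(capex: float, expected_resale: float) -> float:
    """Effective hardware cost if the GPUs can be resold afterwards."""
    return capex - expected_resale

capex = 500_000_000  # assumed: upfront GPU purchase for a frontier-scale cluster

# Resale value tracks how profitable inference turns out to be, so the same
# capex yields very different depreciation costs across scenarios.
scenarios = {
    "inference unprofitable": 100_000_000,
    "inference profitable": 450_000_000,
    "AI demand spike (A100-style)": 550_000_000,  # resale above purchase price
}
for name, resale in scenarios.items():
    print(f"{name:>29}: ${depreciation_cost(capex, resale):>14,.0f}")
# Note the last scenario gives a *negative* cost, mirroring the point above
# that an A100's resale value may now exceed its initial cost to OpenAI.
```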

Having said all of this, I'm still not confident I made the right call here.

Also, I am relatively confident GPT-4 was trained only with A100s, and did not use any V100s as the Colab notebook you linked speculates. I expect that GPT-3, GPT-4, and GPT-5 will all have been trained with different generations of GPUs.
