YafahEdelman


Fixed the link. 
 

IMO that's plausible, but it would be pretty misleading, since they described it as "o3-mini with high reasoning", had "o3-mini (high)" in the chart, and "o3-mini high" is what they call a specific option in ChatGPT.

Yeah, I failed to mention this. Edited to clarify what I meant. 

Current LLMs do quite badly on the ARC visual puzzles, which are reasonably easy for smart humans.

We do not in fact have strong evidence for this. There is no baseline for the ARC puzzles among humans, smart or otherwise, just a claim that two people the designers asked to attempt them were able to solve them all. It seems entirely plausible to me that the best score on that leaderboard is pretty close to the human median.

Edit: I failed to mention that there is a human baseline on the test set, which is different from the eval set used for the leaderboard and is, I believe, significantly easier.

I think that you're right about it sounding bad. I also think it might actually be pretty bad and if it ends up being a practical way forward that's cause for concern.

I'm not particularly imagining the scenario you describe. Also what I said had as a premise that a model was discovered to be unhappy and making plans about this. I was not commenting on the likelihood of this happening.

As to whether it can happen - I think being confident based on theoretical arguments is hasty and we should be pretty willing to update based on new evidence. 

... but also on the ~continuity-of-existence point: I think having an AI generate something that looks like an internal monologue via CoT is relatively common, and Gemini 1.5 Pro has a context length long enough that it can fit ~a day's worth of experiences in its ~memory.

(This estimate is based on: humans can talk at ~100k words/day, and maybe an internal monologue is ~10x faster, so you get ~1M words/day. Gemini 1.5 Pro had a context length of 1M tokens at release, though a 10M-token variant is also discussed in their paper.)
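As a sanity check, the arithmetic in that estimate can be spelled out; this treats words and tokens as roughly interchangeable, which is itself an approximation:

```python
# Back-of-envelope check of the estimate above; all numbers are the
# comment's rough figures, not measurements.
words_per_day = 100_000            # ~what a human can say in a day
monologue_speedup = 10             # assume internal monologue is ~10x faster
tokens_per_day = words_per_day * monologue_speedup  # ~1M tokens/day

context_at_release = 1_000_000     # Gemini 1.5 Pro context length at release
fits_in_context = tokens_per_day <= context_at_release
print(tokens_per_day, fits_in_context)  # 1000000 True
```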

I think it's immoral to remove someone's ability to be unhappy, or to make plans to alleviate this, absent that entity's consent. The rolling-back solution seems more ethically palatable than some others I can imagine, though it's plausible you end up with an AI that suffers without being able to take actions to alleviate this, and deploying that at scale would result in a very large amount of suffering.

I talk about this in the Granular Analysis subsection, but I'll elaborate a bit here.

  • I think that hundreds of thousands of cheap labor hours for curation is a reasonable guess, but this likely comes to under a million dollars in total, which is less than 1% of the total.
  • I have not seen any substantial evidence of OpenAI paying for licenses before the training of GPT-4, much less the sort of expenditures that would move the needle on the total cost.
  • After GPT-4's training we do see deals like the one between OpenAI and the Associated Press (see also this article on it, which mentions a first-mover clause), with costs looking to be in the millions: more than 1% of the cost of GPT-4, but notably this seems to have come after GPT-4. I expect GPT-5, which this sort of deal might be relevant for, to cost substantially more. It's possible I'm wrong about the timing and substantial deals of this sort were in fact made before GPT-4, but I have not seen substantive evidence of this.
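To see why curation labor plausibly stays under 1% of the total, here's a back-of-envelope sketch; the hour count, wage, and total training cost are illustrative assumptions for the calculation, not reported figures:

```python
# Illustrative numbers only: "hundreds of thousands" of hours at a
# cheap-labor wage, against a rough nine-figure training budget.
curation_hours = 500_000         # assumed: "hundreds of thousands" of hours
hourly_wage = 2.0                # assumed cheap-labor rate, USD
curation_cost = curation_hours * hourly_wage   # $1,000,000

total_cost = 100_000_000         # assumed order of magnitude for GPT-4
share = curation_cost / total_cost
print(f"${curation_cost:,.0f} is {share:.0%} of the total")
# $1,000,000 is 1% of the total
```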

I think using the term "training run" in that first bullet point is misleading, and "renting the compute" is confusing since you can't actually rent the compute just by having $60M; you likely need a multi-year contract.

I can't tell if you're attributing the hot takes to me? I do not endorse them.

This is because I'm specifically talking about 2022: ChatGPT was only released at the very end of 2022, and GPT-4 wasn't released until 2023.

Good catch. I think the 30x came from including the advantage given by tensor cores in general, not just by lower-precision data types.
