Setting aside AI, what do we do about it?
If we do consider AI, how does the analysis change? My guess is that either we'll have enough abundance that these questions of cost and living standards aren't relevant, or we'll have other, more important problems to worry about.
Suppose you are correct and that OpenPhil did indeed believe in long timelines pre-ChatGPT. Does this reflect badly on them? It seems like a reasonable prior to me, and many senior researchers even within OA were uncertain that their methods would scale to more powerful systems.
Might it be simpler to remove the source of model non-determinism that causes different results at temperature 0? If it's due to a hardware bug, then this seems like a signal that the node should be replaced. If it's due to a software bug, then it should be patched.
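As a sketch of what I mean by first pinning down where the divergence comes from: the snippet below replays the same request at temperature 0 and counts distinct completions. It assumes an OpenAI-compatible chat endpoint; the BASE_URL, MODEL, and prompt are placeholders, not anything from the original setup. If a single node keeps producing different greedy decodes for identical requests, that's the evidence I'd want before deciding between replacing the node and patching the software.

```python
# Minimal determinism probe: send the same prompt N times at temperature 0
# to an OpenAI-compatible /v1/chat/completions endpoint and count how many
# distinct completions come back. Endpoint, model name, and prompt are
# placeholders for whatever node/model is under suspicion.
import collections
import requests

BASE_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint
MODEL = "my-model"                                       # placeholder model name
PROMPT = "List the first ten prime numbers."
N_TRIALS = 20

def sample_once() -> str:
    """Run one greedy (temperature-0) completion and return the text."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": PROMPT}],
        "temperature": 0,
        "max_tokens": 128,
    }
    resp = requests.post(BASE_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    counts = collections.Counter(sample_once() for _ in range(N_TRIALS))
    print(f"{len(counts)} distinct completions across {N_TRIALS} trials")
    for text, n in counts.most_common():
        print(f"{n:3d}x  {text[:60]!r}")
```

Running this against each node separately (rather than through a load balancer) is what would distinguish "one flaky node" from "the serving stack is non-deterministic everywhere."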
Cool result! Do you know why they used Llama 2 instead of Llama 3? The paper was released recently.
This seems right - I was confused about the original paper. My bad.
Yep, I think you're right, thanks for pointing this out.
Google/DeepMind has publicly advocated preserving CoT Faithfulness/Monitorability for as long as possible. However, they are also leading the development of new architectures like Hope and Titans, which would bypass this with continuous memory. I notice I am confused. Is the plan to develop these architectures and not deploy them? If so, why did they publish them?
Edit: Many people have correctly pointed out that Hope and Titans don't break CoT and are a separate architectural improvement. Therefore I no longer endorse the above take. Thanks for correcting my confusion!
The smaller number of NVL72s currently in operation can only serve large models to a smaller user base.
Do you know the reason for the NVL72 delay? I thought it was announced in March 2024.
Out of curiosity, what was your top choice?