My bear case for Nvidia goes like this:
I see three non-exclusive scenarios where Nvidia stops playing the important role in AI training and inference that it has played over the past 10 years:
All of these become much more likely than the current baseline (whatever that is) if AI scales quickly and generates significant value.
The third scenario doesn't actually require any replication of CUDA: if Amazon, Apple, AMD, and other companies making ASICs commoditize inference while Nvidia retains its moat in training, then, as inference scales and algorithmic efficiency improves, training will inevitably become a much smaller portion of the market.
Another point on your last sentence: in a near- or post-AGI world, one might expect the value of the kind of knowledge work Nvidia does (pure design, as opposed to manufacturing) to trend towards zero as it becomes easier for anyone with comparable compute access to replicate it. Not sure it will be possible to maintain a moat based on quality of software/hardware design in such a world.
I guess the entire "we need to build AI internally" US narrative will also increase the likelihood of Taiwan being invaded by China for its chips?
Good that we all have the situational awareness to not summon any bad memetics into the mindspace of people :D
No one really knew why tokamaks were able to achieve such impressive results. The Soviets didn’t progress by building out detailed theory, but by simply following what seemed to work without understanding why. Rather than a detailed model of the underlying behavior of the plasma, progress on fusion began to take place by the application of “scaling laws,” empirical relationships between the size and shape of a tokamak and various measures of performance. Larger tokamaks performed better: the larger the tokamak, the larger the cloud of plasma, and the longer it would take a particle within that cloud to diffuse outside of containment. Double the radius of the tokamak, and confinement time might increase by a factor of four. With so many tokamaks of different configurations under construction, the contours of these scaling laws could be explored in depth: how they varied with shape, or magnetic field strength, or any other number of variables.
Hadn't come across this analogy to current LLMs. Source: This interesting article.
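To make the analogy concrete: the quoted claim (double the radius and confinement time roughly quadruples) is just an empirical power law, the same form as LLM scaling-law fits. Here's a minimal sketch, with made-up numbers (hypothetical data, not from the article), of recovering the exponent from measurements:

```python
# Sketch: fit an empirical power law tau ≈ c * r^alpha from (radius, confinement time)
# pairs via a log-log linear fit. The data below is hypothetical, chosen so tau ∝ r^2.
import numpy as np

radii = np.array([0.5, 1.0, 2.0, 4.0])                  # tokamak radius, meters
confinement_times = np.array([0.01, 0.04, 0.16, 0.64])  # confinement time, seconds

# log(tau) = alpha * log(r) + log(c), so a degree-1 polyfit gives the exponent.
alpha, log_c = np.polyfit(np.log(radii), np.log(confinement_times), 1)
print(f"fitted exponent alpha ≈ {alpha:.2f}")        # ≈ 2: doubling r quadruples tau
print(f"fitted prefactor c ≈ {np.exp(log_c):.3f}")
```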
Nice! And the "scaling laws" terminology in this sense goes way back: