One quick observation about NVDA dividends that not many people might be aware of: NVDA pays a quarterly dividend of exactly one cent ($0.01) per share. They don't do this for the "usual" reason companies pay dividends (returning money to shareholders), but because paying any non-zero dividend at all makes NVDA eligible for dividend-paying company indexes, which means that ETFs tracking those indexes will buy NVDA shares. So they technically pay a dividend, but for valuation purposes you should think of it as a non-dividend-paying stock.
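To put a number on how negligible the payout is, here is a quick back-of-the-envelope calculation; the share price in it is a placeholder assumption, not a quote:

```python
# Back-of-the-envelope dividend yield; share_price is a hypothetical assumption.
quarterly_dividend = 0.01          # $0.01 per share, paid four times a year
annual_dividend = 4 * quarterly_dividend

share_price = 120.00               # placeholder; plug in the current quote
yield_pct = 100 * annual_dividend / share_price

print(f"Annual dividend: ${annual_dividend:.2f} per share")
print(f"Dividend yield at a ${share_price:.0f} share price: {yield_pct:.3f}%")
```

At any remotely realistic share price the yield comes out to a few hundredths of a percent, i.e. a rounding error for valuation purposes.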
Regarding ...
...Taking your Tetris example: sure, 6KB seems small, as long as you restrict yourself to the space of all possible programs for the Game Boy or whichever platform you took this example from. But if your goal is to encode Tetris for a computer engineer who has no knowledge of the Game Boy, you will have to include, at the very least, the documentation on the CPU ISA, the hardware architecture of the device, and the details of the quirks of its I/O hardware. That would already bring the "size of Tetris" to tens of megabytes. Describing it for a person from the 1950s would, I suspect, take far more still.
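This is basically the invariance theorem for Kolmogorov complexity; in the standard textbook form (not from the parent, just for reference):

```latex
% Invariance theorem: for any two (universal) machines U and V there is a
% constant c_{U,V}, independent of the string x, such that
K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all } x.
% The constant c_{U,V} is roughly the length of a V-interpreter written
% for U, i.e. the "CPU ISA plus hardware docs" in the Tetris example.
```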
I don't think it affects the essence of your argument, but I would say that you cannot get a good estimate of the Kolmogorov complexity of Word or other modern software from binary size. The Kolmogorov complexity of Word should properly be the size of the smallest binary that would execute in a way indistinguishable from Word. There are very good reasons to think that the existing Word binary is significantly larger than that.
Modern software development practices optimize for a combination of factors in which binary size has very little weight. Development and maintenance costs matter far more than shipping the smallest possible binary.
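In symbols, taking U to be some fixed reference machine (say, x86-64 plus the OS interfaces the binary actually touches, an assumption for illustration):

```latex
% Kolmogorov complexity of "Word" relative to a reference machine U:
K_U(\mathrm{Word}) = \min \{\, |p| : U(p) \text{ is behaviourally indistinguishable from Word} \,\}
% The shipped binary is just one particular program p, so its size is only
% an upper bound on K_U(Word), and plausibly a very loose one.
```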
It would be great to prevent it, but it also seems very hard? Is there anything short of an international agreement with serious teeth that could have a decent chance of doing it? I suppose US-only legislation could maybe delay it for a few years and would be worth doing, but that also seems like a very big lift in the current climate.
Really fantastic primer! I have been meaning to learn more about DeFi and this was a perfect intro.
Does anybody know of good resources for someone who wants to learn more, not just on the investing/trading side but also about developing smart contracts, beyond the many links in the article?
Are there good books on the topic? Or tutorials?
What about subreddits or Discord servers? People to follow on Twitter?
I get what you are saying. You have convinced me that the following two statements are contradictory:
My confusion is that intuitively it seems both must be true for a rational agent, but I guess my intuition is just wrong.
Thanks for your comments, they were very illuminating.
I think you are not allowed to refer explicitly to utility in the options.
I was going to answer that I can easily reword my example to not explicitly mention any utility values, but when I tried to do that it very quickly led to something where it is obvious that u(A) = u(C). I guess my rewording was basically going through the steps of the proof of the VNM theorem.
I am still not sure I am convinced by your objection, as I don't think there's anything self-referential in my example, but that did give me some pause.
The tricky bit is whether this also applies to one-shot problems or not.
This is the crux. It seems to me that the expected utility framework means that if you prefer A to B in a one-time choice, then you must also prefer n repetitions of A to n repetitions of B, because the fact that you have larger variance for n=1 does not matter. This seems intuitively wrong to me.
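To spell out the step that makes variance drop out, under the (strong) extra assumption that the utility of n repetitions is just the sum of the per-trial utilities (a sketch of the reasoning, not something asserted elsewhere in the thread):

```latex
% Assume (strongly) that the utility of n independent repetitions is the
% sum of the per-trial utilities. Then, by linearity of expectation,
E\left[\sum_{i=1}^{n} U(A_i)\right] = n\,E[U(A)]
\qquad\text{and}\qquad
E\left[\sum_{i=1}^{n} U(B_i)\right] = n\,E[U(B)].
% So E[U(A)] > E[U(B)] implies the n-fold version of A is preferred to the
% n-fold version of B for every n, and the variance of either sum never
% enters the comparison.
```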
Thanks, I looked at the discussion you linked with interest. I think I understand my confusion a little better, but I am still confused.
I can walk through the proof of the VNM theorem and see where the independence axiom comes in and how it leads to u(A)=u(B) in my example. The independence axiom itself feels unassailable to me, and I am not quite sure this is a strong enough argument against it. Maybe a more direct argument from the independence axiom to an unintuitive result would be more convincing.
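For reference, the textbook statement of the independence axiom (my paraphrase, not quoted from the thread): for lotteries L, M, N and any p in (0, 1],

```latex
% Independence axiom: for all lotteries L, M, N and all p in (0, 1],
L \succ M
\;\Longleftrightarrow\;
pL + (1-p)N \;\succ\; pM + (1-p)N.
% Mixing both sides with the same third lottery N, in the same proportion,
% should not reverse the preference.
```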
Maybe the answer is to read Dawes' book; thanks for the reference.
I find it confusing that the only thing that matters to a rational agent is the expectation of utility, i.e., that the details of the probability distribution of utilities do not matter.
I understand that the VNM theorem proves that from what seem like reasonable axioms, but on the other hand it seems to me that there is nothing irrational about having different risk preferences. Consider the following two scenarios:
According to expected utility, it is...
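The two scenarios are truncated above; a minimal hypothetical pair of the kind at stake (not the original numbers) might look like this:

```latex
% Hypothetical illustration (the original scenarios are truncated above):
%   Scenario 1: an outcome worth utility 1000, with certainty.
%   Scenario 2: an outcome worth utility 2000 with probability 1/2, else 0.
E[u_1] = 1000, \qquad
E[u_2] = \tfrac{1}{2}\cdot 2000 + \tfrac{1}{2}\cdot 0 = 1000.
% Equal expected utility, so an expected-utility maximizer is exactly
% indifferent, even though the variances (0 versus 10^6) differ wildly.
```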
Even if it actually turns out that "Super human AI will run on computers not much more expensive than personal computers" (which DeepSeek-R1 made marginally more plausible, but I'd say is still unlikely), it remains true that there will be very large returns to running 100 superhuman AIs instead of 1, or maybe 1 that's 100 times larger and smarter.
In other words, demand for hardware capable of running AIs will be very elastic. I don't see reductions in the cost of running AIs of a given level being bad for NVDA's expected future cash flows. Those reductions don't mean we'll run the same "amount of AI" on less hardware; it will be closer to more AI on the same amount of hardware.
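A toy way to see the elasticity point, with entirely made-up numbers (both the efficiency gain and the demand response are assumptions, not forecasts):

```python
# Toy elasticity sketch with made-up numbers: if efficiency gains cut the
# hardware needed per "unit of AI", but demand for AI grows by more than the
# efficiency gain, total hardware spend still goes up.

efficiency_gain = 5.0        # assumption: 5x less hardware per unit of AI
demand_multiplier = 20.0     # assumption: elastic demand, 20x more AI wanted

baseline_hardware_spend = 1.0                      # normalized
new_hardware_spend = baseline_hardware_spend * demand_multiplier / efficiency_gain

print(f"Hardware spend changes by {new_hardware_spend / baseline_hardware_spend:.1f}x")
# -> 4.0x here: cheaper AI per unit, yet more total hardware bought.
```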