Epistemic status: rather controversial and not very well researched :) Not super novel, I assume, but a cursory look did not bring up any earlier posts; please feel free to link some.

Intuition pump: a bigger brain does not necessarily imply a smarter creature. Apes are apparently smarter than elephants, and dolphins appear smarter than blue whales. There is definitely a correlation, but the relationship is far from strict.

Starting point: intelligence is roughly equivalent to the degree of abstraction of the world models (detecting Dennett's "real patterns" at increasingly higher levels). Humans are much better at abstract thought than other animals, and throughout the natural and artificial world, a creature's ability to find higher-level patterns in the world (including in itself) tracks its intelligence.

A non-novel point: abstraction is compression. Specifically, abstraction is nothing but a lossy compression of the world model, be it the actual physical world or the world of ideas.
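A toy sketch of the compression framing (the data, model, and byte counts below are made up purely for illustration): instead of memorizing ten thousand raw observations, keep a two-parameter rule plus coarsely quantized residuals. The description shrinks by an order of magnitude at the price of a small reconstruction error; dropping the residuals entirely would be a further, more aggressive level of abstraction.

```python
# Toy illustration: abstraction as lossy compression (all numbers arbitrary).
# Instead of storing 10,000 raw observations, keep a 2-parameter "rule"
# plus coarsely quantized residuals -- far fewer bytes, slightly lossy.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 100, 10_000)
y = 3.7 * x - 12.0 + rng.normal(scale=2.0, size=x.size)   # "the world"

# Raw storage: every observation as a float64.
raw_bytes = y.astype(np.float64).nbytes

# "Abstraction": a fitted linear rule plus residuals quantized to int8.
slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)
step = np.abs(residuals).max() / 127                       # int8 quantization step
codes = np.round(residuals / step).astype(np.int8)
compressed_bytes = 2 * 8 + 8 + codes.nbytes                # 2 params + step + codes

reconstruction = slope * x + intercept + codes * step
print(f"raw: {raw_bytes} B, compressed: {compressed_bytes} B "
      f"({raw_bytes / compressed_bytes:.1f}x smaller), "
      f"max error: {np.abs(y - reconstruction).max():.3f}")
```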

An obvious point: generating good abstractions is expensive. If your existing mental capacity is enough to get by, there is no reason to expend resources on generating better abstractions. And if you have room to grow your brain and add more of the same-level patterns, that is cheaper than building better abstractions within the same brain size.

A less obvious point: building good abstractions is hard. It is what theoretical research consists of, and it is what separates the likes of Einstein, Dawkins and Nash from the rest of us.

An implication: size and compute restrictions, combined with the need to cope with novel situations, facilitate abstraction building.

A just-so story: human brain size is (currently) constrained by head size, which is constrained by hip size due to walking upright, which is constrained by body mass due to resource availability and, well, gravity, so abstraction building became a good way to deal with a changing environment.

Current AI state: LLMs currently get smarter by getting larger and training more. There are always compute and size pressures, but they are not hard constraints, more like costs. Growing to get more successful, the elephant way rather than the human way, seems like a winning strategy at this point.
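To put a rough number on "getting smarter by getting larger": a back-of-the-envelope sketch using the parametric loss fit L(N, D) = E + A/N^alpha + B/D^beta from Hoffmann et al. (2022), the "Chinchilla" paper. The constants are the published point estimates and the model sizes are arbitrary; the only point is that the curve keeps improving as parameters and tokens grow, so scaling up remains an option as long as someone pays for it.

```python
# Back-of-the-envelope sketch: loss under the Chinchilla-style parametric fit
# L(N, D) = E + A / N**alpha + B / D**beta  (Hoffmann et al., 2022).
# Constants are the published point estimates; model sizes are arbitrary.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for N parameters trained on D tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Loss keeps falling as both parameters and tokens grow -- no hard ceiling,
# just an ever-growing compute bill.
for n_params, n_tokens in [(7e9, 1.4e11), (70e9, 1.4e12), (700e9, 1.4e13)]:
    print(f"N={n_params:.0e}, D={n_tokens:.0e} -> "
          f"loss ~ {chinchilla_loss(n_params, n_tokens):.2f}")
```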

Absolute constraints spark abstraction building: the vetoed California bill SB 1047 "covers AI models with training compute over 10^26 integer or floating-point operations and a cost of over $100 million. If a covered model is fine-tuned using more than $10 million, the resulting model is also covered," according to Wikipedia. Had the bill been signed, it would have created severe enough pressure to do more with less, pushing labs to focus on building better and better abstractions once the limits were hit.
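For a sense of where the 10^26 FLOP threshold sits, here is a rough check using the common FLOPs ~ 6 * N * D approximation for dense transformer training; the parameter and token counts below are hypothetical examples, not figures for any real model.

```python
# Rough check of the SB 1047 compute threshold using the common
# training FLOPs ~= 6 * N_params * N_tokens approximation for dense
# transformers. The model sizes below are hypothetical, not real runs.
THRESHOLD_FLOPS = 1e26   # SB 1047 covered-model compute threshold

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

for n_params, n_tokens in [(70e9, 2e12), (400e9, 15e12), (1.5e12, 20e12)]:
    flops = training_flops(n_params, n_tokens)
    covered = "covered" if flops > THRESHOLD_FLOPS else "not covered"
    print(f"{n_params:.0e} params x {n_tokens:.0e} tokens -> "
          f"{flops:.1e} FLOPs ({covered})")
```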

A speculation: much better abstractions would smooth out the "jagged frontier" and reduce or eliminate the weak spots of the current models, enabling the jump from "rule interpolation" to "rule invention" (in François Chollet's terms), the very weakness he and other skeptics point to in the current models.

The danger: once the jagged frontier is smooth enough to enable "rule invention", we get to the "foom"-like zone Eliezer has been cautioning about. 

Conclusion: currently it does not look like there are skull-and-hip-size restrictions on AI, so even with the next few frontier models we are probably not at the point where the emerging abstraction level matches that of (smartest) humans. But this may not last.

Comments:

Had the bill been signed, it would have created severe enough pressure to do more with less, pushing labs to focus on building better and better abstractions once the limits were hit.

Ok, I see the argument. But even without such legislation, the costs of large training runs create major incentives to build better abstractions.

Shmi:

Right, eventually it will. But abstraction building is very hard! If you have any other option, like growing in size, I would expect it to be taken first.

I guess I should be a bit more precise. Abstraction building at the same level as before is probably not very hard. But going up a level is basically equivalent to inventing a new way of compressing knowledge, which is a qualitative leap.

Does this summary capture the core argument? Physical constraints on the human brain contributed to its success relative to other animals, because it had to "do more with less" by using abstraction. Analogously, constraints on AI compute or size will encourage more abstraction, increasing the likelihood of "foom" danger.

Plus, we're already well into the misuse danger zone... And heading deeper fast.

We are the mouse fearing the cat of AGI, but everything we are doing teaches the kittens how to catch mice.