Hey, thanks for taking the time to answer!
First, I want to make clear that I don’t believe LLMs to be mere stochastic parrots, nor do I doubt that they are capable of world modeling. And you are right to request some more specifically stated beliefs and predictions; in this comment, I have attempted to improve on that, with limited success.
There are two main pillars in my world model that make me, even in light of the massive gains in capabilities we have seen in the last seven years, still skeptical of the transformer architecture scaling straight to AGI.
The performance of o1 in the first linked paper is indeed impressive, especially on what they call Mystery Blocksworld. I would not have expected this level of improvement. Do you know of any material that goes into more detail on the RL training of o1?
I do take issue with the conclusion that reasoning in the confines of toy problems is sufficient to scale directly to AGI, though. The disagreement might stem from differing definitions of AGI. LLMs (or LRMs) exist in an environment provided by humans, including the means to translate LLM output into "acti...
I'd put a reasonably high probability (5%) on orcas and several other species having all the necessary raw mental capacity to be "uplifted" within just a few (<20) generations, using technology (in the wider sense) that has been available for a long time. By uplifted I mean here the ability to intellectually engage with us on a near-equal or even equal footing, to create culture, and to actively shape their own destiny. Humans have been training, selecting, and shaping other animals since before the dawn of history. Whenever we did so, it was with the goal of improving...
Then I misunderstood your original comment, sorry. As a different commenter wrote, the obvious solution would be to only engage with interesting people. But that is, of course, unworkable in practice. And "social grooming" nearly always involves some level of talking. A curse of our language abilities, I guess. Other social animals don't have that particular problem.
The next best solution would be higher efficiency, more socializing bang for your word-count buck, so to speak: shorter conversations for the same social effect. This is not usually a focus of anything billed as a conversation guide, for obvious reasons. But there are some methods aimed at different goals that, in my experience, also help with this as a side effect.
I understand: for someone with a strong drive to solve hard problems, there's an urge for conversations to serve a function, to exchange information with your interlocutor so things can get done. There's much to do, and communication is already painfully inefficient at its best.
The thing is, I don't think the free-association game is inefficient, if one is skilled at it. It's also not all that free. The reason it is something humans "developed" is that it is the most efficient way to exchange rough but extensive models of our minds with others via natural ...
That’s a fantastic memory aid for this concept, much appreciated! Crafting games in general give ample examples for internalizing this kind of bootstrap mentality, and for quickly scaling to the next anvil-equivalent. As you touched upon, real life has a deep crafting tree, with anvil problems upon anvil problems. Something that took me far too long to learn: if you've got your anvil but still don't find yourself where you want to be, it pays to find the next anvil problem quickly. If you still have a lot of distance to cover, don't get bogged down by things th...