james leeming
james leeming has not written any posts yet.

The medieval lord doesn't get to see New York. He's asking about the things he knows well: troops, castles, woodland, farmland. Towns and cities are small and less significant, remember? All societies are agrarian! He doesn't get to see what we want to show him; he's asking us questions and we're answering, wishing we could say 'yes, but you should be asking about our arsenal of nuclear submarines, each carrying 12 missiles with 8 warheads apiece, able to incinerate an entire army anywhere in the world within 30 minutes'.
We're looking at stars, the things we know well. Stars, black holes, planets and dust are 5% of the universe. The entire visible universe...
Why do we think aliens would do things with stars? How can we be sure that our reasoning isn't similar to that of a medieval nobleman trying to gauge the power of the US today?
"How many castles are in their realm? No castles? What, they can field hundreds of thousands of men-at-arms but no horse? What sort of backwards land is this? Is this another realm like the Aztec empire I heard rumours about? Enormous gold reserves and huge armies but none of the refinements of the civilized world! Let's invade!"
You can see how they would make incorrect assumptions if they got to ask the questions!
95% of the universe is 'dark'. What...
Surely a transformer-based architecture is not what superintelligences will be running on. Transformers have many limitations: the context window, for one. Can it be made large enough for what a superintelligence would need? What about learning and self-improvement after training? Scaling and improving transformers might be a path to superintelligence, but it seems like a very inefficient route.
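(A rough way to see why the context window is a bottleneck: dense self-attention compares every token with every other token, so per-layer cost grows roughly quadratically with the window size. The little sketch below is just an illustration of that scaling under assumed, made-up numbers; it's not from any particular model.)

```python
# Illustrative sketch only: rough per-layer cost of dense self-attention,
# assuming a hypothetical model width d_model. Real models differ in detail,
# but the quadratic dependence on sequence length is the point.

def attention_cost(seq_len: int, d_model: int = 4096) -> dict:
    """Approximate cost of one dense self-attention layer."""
    pairwise_scores = seq_len * seq_len            # one score per token pair
    approx_flops = 2 * seq_len * seq_len * d_model  # QK^T plus weighting of V (rough)
    return {"pairwise_scores": pairwise_scores, "approx_flops": approx_flops}

for n in (4_096, 32_768, 1_000_000):
    print(n, attention_cost(n))

# Every 10x increase in context length costs roughly 100x more attention compute,
# which is one reason naively scaling the window looks like an inefficient route.
```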
We've demonstrated that roughly human-level intelligence can, in many ways, be achieved with the Transformer architecture. But what if there's something far better than Transformers, just as Transformers are superior to what we were using before? We shouldn't rule out someone publishing a landmark paper with a better architecture. The last landmark paper came out in 2017!
And there might well be discontinuities in performance. Pre-Stable Diffusion AI art was pretty awful, especially faces. It went from awful to artful in a matter of months, not years.