jacob_cannell comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (123)
We have the seed - it's called physics, and we certainly don't need to run it from start to civilization!
On the one hand, I was discussing sci-fi scenarios that have an intrinsic explanation for a small human population (such as a sleeper ship colony encountering a new system).
On the other hand, you can do big partial simulations of our world, and if you don't have enough AIs to play all the humans, you could use simpler simulacra to fill in.
Eventually, with enough Moore's Law, you could run a large world on its own, and run it considerably faster than real time. But you still wouldn't need to start that long ago - maybe only a few generations.
Could != would. You grossly underestimate how impossibly difficult this would be for them.
Again - how do you know you are not in a sim?
You misunderstand me. What I'm confident about is that I'm not in a sim written by agents who are dumber than me.
Not even agents with really fast computers?
You're right, of course. I'm not in a sim written by agents dumber than me in a world where computation has noticeable costs (negentropy, etc.).
How do you measure that intelligence?
What I'm trying to show is a set of techniques by which a civilization could spawn simulated sub-civilizations such that the total effective intelligence capacity is mainly in the simulations. That doesn't have anything to do with the maximum intelligence of individuals in the sim.
Intelligence is not magic. It has strict computational limits.
A small population of guards can control a much larger population of prisoners. The same principle applies here. It's all about leverage. And creating an entire sim universe is a massive, massive lever of control. Ultimate control.