Rocketeer
Rocketeer has not written any posts yet.

I apologize as this is a theory I'm still working out myself.
No worries! Hashing out the details in our theories is always fun, and getting another perspective should be encouraged.
With that said, I think this theory could still use some more work.
The torrent of information actually transfers FASTER the more seeds / leeches there are.
That's because there are more computers in use, yes. Adding more physical computers often increases speeds, but that's not an ironclad rule. Changing how the host and client interact without adding more computers is unlikely to help much unless you're fixing a mistake with the initial setup, and splitting one program on one supercomputer...
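A toy way to see the seeds point (the linear-then-capped model and all the numbers here are my own simplification, not real BitTorrent mechanics): aggregate download speed grows with the number of seeds until the client's own bandwidth becomes the bottleneck, at which point adding seeds stops helping.

```python
# Toy model: each seed contributes some upload bandwidth, but the
# client can't download faster than its own cap. Units are arbitrary.
def download_speed(num_seeds, seed_upload=1.0, client_cap=20.0):
    """Effective download speed given num_seeds seeds."""
    return min(num_seeds * seed_upload, client_cap)

print(download_speed(5))    # 5.0 -- five seeds, well under the cap
print(download_speed(100))  # 20.0 -- capped by the client's bandwidth
```

This is also why "more seeds is faster" isn't an ironclad rule: past the cap, the extra computers do nothing for this particular client.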
Several comments, possibly motivated by my not entirely understanding your idea here.
It doesn't seem obvious to me that it's possible to reduce the computational difficulty of a simulation by "offloading" that difficulty onto another part of the simulation. You're also a little unclear about what you mean by "computing power" to begin with.
Every AI within the simulation would get fragmented data
OK, what do you mean by this? Do you mean that each agent gets inputs from only some of the space, i.e. fog of war? Under that interpretation, it's trivially true - I do not know everything.
Do you mean that each agent is itself computing some fraction of the simulation...
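To make the "fog of war" interpretation concrete (a sketch of my reading, with an arbitrary grid world and window size, not anything from your setup): each agent only ever observes a local window of the full state, which is trivially true of us and says nothing about who is computing what.

```python
# Each agent at (x, y) sees only a (2*radius+1)-wide window of the
# world grid, clipped at the edges -- "fog of war" over observations.
def observe(world, x, y, radius=1):
    """Return the sub-grid visible to the agent at (x, y)."""
    return [row[max(0, x - radius): x + radius + 1]
            for row in world[max(0, y - radius): y + radius + 1]]

world = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
print(observe(world, 0, 0))  # [[0, 1], [4, 5]] -- a corner agent's view
```

The second interpretation (each agent computing a fraction of the simulation) would be a claim about the substrate, not the observations, which is why I'm asking which one you mean.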
Even given entirely aleatoric risk, it's not clear to me that the compounding effect is necessary.
Suppose my model for AI risk is a very naive one - when the AI is first turned on, its values are either completely aligned (95% chance) or unaligned (5% chance). Under this model, one month after turning on the AI, I'll have a 5% chance of being dead, and a 95% chance of being an immortal demigod. Another month, year, or decade, and there's still a 5% chance after...
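The arithmetic behind the naive model, spelled out (the compounding alternative and its 5%-per-month hazard are hypothetical numbers I'm using for contrast): under the one-shot model the risk is a single draw at switch-on, so the probability of being dead is flat over time, whereas a per-period hazard compounds.

```python
P_UNALIGNED = 0.05      # one-shot chance the AI's values are unaligned
MONTHLY_HAZARD = 0.05   # hypothetical compounding alternative

def p_dead_one_shot(months):
    # The die was cast at switch-on; elapsed time doesn't matter.
    return P_UNALIGNED

def p_dead_compounding(months):
    # Independent 5% hazard each month compounds over time.
    return 1 - (1 - MONTHLY_HAZARD) ** months

print(p_dead_one_shot(12))               # 0.05
print(round(p_dead_compounding(12), 2))  # 0.46
```

So under aleatoric risk of the one-shot kind, there's nothing to compound: a month, a year, or a decade later, it's still 5%.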