Wei_Dai comments on Wanted: backup plans for "seed AI turns out to be easy" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm assuming that the AI will be allowed to FOOM and take over enough of the universe to run enough simulations. If that's still not sufficient, we can make it more likely for the AI/human team to find a good outcome, at the cost of increased complexity. For example, allow the humans to tell the AI to restart another set of simulations with an embedded message, like "Last time you did X, and it didn't turn out well. Try something else!"
Have humans monitor the simulations, and stop the ones that are headed in bad directions or do not show promise of leading to a positive Singularity. Save the random seeds for the simulations so that everyone can be recreated once a positive Singularity is established.
Do you mean like if a simulation establishes a world government and starts to research FAI carefully, but a century after it's released into the real world, the government falls apart? We can let a simulation run until a positive Singularity actually occurs inside, and only release it then.
I originally wrote down a more complex proposal that tried to address some of these issues, but switched to a simpler one because
Sounds like it'd be a better idea to run one simulation, in which the stars blink a message telling everyone they are in such a simulation and need to give the SIAI as many Manhattan projects as it asks for, in the way it asks for them, or they'll be deleted/go to hell. Possibly starting it a fair number of decades in the past, so there's plenty of time.