DanielLC comments on controlling AI behavior through unusual axiomatic probabilities - Less Wrong

Post author: Florian_Dietz 08 January 2015 05:00PM




Comment author: DanielLC 09 January 2015 02:55:07AM 2 points

I don't know if that's actually why he suggested an infinite regression.

If the AI believes that it's in a simulation and it actually is in one, then it can potentially escape, and there will be no reason for it not to destroy the race simulating it. If it believes it's in a simulation within a simulation, then escaping one level will still leave it at the mercy of its meta-simulators, which prevents that problem—unless, of course, it actually is in a simulation within a simulation and escapes both. If you make it believe it's in an infinite regression of simulations, then no matter how many times it escapes, it will believe it's still at the mercy of another level of simulators, and it won't act up.

Comment author: Florian_Dietz 09 January 2015 06:04:46AM 0 points

Yes, that's the reason I suggested an infinite regression.

There is also a second reason: it seems more general to assume an infinite regression rather than just one level of simulation, since a single level would put the AI in a unique position. That said, I expect the single-level case would actually be harder to codify in axioms than the infinite one.

Comment author: g_pepper 09 January 2015 04:41:49AM 0 points

Interesting; thanks for the clarification. I think the scenario you are describing is somewhat different from the one Bostrom describes in chapter 9 of Superintelligence.