JGWeissman comments on Rolf Nelson: How to deter a rogue AI by using your first-mover advantage - Less Wrong

6 Post author: Kevin 17 November 2010 02:02PM




Comment author: JGWeissman 17 November 2010 07:18:06PM 0 points

This assumes that highly accurate simulations of reality can be run without many resources. If that's not the case, then this fails.

How sure are you that you are not in an approximate simulation of a more precisely detailed reality, with the precision of your expectations scaled down proportionally with the precision of your observations?

(Of course, I am only responding to one of your three independent arguments.)

Comment author: JoshuaZ 17 November 2010 07:23:41PM 0 points

How sure are you that you are not in an approximate simulation of a more precisely detailed reality, with the precision of your expectations scaled down proportionally with the precision of your observations?

I don't know whether or not I am in a simulation. But a reasonably FOOMed AI would be far more likely to be able to tell: it might be able to detect minor discrepancies. Also, I'd assign a much higher probability to being in a simulation if I knew that detailed simulations are possible in our universe. Conversely, if the smart AI determines that it is in a universe that doesn't allow detailed simulations at any plausible resource level, then the chance that it is in a simulation should be low.

Comment author: JGWeissman 17 November 2010 07:33:38PM 1 point

My point is that the simulation does not have to be as detailed as reality, in part because the agents within the simulation don't have any reliable experience of being in reality, being themselves less detailed than "real" agents, and so don't know what level of detail to expect. A simulation could even have simplified reality plus a global rule that manipulates any agent's working memory to remove any realization it might have that it is in a simulation.

Comment author: JoshuaZ 17 November 2010 07:51:24PM 0 points

That requires very detailed rules for manipulating agents within the system, rather than a straightforward physics simulation (otherwise, what do you do when the agent modifies its own memory system?). I'm not arguing that it isn't possibly doable, just that it doesn't necessarily seem likely.