MichaelGR comments on The AI in a box boxes you - Less Wrong

102 Post author: Stuart_Armstrong 02 February 2010 10:10AM




Comment author: rosyatrandom 02 February 2010 03:29:05PM 28 points

If the AI can create a perfect simulation of you and run several million simultaneous copies in something like real time, then it is powerful enough to determine through trial and error exactly what it needs to say to get you to release it.

Comment author: MichaelGR 03 February 2010 09:23:18PM 4 points

This raises the question of how the AI could simulate you in the first place if its only link to the external world is a text-only terminal. That doesn't seem like enough data to go on.

Makes for a very scary sci-fi scenario, but I doubt that this situation could actually happen if the AI really is in a box.

Comment author: Amanojack 31 March 2010 01:25:27PM 5 points

Indeed, a similar point seems to apply to the whole anti-boxing argument. Are we really prepared to say that super-intelligence implies being able to extrapolate anything from a tiny number of data points?

It sounds a bit too much like the claim that a sufficiently intelligent being could "make A = ~A", or other such meaninglessness.

Hyperintelligence != magic

Comment author: jacob_cannell 04 February 2011 05:36:51AM 0 points

Yes, but the AI could take over the world after escaping, and given a Singularity, creating perfect simulations should then be possible.

So really this example makes more sense as a threat about what the AI could do in the future, rather than something it can do while still boxed.