passive_fist comments on Link: Elon Musk wants gov't oversight for AI - Less Wrong

9 Post author: polymathwannabe 28 October 2014 02:15AM

You are viewing a comment permalink. View the original post to see all comments and the full post content.

Comments (44)

You are viewing a single comment's thread. Show more comments above.

Comment author: passive_fist 29 October 2014 05:38:34AM 2 points [-]

As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.

But hook it up to some cameras and microphones, and then you have the potential for something that could wind up being dangerous.

So I'd say there's no reason to speculate about 1000x computing power. Just stick it in a virtual world with no human communication and let in run for a while and see if it shows signs of the kind of intelligence that would be worrying.

(The AI Box argument does not apply here)

The challenge, of course, is coming up with a virtual world that is complex enough to be able to discern high intelligence while being different enough from the real world that it could not apply knowledge gained in the simulation to the real world.

Comment author: FeepingCreature 29 October 2014 02:18:23PM *  3 points [-]

As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.

Note: given really really large computational resources, an AI can always "break out by breaking in"; generate some physical laws ordered by complexity, look what sort of intelligent life arises in those cosmologies, craft an attack that works against it on the assumption that it's running the AI in a box, repeat for the hundred simplest cosmologies. This potentially needs a lot of computing power, but it might take very little depending on how determined our minds are by our physics.

Comment author: passive_fist 29 October 2014 10:14:59PM 3 points [-]

I'd say that if it started running a huge number of simulations of physical realities, and analyzing the intelligence of beings that resulted, that would fall squarely into the 'worrying level of intelligence' category.

In fact if it started attempting to alter the physics of the virtual world it's in at any level - either by finding some in-game way to hack the virtual world, or by running simulations of alternate physics - that would be incredibly worrying.