DanArmak comments on Reply to Holden on 'Tool AI' - Less Wrong

94 Post author: Eliezer_Yudkowsky 12 June 2012 06:00PM




Comment author: DanArmak 13 June 2012 08:29:43PM 1 point

Surely NASA code is thoroughly tested in simulation runs. It's the equivalent of having a known-perfect method of boxing an AI.

Comment author: asparisi 14 June 2012 11:18:31PM 0 points

Huh. This raises the question of whether it would be possible to simulate the AGI code in a test run without the usual risks. Maybe create some failsafe, invisible to the AGI, that destroys it if it is "let out of the box" — or (to incorporate Holden's suggestion, since it just came to me) give it a "tool mode" in which the AGI's agent-properties (decision making, goal setting, etc.) are non-functional.

Comment author: Eliezer_Yudkowsky 14 June 2012 09:26:34PM -1 points

But NASA code can't check itself - there's no attempt at having an AI go over it.

Comment author: DanArmak 15 June 2012 06:45:40AM 0 points

Yes, but even ordinary simulation testing produces software that's much better on its first real run than software that has never been run at all.