thomblake comments on Open Thread: March 2010, part 2 - Less Wrong

4 Post author: RobinZ 11 March 2010 05:25PM


Comment author: thomblake 12 March 2010 07:26:38PM 0 points

The problems involved in creating ethical military robots are vastly different from those involved in general AI. Ron Arkin's Governing Lethal Behavior in Autonomous Robots does a good job of describing how one should think when building such a thing. Basically, there are rules for war, and the trick is just to implement those in the robot; there's very little judgement left over. To hear him explain it, it doesn't even sound like a very hard problem.

Comment author: SilasBarta 12 March 2010 07:41:20PM 2 points

To hear him explain it, it doesn't even sound like a very hard problem.

Then I'm not sure he understands the problem. How does the robot tell the difference between an enemy soldier and a noncombatant? When they're surrendering? When they're dead or severely wounded?

The rules of war themselves are fairly algorithmic, but applying them is a different story.

Comment author: thomblake 12 March 2010 07:48:06PM 3 points

Well, there's a bit of bracketing at work here. Distinguishing between an enemy soldier and a noncombatant isn't an ethical problem. He does note that determining when a soldier is surrendering is difficult, and points out the places where there really is an ethical difficulty (for example, someone who surrenders and then seems to be aggressive).