JoshuaZ comments on What if AI doesn't quite go FOOM? - Less Wrong

11 Post author: Mass_Driver 20 June 2010 12:03AM




Comment author: JoshuaZ 20 June 2010 04:59:48PM 2 points

Ah, OK. Then we don't disagree substantially. I just consider the two possibilities (problems with the encryption method, and errors in implementation) to be close enough in probability, given the data we currently have, that I can't make a decent judgment on the matter, whereas you seem to think that human error is substantially more likely.

Comment author: wedrifid 20 June 2010 05:28:57PM 2 points

Yes, it sounds like just a difference in degree.

This subject deserves a whole chapter of Harry Potter fanfiction: the need for Constant Vigilance when guarding against an enemy that is resourceful, clever, more powerful, and tireless. It would conclude with Mad-Eye Moody getting killed. Constant Vigilance is futile when you are a human; the only option is to kill the enemy once and for all, eliminating that dependence.

Comment author: gwern 20 June 2010 06:28:10PM 2 points

I don't think MoR really needs a chapter on that.

I mean, canon Harry Potter does that already: Mad-Eye (the real one) is captured by Dark forces before we ever meet him, tortured routinely, and two or three years later is killed by them.

(And of course, canon Mad-Eye had no chance of actually killing Voldemort once and for all, so Constant Vigilance was all he could do.)

Comment author: Douglas_Knight 20 June 2010 10:38:34PM 1 point

More examples: (1) people have a history of reusing one-time pads; (2) side-channel attacks. The latter is a big deal that doesn't really fit the dichotomy.
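The one-time pad point can be made concrete: the pad itself is information-theoretically secure, but if the same pad encrypts two messages, XORing the two ciphertexts cancels the pad and leaks the XOR of the plaintexts, which a known or guessed plaintext then unravels completely. A minimal sketch (the messages and helper function here are illustrative, not from the thread):

```python
import os


def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))


p1 = b"attack at dawn"
p2 = b"retreat at six"
pad = os.urandom(len(p1))  # a genuine one-time pad, but reused below

c1 = xor_bytes(p1, pad)
c2 = xor_bytes(p2, pad)

# The pad cancels out: c1 XOR c2 == p1 XOR p2, with no key material left.
leaked = xor_bytes(c1, c2)
assert leaked == xor_bytes(p1, p2)

# Knowing (or correctly guessing) one plaintext reveals the other outright.
recovered = xor_bytes(leaked, p1)
assert recovered == p2
```

This is exactly the implementation-error failure mode under discussion: the cipher is flawless, and human reuse of the key breaks it anyway.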