paulfchristiano comments on Cryptographic Boxes for Unfriendly AI - Less Wrong

24 Post author: paulfchristiano 18 December 2010 08:28AM




Comment author: ChristianKl 31 January 2013 05:46:32PM -1 points [-]

You assume that the encryption you implemented works the same way as your mathematical proof says it does, and that your implementation has no bugs.

In real life, encryption frequently gets implemented with bugs that compromise it.
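A classic example of this gap between proof and implementation is keystream reuse: the math of XOR-based stream encryption is perfectly sound, but the proof assumes a fresh keystream per message, and a one-line implementation bug silently violates that assumption. This hypothetical sketch (the messages and variable names are illustrative, not from any real system) shows an eavesdropper exploiting the bug without ever touching the key:

```python
# Hypothetical sketch: mathematically sound XOR stream encryption made
# insecure by an implementation bug -- reusing one keystream for two
# messages, which the security proof assumed would never happen.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)  # BUG: generated once, reused for both messages

m1 = b"attack at dawn!!"
m2 = b"retreat at noon!"

c1 = xor(m1, keystream)
c2 = xor(m2, keystream)

# An eavesdropper holding only the two ciphertexts recovers m1 XOR m2,
# leaking plaintext structure -- the key itself is never needed.
leak = xor(c1, c2)
assert leak == xor(m1, m2)
```

The point is not this particular bug but the pattern: the compromise lives entirely in the implementation, so no amount of design-level proof would have caught it.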

Comment author: paulfchristiano 31 January 2013 08:20:27PM 1 point [-]

It also happens frequently that it doesn't.

This is a serious concern with any cryptographic system, but it's unrelated to the accusation that cryptography is security through confusingness. The game is still you vs. the cryptographic design problem, not you vs. an AI.

Comment author: ChristianKl 31 January 2013 08:46:05PM -1 points [-]

It also happens frequently that it doesn't.

How do you know that a system that's currently working is bug free?

On a more general note, most trusted cryptographic systems have source code that gets checked by many people. If you write a cryptographic tool specifically for the purpose of sandboxing this AI, the crypto tool is likely to be reviewed by fewer people.

A system that gives you a simple bit flag to deny access is easier to check for bugs; it's less confusing.
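To make the contrast concrete, here is a hypothetical sketch (names are illustrative) of such a gatekeeper: its entire security-relevant surface is a single boolean comparison, so an auditor can verify it at a glance, unlike a cryptographic stack with key handling, randomness, and protocol state to review.

```python
# Hypothetical sketch: a deny-by-default gate whose whole attack
# surface is one boolean check -- trivially auditable by inspection.
ALLOW_OUTPUT = False  # the single bit of policy

def request_output(message: str) -> str:
    # The only security-relevant branch in the system.
    if not ALLOW_OUTPUT:
        return "denied"
    return message

print(request_output("hello world"))  # -> denied
```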