timtyler comments on Cryptographic Boxes for Unfriendly AI - Less Wrong

Post author: paulfchristiano 18 December 2010 08:28AM




Comment author: JamesAndrix 19 December 2010 07:47:09PM 0 points

I just wanted to eliminate all such possible concerns so that no one could say "there will always be something you haven't thought of."

I can still say that. Layering sandboxes doesn't mean an AI can't find a flaw in each layer.

When you're up against an opponent that might understand its own systems on a much deeper level than you, adding barriers that look confusing to you is not the appropriate response.

A proven JVM would be a good method of isolation. Cryptography gives a false sense of security.

I have far more confidence in a system that simply denies access based on a bit flag than in a cryptosystem performing as advertised. If you can't do the former correctly, then I definitely don't trust you with the latter.
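To make the contrast concrete, here is a minimal sketch (not from the original comments; the `Sandbox` class and its flag name are hypothetical) of the kind of bit-flag access check being described: the entire security argument is one boolean test, with nothing like the assumptions a cryptosystem rests on.

```python
class Sandbox:
    """Hypothetical sandbox whose entire access policy is a single bit."""

    def __init__(self, allow_external_io: bool = False):
        # The bit flag: False means all external access is denied.
        self.allow_external_io = allow_external_io

    def request_external_io(self) -> bool:
        # The whole policy fits in one line: deny unless explicitly flagged.
        return self.allow_external_io


boxed = Sandbox()  # flag defaults to False: access denied
```

The point of the sketch is auditability: a reviewer can verify the one-line check by inspection, whereas verifying that a cryptosystem "performs as advertised" requires trusting its mathematical assumptions and its implementation.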

Comment author: timtyler 19 December 2010 07:58:08PM 2 points

Cryptography gives a false sense of security.

That is only true for those who don't understand it.