timtyler comments on Cryptographic Boxes for Unfriendly AI - Less Wrong

Post author: paulfchristiano 18 December 2010 08:28AM




Comment author: datadataeverywhere 18 December 2010 09:24:02AM, 8 points

I don't think you understand what a fully homomorphic encryption system is.

In particular, if you have a homomorphic encryption of the source code of a program you can run that program without learning anything about it (except how long the source is), but the output of the program will be homomorphically encrypted.

Barak et al. proved that you cannot encrypt source code and have someone else run it without decrypting it. Gentry's results do not contradict that proof. A fully homomorphic encryption scheme allows one to encrypt data and have that data operated on by a program, without that program being able to derive any information about what the data actually is.
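To make the homomorphic property concrete, here is a toy Python sketch (mine, not from the post): textbook, unpadded RSA is a classic example of *partial* homomorphism, because multiplying two ciphertexts yields a ciphertext of the product. The tiny parameters and lack of padding make this completely insecure; it is illustration only.

```python
# Textbook RSA is multiplicatively homomorphic:
#   Enc(a) * Enc(b) mod n  ==  Enc(a * b mod n)
# Tiny parameters for illustration only -- never use in practice.
p, q = 61, 53
n = p * q                          # modulus: 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 9
c = (enc(a) * enc(b)) % n  # computed on ciphertexts alone
assert dec(c) == a * b     # 63: the product, never seen in the clear
```

Fully homomorphic encryption (Gentry's result) extends this to arbitrary computations on ciphertexts, both addition and multiplication, which is what would let encrypted data be processed without ever being decrypted.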

If it weren't for that flaw, I would be focusing on others; the assumption that we have a source-code verifier (Oracle) that can tell us definitively that an AI is friendly implies, at minimum, that we have an incredibly precise and completely accurate mathematical model of friendliness. Start on that problem first.

Looking this far out, I'm not sure I trust these encryption schemes anyway. For instance, are you positive our uFAI is of the non-quantum variety?

What do you mean by

If you want to be even more fantastically paranoid [...] you will actually have to shred the key, guaranteeing that an AI will have to interact with the shredded key to recover it. If you do this, you have to use another new primitive to implement the final revelation of the friendly AI's message.

You're not suggesting destroying all copies of the private key and then expecting to decrypt the answer yourself, are you? That makes no sense, but I can't figure out what else you could mean.

Comment author: timtyler 18 December 2010 11:20:37AM, 1 point

The "On the (Im)possibility of Obfuscating Programs" link is interesting, IMO. Haven't finished it yet, and am still rather sceptical. Obfuscation is not really like a cryptographic primitive - but it is not necessarily trivial either. The paper itself says:

Our work rules out the standard, “virtual black box” notion of obfuscators as impossible, along with several of its applications. However, it does not mean that there is no method of making programs “unintelligible” in some meaningful and precise sense. Such a method could still prove useful for software protection.

Also, the main approach is to exhibit some programs which the authors claim can't be obfuscated. That doesn't seem all that significant to me.

The link with homomorphic encryption seems rather weak - though the authors do mention homomorphic encryption as a possible application of obfuscation in their abstract.

Comment author: paulfchristiano 18 December 2010 06:37:35PM, 3 points

Ruling out a black-box obfuscator is extremely significant. It means that anything which provably obfuscates a program must exploit some special structure of the thing being obfuscated. In fact, they show it must exploit quite special structure, because even classes of programs that look very easy to obfuscate cannot be obfuscated.
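For intuition, the unobfuscatable families in the paper are built from point functions. The following is my simplified Python sketch of the idea, not the paper's exact construction: a tester D can extract the hidden relation from *any* code implementing C, however obfuscated, while black-box query access to C reveals essentially nothing.

```python
import secrets

K = 64  # toy security parameter (bits of secret)

# Simplified version of the Barak et al. point-function family:
# C reveals beta only on the secret input alpha; D tests whether a
# given program computes C by running it on alpha.
alpha = secrets.randbits(K)
beta = secrets.randbits(K)

def C(x):
    return beta if x == alpha else 0

def D(prog):
    return 1 if prog(alpha) == beta else 0

# Given the *code* of C (any functionally equivalent implementation,
# no matter how obfuscated), D confirms the hidden relation at once:
assert D(C) == 1

# With only black-box queries, C is indistinguishable in practice from
# the all-zero function: random probes essentially never hit alpha.
probes = sum(1 for x in range(10_000) if C(x) != 0)
assert probes == 0  # fails only with probability ~10^4 / 2^64
```

The asymmetry is the whole point: an adversary holding code can feed it to D, but a simulator with only oracle access cannot, so no obfuscator can make the code behave like a "virtual black box" here. (The paper's real construction is more careful about how D receives and evaluates program descriptions.)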

There are actually several strong connections between obfuscation and homomorphic encryption. For one, obfuscation implies homomorphic encryption. For another, their definitions are semantically very similar. Finally, the possibility of homomorphic encryption is precisely what implies the impossibility of obfuscation.