timtyler comments on Cryptographic Boxes for Unfriendly AI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't think you understand what a fully homomorphic encryption system is.
Barak et al. proved that you cannot encrypt source code and have someone else run it without decrypting it. Gentry's results do not contradict that proof. A fully homomorphic encryption system allows one to encrypt data and have that data operated on by a program without that program being able to derive any information about what the data actually is.
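To make the distinction concrete, here is a minimal sketch of "computing on encrypted data" using textbook Paillier, which is only *additively* homomorphic (Gentry's construction is what extends the idea to arbitrary circuits). This is an illustration with tiny, completely insecure parameters, not a usable implementation; all names are my own, not from any library:

```python
# Toy additively homomorphic encryption (textbook Paillier, toy primes).
# Key property: Enc(a) * Enc(b) mod n^2 decrypts to a + b, so a third
# party can add encrypted values without ever seeing the plaintexts.
# INSECURE: real Paillier needs large random primes.
import math
import random

p, q = 293, 433          # toy primes (far too small for security)
n = p * q
n2 = n * n
g = n + 1                # standard generator choice
lam = math.lcm(p - 1, q - 1)

def enc(m):
    """Encrypt m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Decrypt using the private value lam = lcm(p-1, q-1)."""
    def L(x):
        return (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse, Python 3.8+
    return (L(pow(c, lam, n2)) * mu) % n

# The party holding only ciphertexts multiplies them...
a, b = 12, 30
c_sum = (enc(a) * enc(b)) % n2
# ...and the key holder decrypts the sum, never revealing a or b.
assert dec(c_sum) == a + b
```

The point relevant to this thread: the evaluating party learns nothing about `a` or `b`, yet meaningful computation still happens on the ciphertexts.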
If it weren't for that flaw, I would be focusing on others: the assumption that we have a source code verifier (Oracle) that can tell us definitively that an AI is friendly implies, at minimum, that we have an incredibly precise and completely accurate mathematical model of friendliness. Start on that problem first.
Looking this far out, I'm not sure I trust these encryption schemes anyway. For instance, are you positive our uFAI is of the non-quantum variety?
What do you mean by this? You're not suggesting destroying all copies of the private key and then expecting to decrypt the answer yourself, are you? I can't figure out what else you could mean, even though that makes no sense.
The "On the (Im)possibility of Obfuscating Programs" link is interesting, IMO. I haven't finished it yet, and am still rather sceptical. Obfuscation is not really like a cryptographic primitive - but it is not necessarily trivial either. The paper itself says:
Also, the main approach is to exhibit some programs which the authors claim can't be obfuscated. That doesn't seem all that significant to me.
The link with homomorphic encryption seems rather weak - though the authors do mention homomorphic encryption as a possible application of obfuscation in their abstract.
Ruling out a black-box obfuscator is extremely significant. It means that anything which provably obfuscates a program must exploit some special structure of the thing being obfuscated. In fact, they show it must exploit some very special structure, because even very simple classes of programs can't be obfuscated.
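The flavor of the counterexample can be sketched in a few lines. Barak et al.'s construction pairs a point function C (which maps a secret α to a secret β and everything else to zeros) with a tester D that, given *any* working code for C, extracts its secret behavior; with only black-box query access to C, finding α takes exponentially many queries, so no obfuscation can hide what running the code reveals. This is a loose Python illustration of that idea, not the paper's exact formalism:

```python
# Sketch of the Barak et al. unobfuscatable-function idea:
# any runnable code for C leaks what black-box access cannot.
import secrets

# Secret parameters, chosen at random (illustrative names).
ALPHA = secrets.token_bytes(16)
BETA = secrets.token_bytes(16)

def C(x):
    """Point function C_{alpha,beta}: beta on input alpha, else zeros."""
    return BETA if x == ALPHA else bytes(16)

def D(prog):
    """Tester D_{alpha,beta}: given any implementation of C, detect it.

    Any obfuscated version of C can still be executed, so D simply feeds
    it ALPHA and checks for BETA. A simulator with only oracle access to
    C cannot find ALPHA except by exhaustive search over 2^128 inputs.
    """
    return 1 if prog(ALPHA) == BETA else 0

# Every functionally correct copy of C, obfuscated or not, is detected.
assert D(C) == 1
```

So "obfuscation" here fails in a strong sense: the adversary's advantage comes purely from possessing runnable code, which no syntactic scrambling can take away.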
There are actually several strong connections between obfuscation and homomorphic encryption. For one, obfuscation implies homomorphic encryption. For two, their definitions are semantically very similar. For three, the possibility of homomorphic encryption is precisely the thing that implies the impossibility of obfuscation.