JackV comments on Cryptographic Boxes for Unfriendly AI - Less Wrong

24 Post author: paulfchristiano 18 December 2010 08:28AM




Comment author: JackV 19 December 2010 09:57:51PM 0 points [-]

That sounds reasonable. I agree a complete discussion is probably too complicated, but a few simple examples of the sort I eventually gave would probably help most people understand. It certainly helped me, and I think many other people were puzzled, whereas with the simple examples I have now, I think (although I can't be sure) I have a simplistic but essentially accurate idea of the possibilities.

I'm sorry if I sounded overly negative before: I definitely had problems with the post, but didn't mean to be negative about it.

If I were breaking down the post into several, I would probably do:

(i) the fact of homomorphic encryption's (apparent) existence, how it can be used to run algorithms on unknown data, a few theoretical applications of that, and a mention that this is unlikely to be practical at the moment. It can in principle be used to execute an unknown algorithm on unknown data, though that is really, really impractical, but it might become more practical with some sort of parallel processing design. And at this point, I think most people would accept when you say it can be used to run an unfriendly AI.
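(To make the "run algorithms on unknown data" idea concrete: here is a toy sketch of the Paillier cryptosystem, which is additively homomorphic -- multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a third party can compute on data it cannot read. The tiny hardcoded primes are purely illustrative and completely insecure; a real deployment would use primes of 1024+ bits, and fully homomorphic schemes of the kind the post discusses are far more involved.)

```python
# Toy Paillier cryptosystem (additively homomorphic).
# Illustration only: tiny hardcoded primes, NOT secure.
import math

p, q = 293, 433          # toy primes; a real key uses ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                # standard, simple choice of generator
lam = math.lcm(p - 1, q - 1)

def L(u):
    """The L function from Paillier's scheme: L(u) = (u - 1) / n."""
    return (u - 1) // n

# Precomputed decryption constant mu = (L(g^lam mod n^2))^-1 mod n
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m, r):
    """Encrypt plaintext m with randomizer r (must be coprime to n)."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext from ciphertext c."""
    return (L(pow(c, lam, n2)) * mu) % n

c1 = encrypt(7, 17)
c2 = encrypt(35, 19)
# The homomorphic property: multiplying ciphertexts adds plaintexts,
# without the party doing the multiplication ever seeing 7 or 35.
assert decrypt((c1 * c2) % n2) == 7 + 35
```

The point of the toy is only the last three lines: someone holding `c1` and `c2` can produce an encryption of their sum while learning nothing about the values inside.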

(ii) If you like, more mathematical details, although this probably isn't necessary

(iii) A discussion of friendliness-testing, which wasn't in the original premise, but is something people evidently want to think about

(iv) any other discussion of running an unfriendly AI safely