Tyrrell_McAllister comments on Cryptographic Boxes for Unfriendly AI - Less Wrong

Post author: paulfchristiano 18 December 2010 08:28AM




Comment author: luminosity 18 December 2010 09:02:50AM 5 points

I found the discussion of homomorphic encryption interesting, but

One solution to the problem of friendliness is to develop a self-improving, unfriendly AI, put it in a box, and ask it to make a friendly AI for us. This gets around the incredible difficulty of friendliness, but it creates a new, apparently equally impossible problem.

If you can reliably build an AI, but you cannot reliably build friendliness in it, why should I trust that you can build a program which in turn can reliably verify friendliness? It seems to me that if you are unable to build friendliness, it is due to not sufficiently understanding friendliness. If you don't sufficiently understand it, I do not want you building a program which is the check on the release of an AI.

Comment author: Tyrrell_McAllister 18 December 2010 06:25:07PM 6 points

It seems to me that if you are unable to build friendliness, it is due to not sufficiently understanding friendliness. If you don't sufficiently understand it, I do not want you building a program which is the check on the release of an AI.

It seems very plausible to me that certifying friendly source code (while still hugely difficult) is much easier than finding friendly source code. For example, maybe we will develop a complicated system S of equations such that, provably, any solution to S encodes the source of an FAI. While finding a solution to S might be intractably difficult for us, verifying a solution x that has been handed to us would be easy—just plug x into S and see if x solves the equations.

ETA: The difficult part, obviously, is developing the system S in the first place. But that would be strictly easier than additionally finding a solution to S on our own.
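The verify-versus-find asymmetry described above is the same one that underlies NP verification. A minimal sketch of the idea, with a tiny 3-SAT instance standing in for the hypothetical system S (the clauses, variable count, and function names here are illustrative assumptions, not anything from the thread):

```python
from itertools import product

# Toy stand-in for the system "S": a small 3-SAT instance.
# Each clause is a list of (variable_index, negated?) literals.
clauses = [
    [(0, False), (1, True), (2, False)],
    [(0, True), (1, False), (2, True)],
    [(1, False), (2, False), (0, False)],
]

def verify(assignment):
    """Checking a handed-to-us candidate is cheap: one pass over S."""
    return all(
        any(assignment[i] != negated for (i, negated) in clause)
        for clause in clauses
    )

def search(n_vars):
    """Finding a solution ourselves may require exhaustive search,
    which grows exponentially in the number of variables."""
    for bits in product([False, True], repeat=n_vars):
        if verify(bits):
            return bits
    return None

solution = search(3)
assert solution is not None and verify(solution)
```

The point of the sketch is only the asymmetry: `verify` runs in time linear in the size of S, while `search` enumerates up to 2^n candidates. If a system S certifying friendliness existed, checking a submitted solution would look like `verify`; constructing one from scratch would look like `search`.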

Comment author: Oscar_Cunningham 18 December 2010 09:29:59PM 1 point

It looks like you're just arguing about P=NP, no?

Comment author: Tyrrell_McAllister 18 December 2010 10:04:56PM 2 points

It looks like you're just arguing about P=NP, no?

Even if P!=NP, there still remains the question of whether friendliness is one of those things that we will be able to certify before we can construct an FAI from scratch, but after we can build a uFAI. Eliezer doesn't think so.