I gather your point is that you get an FAI to check out Clippy, give a go/no-go decision, and then destroy itself. There's not much point in doing that: you could just run the FAI and ignore Clippy, and someone still has to check that the FAI is in fact Friendly.
No, what is required to verify Friendliness is less than a full FAI. As I said earlier, the part that is probably hardest is already done, so the circumstance in which it is worth using Clippy rather than finishing off a goal-stable, self-improving AGI with Friendliness is unlikely. Nevertheless, that circumstance exists, particularly if implementing the AGI turns out to be harder than I expect.
No, what is required to verify Friendliness is less than a full FAI.
Do you have a pointer to a proposed procedure for that?
I'd expect implementing Friendliness to be easier than verifying it: by Rice's theorem, every non-trivial behavioral property of an arbitrary Turing machine is undecidable, and Friendliness is such a property. If you put heavy constraints on how Clippy's code is structured, you might be able to verify Friendliness, but you didn't mention that and Clippy didn't offer to do it.
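To make that last point concrete, here is a toy sketch in Python. Everything in it (FORBIDDEN_ACTIONS, verify_table_policy, the action names) is hypothetical, not a proposal from the thread; the point is only that verification becomes decidable once the agent's policy is forced into a finite, enumerable form, whereas the same question about arbitrary code runs into Rice's theorem.

# Toy sketch: verification under a heavy structural constraint.
# If the agent's policy must be a finite state -> action lookup table,
# "verify" reduces to exhaustively inspecting every entry -- trivially
# decidable. Nothing like this works for arbitrary programs.

FORBIDDEN_ACTIONS = {"convert_humans_to_paperclips"}

def verify_table_policy(policy_table):
    """Exhaustively check a policy given as a finite state -> action table.

    Decidable precisely because there are finitely many entries to inspect.
    """
    return all(action not in FORBIDDEN_ACTIONS
               for action in policy_table.values())

# Passes: every listed action is allowed.
print(verify_table_policy({"see_metal": "make_paperclip",
                           "see_human": "idle"}))
# Fails: the table contains a forbidden action.
print(verify_table_policy({"see_human": "convert_humans_to_paperclips"}))

The example is deliberately trivial; it just illustrates why "heavy constraints on how Clippy's code is structured" changes the verification problem from undecidable to an exhaustive check.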
Evolution. Morality. Strategy. Security/Cryptography. This hits so many topics of interest, I can't imagine it not being discussed here. Bruce Schneier blogs about his book-in-progress, The Dishonest Minority:
The above somewhat reminds me of Robin Hanson's Homo Hypocritus writings, although the thesis is not the same. Schneier says the book is basically a first draft at this point and might still change quite a bit. Some of the comments focus on whether "dishonest" is actually the best term for defecting from social norms.