So I submit the only useful questions we can ask are not about AGI, "goals", and other such anthropomorphic, infeasible, irrelevant, and/or hopelessly vague ideas. We can only usefully ask computer security questions. For example some researchers I know believe we can achieve virus-safe computing. If we can achieve security against malware as strong as we can achieve for symmetric key cryptography, then it doesn't matter how smart the software is or what goals it has: if one-way functions exist no computational entity, classical or quantum, can crack symmetric key crypto based on said functions. And if NP-hard public key crypto exists, similarly for public key crypto. These and other security issues, and in particular the security of property rights, are the only real issues here and the rest is BS.
-- Nick Szabo
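(For reference, here is a hedged sketch of the textbook claim Nick is leaning on when he says symmetric key crypto can be based on one-way functions. Nothing below is specific to his argument, and the reductions stated cover classical adversaries only.)

```latex
% A one-way function f is easy to compute but hard to invert on average:
% for every probabilistic polynomial-time adversary A,
\[
  \Pr_{x \leftarrow \{0,1\}^n}\Big[\, f\big(A(1^n, f(x))\big) = f(x) \,\Big]
  \;\le\; \mathrm{negl}(n).
\]
% Standard chain of reductions (classical adversaries):
%   OWF  => PRG  (Hastad-Impagliazzo-Levin-Luby)
%        => PRF  (Goldreich-Goldwasser-Micali)
%        => IND-CPA-secure symmetric encryption,
% so an attack on such a cipher yields an inverter for the underlying f.
```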
Nick Szabo and I have very similar backgrounds and interests. We both majored in computer science at the University of Washington. We're both very interested in economics and security. We came up with similar ideas about digital money. So why don't I advocate working on security problems while ignoring AGI, goals, and Friendliness?
In fact, I once did think that working on security was the best way to push the future towards a positive Singularity and away from a negative one. I started working on my Crypto++ Library shortly after reading Vernor Vinge's A Fire Upon the Deep. I believe it was the first general purpose open source cryptography library, and it's still one of the most popular. (Studying cryptography led me to become involved in the Cypherpunks community with its emphasis on privacy and freedom from government intrusion, but a major reason for me to become interested in cryptography in the first place was a desire to help increase security against future entities similar to the Blight described in Vinge's novel.)
I've since changed my mind, for two reasons.
1. The economics of security seems very unfavorable to the defense, in every field except cryptography.
Studying cryptography gave me hope that improving security could make a difference. But in every other security field, both physical and virtual, little progress is apparent, and certainly not enough that humans could hope to defend their property rights against smarter intelligences. Achieving "security against malware as strong as we can achieve for symmetric key cryptography" seems particularly hopeless. Nick links above to a 2004 technical report titled "Polaris: Virus Safe Computing for Windows XP", which is strange considering that it's now 2012 and malware still has little trouble with the latest operating systems and their defenses. Also striking to me is that even dedicated security software like OpenSSH and OpenSSL has had design and coding flaws that introduced security holes into the systems running it.
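To make that last point concrete, here is a hypothetical sketch (in C, not taken from OpenSSH, OpenSSL, or any real codebase) of the kind of coding flaw I have in mind: a parser that trusts an attacker-supplied length field and so echoes back adjacent memory.

```c
/* Hypothetical sketch only -- not from any real program.
 * The classic flaw: trust an attacker-supplied length field, never compare it
 * to the payload actually received, and echo back adjacent memory. */
#include <stdio.h>
#include <string.h>

/* Echoes `claimed_len` bytes of the payload back to the peer.
 * The missing check is claimed_len <= actual_len. */
static void echo_payload(const unsigned char *payload, size_t claimed_len,
                         unsigned char *out)
{
    memcpy(out, payload, claimed_len);  /* copies past the real payload */
}

int main(void)
{
    /* One buffer standing in for process memory: a 4-byte payload
     * followed by unrelated secret data. */
    unsigned char memory[32] = "ping";
    memcpy(memory + 4, "SECRET-KEY-MATERIAL", 19);

    unsigned char reply[32] = {0};
    size_t actual_len  = 4;   /* what the peer really sent */
    size_t claimed_len = 23;  /* what the attacker claims to have sent */

    (void)actual_len;         /* the bounds check that should be here is absent */
    echo_payload(memory, claimed_len, reply);

    /* The reply now contains the secret that sat next to the payload. */
    printf("leaked reply: %.23s\n", reply);
    return 0;
}
```

The point is not this particular bug, but that a one-line omission in otherwise careful code is enough to defeat whatever defenses surround it.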
One way to think about Friendly AI is that it's an offensive approach to the problem of security (i.e., take over the world), instead of a defensive one.
2. Solving the problem of security at a sufficient level of generality requires understanding goals, and is essentially equivalent to solving Friendliness.
What does it mean to have "secure property rights", anyway? If I build an impregnable fortress around me, but an Unfriendly AI causes me to give up my goals in favor of its own by crafting a philosophical argument that is extremely convincing to me but wrong (or more generally, subverts my motivational system in some way), have I retained my "property rights"? What if it does the same to one of my robot servants, so that it subtly starts serving the UFAI's interests while thinking it's still serving mine? How does one define whether a human or an AI has been "subverted" or is "secure", without reference to its "goals"? It became apparent to me that fully solving security is not very different from solving Friendliness.
I would be very interested to know what Nick (and others taking a similar position) thinks after reading the above, or if they've already had similar thoughts but still came to their current conclusions.
Not really an answer to your question, but it seems to me a lot depends on what position I take wrt value drift and the subject-dependence of values.
At one extreme: if I believe that whatever I happen to value right now is what I value, and what I value tomorrow is what I value tomorrow, and it simply doesn't matter how those things relate to each other, and I just want to optimize my environment for what I value at any given moment, then it makes sense to concentrate on security without reference to goals. More precisely, it makes sense to concentrate on mechanisms for optimizing my environment for any given value, and security is a very important part of that.
At another extreme: if I believe that there is One True Value Set that ought to be optimized for (even if I don't happen to know what that is, or even if I don't particularly value it [1] ), thinking about goals is valuable only insofar as it leads to systems better able to implement the OTVS.
Only if I believe that my values are the important ones, believe my values can change, and endorse my current values over my values at other times, does working out a way to preserve my current values against value-shifts (either intentionally imposed shifts, as in your examples, or natural drift) start to seem important.
I know lots of people who don't seem to believe that their current values are more important than their later values, at least not in any way that consistently constrains their planning. That is, they seem to prefer to avoid committing to their current values, and to instead keep their options open.
And I can see how that sort of thinking leads to the idea that "secure property rights" (and, relatedly, reliably enforced consensual contracts) are the most important thing.
[1] EDIT: in retrospect, this is a somewhat confused condition; what I really mean is more like "even if I'm not particularly aware of myself valuing it", or "even if my valuation of it is not reflectively consistent" or something of that sort.
Wouldn't that be a bad idea? If you change your mind as to what you value, then Future!you will optimize for something Present!you doesn't want. Since you're only worried about Present!you's goals, that would be bad.