I remember making this argument :D Haha, I was quickly downvoted.
Anyhow, "vaguely specified goals" actually turns out to be a property of you, not the AI.
If an agent has formally definable goals, then of course they are precisely specified. Vagueness can only be a property of matching actual goals to higher level descriptions.
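To make that concrete, here is a minimal sketch in Python (with a made-up reward function) of the distinction I mean: the goal the agent actually optimizes is exact, and the only place vagueness can live is in how well that exact specification matches the higher-level description we had in mind.

```python
# A formally specified goal: from the agent's side there is nothing vague here.
def reward(state: dict) -> float:
    # Made-up example goal: maximize the value of the "paperclips" field.
    return float(state["paperclips"])

# The higher-level description we *intended* the goal to capture:
intended_description = "produce things people actually want"

# The agent optimizes reward() exactly as written. Any mismatch between
# reward() and intended_description is a failure of our specification
# process, not an ambiguity inside the agent.
print(reward({"paperclips": 3}))  # -> 3.0
```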
I thought the paper was mostly wrong. In particular, the argument that:
it is not a good idea for an AGI system to be designed in the frameworks where a single goal is assumed, such as evolutionary learning, program search, or reinforcement learning,
...was weak.
There is no guarantee that the derived goals will be logically consistent with the input goals, except in highly simplified situations.
Are they saying that (practically feasible) goal derivation algorithms necessarily produce logical inconsistencies? Or that this is actually a desirable property? Or what?
The text says an AI "should" maintain a goal structure that produces logically inconsistent subgoals. I don't think I understand what they mean.
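To be concrete about one possible reading (mine, not the paper's formalism): a derivation step can introduce a subgoal whose requirements contradict part of the input goal, as in this toy sketch.

```python
# Toy illustration (my reading, not the paper's formalism) of a derived
# subgoal that is logically inconsistent with the input goal.
input_goal = {"room_tidy": True, "room_quiet": True}

def derive_subgoal(goal: dict) -> dict:
    # Hypothetical planner step: to make the room tidy, run the vacuum,
    # which entails the room not being quiet.
    if goal.get("room_tidy"):
        return {"vacuum_running": True, "room_quiet": False}
    return {}

subgoal = derive_subgoal(input_goal)
# The subgoal requires room_quiet == False while the input goal requires
# room_quiet == True, so the derived goal contradicts the input goal.
assert subgoal["room_quiet"] != input_goal["room_quiet"]
```

If that is the intended meaning, the question stands: is such inconsistency presented as unavoidable in practice, or as something the system should actually want?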
Related post: Muehlhauser-Wang Dialogue.
Motivation Management in AGI Systems, a paper to be published at AGI-12.
From the discussion section: