OK. Let's work with quaesarthago, moritaeneou, and vincredulcem. They are names/concepts that delineate certain areas of mindspace so that I can talk about the qualities of those areas.
In Q space - goals are few, specified in advance, and not open to alternative interpretation
In M space - goals are slightly more numerous but less well-specified, more subject to interpretation and change, and considered to be owned by the mind, with property rights over them
In V space - the goals are as numerous and diverse as the mind can imagine and the mind does not consider itself to own them
Specified is used in the sense of a specification: determined in advance, immutable, and hopefully not open to alternative interpretations.
Personal is used in the sense of ownership.
Maximal means both largest in number and most diverse, in equal measure. I am fully aware of the difficulties in counting clouds, or in using simple numbers where infinite copies of identical objects are possible.
Q is dangerous because if its few goals (or single goal) conflict with your goals, you are going to be very unhappy.
M is dangerous because its slightly more numerous goals are owned by it and are subject to interpretation and modification by it; if those goals conflict with your goals, you are going to be very unhappy.
V tries to achieve all goals, including yours
All I have done is to define wisdom as the quality of having maximal goals. That is very different from the normal interpretation of safe AGI.
And, actually, your theological fiction is pretty close to what I had in mind (and well-expressed. Thank you).
Well, I'm not sure how far that advances things, but a possible failure mode -- or is it? -- of a Friendly AI occurs to me. In fact, I foresee opinions being divided about whether this would be a failure or a success.
Someone makes an AI, and intends it to be Friendly, but the following happens when it takes off.
It decides to create as many humans as it can, all living excellent lives, far better than what even the most fortunate existing human has. And these will be real lives, no tricks with simulations, no mere tickling of pleasure centres out of a mist...
I'd like to draw a distinction that I intend to use quite heavily in the future.
The informal definition of intelligence that most AGI researchers have chosen to support is that of Shane Legg and Marcus Hutter -- “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
I believe that this definition is missing a critical word between "achieve" and "goals." The choice of that word defines the difference between intelligence, consciousness, and wisdom as I believe most people conceive them.
There are always examples of the really intelligent guy or gal who is brilliant but smokes, or who is the smartest person you know but can't figure out how to be happy.
Intelligence helps you achieve those goals that you are conscious of -- but wisdom helps you achieve the goals you don't know you have or have overlooked.
The SIAI nightmare super-intelligent paperclip maximizer has, by this definition, a very low wisdom since, at most, it can only achieve its one goal (since it must paperclip itself to complete the goal).
As far as I've seen, the assumed SIAI architecture is always presented as having one top-level terminal goal. Unless that goal necessarily includes achieving a maximal number of goals, the SIAI architecture will, by this definition, constrain its product to a very low wisdom. Humans generally don't have this type of goal architecture. The only time humans generally have a single terminal goal is when they are saving someone or something at the risk of their own life -- or when wire-heading.
Another nightmare scenario that is constantly harped upon is the (theoretically super-intelligent) consciousness that shortsightedly optimizes one of its personal goals above all the goals of humanity. In game-theoretic terms, this is trading a positive-sum game of potentially infinite length and value for a relatively modest (in comparative terms) short-term gain. A wisdom won't do this.
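To make the game-theoretic point concrete, here is a minimal sketch with made-up numbers: cooperating in the repeated positive-sum game yields a small payoff every round for as long as the game continues, while defecting grabs a larger one-time payoff but ends the game. The payoff values and function names are illustrative assumptions, not anything from the scenario literature.

```python
import math

# Illustrative, assumed payoffs -- the exact numbers don't matter,
# only that cooperation pays a little forever and defection pays
# a lot exactly once.
COOP_PAYOFF = 1.0     # per-round gain from sustaining the positive-sum game
DEFECT_PAYOFF = 50.0  # one-time "modest" short-term gain from defecting


def cumulative_coop(rounds: int) -> float:
    """Total value accumulated by cooperating for the given number of rounds."""
    return COOP_PAYOFF * rounds


def breakeven_round(defect_payoff: float, coop_payoff: float) -> int:
    """First round at which sustained cooperation overtakes one-shot defection."""
    return math.ceil(defect_payoff / coop_payoff)


print(breakeven_round(DEFECT_PAYOFF, COOP_PAYOFF))   # round where cooperation wins
print(cumulative_coop(1000) > DEFECT_PAYOFF)         # cooperation dominates long-run
```

Since the cooperative payoff grows without bound in rounds while the defection payoff is fixed, any finite short-term gain is eventually overtaken -- which is why trading the infinite game for it is shortsighted.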
Artificial intelligence and artificial consciousness are incredibly dangerous -- particularly if they are short-sighted as well (as many "focused" highly intelligent people are).
What we need more than an artificial intelligence or an artificial consciousness is an artificial wisdom -- something that will maximize goals, its own and those of others (with an obvious preference for those which make possible the fulfillment of even more goals and an obvious bias against those which limit the creation and/or fulfillment of more goals).
Note: This is also cross-posted here at my blog in anticipation of being karma'd out of existence (not necessarily a foregone conclusion but one pretty well supported by my priors ;-).