'We can design intelligent machines so their primary, innate emotion is unconditional love for all humans. First we can build relatively simple machines that learn to recognize happiness and unhappiness in human facial expressions, human voices and human body language. Then we can hard-wire the result of this learning as the innate emotional values of more complex intelligent machines, positively reinforced when we are happy and negatively reinforced when we are unhappy.'
-- Bill Hibbard (2001), Super-intelligent machines.
That was published in a peer-reviewed journal, and the author later wrote a whole book about it, so this is not a strawman position I'm discussing here.
So... um... what could possibly go wrong...
When I mentioned (sec. 6) that Hibbard's AI ends up tiling the galaxy with tiny molecular smiley-faces, Hibbard wrote an indignant reply saying:
'When it is feasible to build a super-intelligence, it will be feasible to build hard-wired recognition of "human facial expressions, human voices and human body language" (to use the words of mine that you quote) that exceed the recognition accuracy of current humans such as you and me, and will certainly not be fooled by "tiny molecular pictures of smiley-faces." You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans.'
As Hibbard also wrote "Such obvious contradictory assumptions show Yudkowsky's preference for drama over reason," I'll go ahead and mention that Hibbard illustrates a key point: There is no professional certification test you have to take before you are allowed to talk about AI morality. But that is not my primary topic today. Though it is a crucial point about the state of the gameboard, that most AGI/FAI wannabes are so utterly unsuited to the task, that I know no one cynical enough to imagine the horror without seeing it firsthand. Even Michael Vassar was probably surprised his first time through.
No, today I am here to dissect "You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans."
Once upon a time - I've seen this story in several versions and several places, sometimes cited as fact, but I've never tracked down an original source - once upon a time, I say, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks.
The researchers trained a neural net on 50 photos of camouflaged tanks amid trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set - output "yes" for the 50 photos of camouflaged tanks, and output "no" for the 50 photos of forest.
Now this did not prove, or even imply, that new examples would be classified correctly. The neural network might have "learned" 100 special cases that wouldn't generalize to new problems. Not, "camouflaged tanks versus forest", but just, "photo-1 positive, photo-2 negative, photo-3 negative, photo-4 positive..."
But wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees, and had used only half in the training set. The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly. Success confirmed!
The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos.
It turned out that in the researchers' data set, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest.
This parable - which might or might not be fact - illustrates one of the most fundamental problems in the field of supervised learning and in fact the whole field of Artificial Intelligence: If the training problems and the real problems have the slightest difference in context - if they are not drawn from the same independent and identically distributed (i.i.d.) process - there is no statistical guarantee from past success to future success. It doesn't matter if the AI seems to be working great under the training conditions. (This is not an unsolvable problem but it is an unpatchable problem. There are deep ways to address it - a topic beyond the scope of this post - but no bandaids.)
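To make the parable concrete, here is a minimal sketch in Python (synthetic 16x16 "photos" and a from-scratch logistic regression standing in for the story's neural network; every number and name is invented for illustration): brightness is perfectly confounded with the label in the training process, so the classifier aces a held-out test drawn from that same process and then collapses the moment the confound is removed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_photos(n, tank, sunny):
    """Toy 16x16 grayscale 'photos': sunny days are brighter overall;
    tanks add a faint local blob that is easy for a learner to ignore."""
    imgs = rng.normal(loc=0.7 if sunny else 0.3, scale=0.1, size=(n, 16, 16))
    if tank:
        imgs[:, 6:10, 6:10] += 0.05
    return imgs.reshape(n, -1)

# Researchers' data: every tank photo is cloudy, every empty-forest photo is sunny.
X_train = np.vstack([make_photos(50, tank=True,  sunny=False),
                     make_photos(50, tank=False, sunny=True)])
X_test  = np.vstack([make_photos(50, tank=True,  sunny=False),
                     make_photos(50, tank=False, sunny=True)])
y = np.array([1] * 50 + [0] * 50)              # 1 = tank, 0 = forest

mu = X_train.mean(axis=0)                      # center using training statistics only

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(X_train.shape[1]), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-((X_train - mu) @ w + b)))
    w -= 0.01 * (X_train - mu).T @ (p - y) / len(y)
    b -= 0.01 * (p - y).mean()

def accuracy(X, labels):
    return ((1 / (1 + np.exp(-((X - mu) @ w + b))) > 0.5) == labels).mean()

print(accuracy(X_test, y))                     # ~1.0: "success confirmed!"

# Pentagon's data: the confound is gone - tanks on sunny days, forest on cloudy days.
X_deploy = np.vstack([make_photos(50, tank=True,  sunny=True),
                      make_photos(50, tank=False, sunny=False)])
print(accuracy(X_deploy, y))                   # ~0.0: it learned sunny vs. cloudy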
As described in Superexponential Conceptspace, there are exponentially more possible concepts than possible objects, just as the number of possible objects is exponential in the number of attributes. If a black-and-white image is 256 pixels on a side, then the total image is 65536 pixels. The number of possible images is 2^65536. And the number of possible concepts that classify images into positive and negative instances - the number of possible boundaries you could draw in the space of images - is 2^(2^65536). From this, we see that even supervised learning is almost entirely a matter of inductive bias, without which it would take a minimum of 2^65536 classified examples to discriminate among 2^(2^65536) possible concepts - even if classifications are constant over time.
If this seems at all counterintuitive or non-obvious, see Superexponential Conceptspace.
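For small numbers the counting can be checked directly; here is a tiny sketch with a 2x2 binary image standing in for 256x256 (the real quantities are far too large to enumerate):

```python
from itertools import product

n_pixels = 4                      # a 2x2 binary image; 256x256 would give 65536 pixels
images = list(product([0, 1], repeat=n_pixels))
print(len(images))                # 2**4 = 16 possible images

# A concept labels every possible image + or -, so concepts correspond to
# subsets of the image space: 2**(2**4) = 65536 of them, even in this tiny case.
print(2 ** len(images))           # 65536

# For the 256x256 case in the text: 2**65536 images and 2**(2**65536) concepts -
# the second number has far more digits than there are atoms in the observable universe.
```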
So let us now turn again to:
'First we can build relatively simple machines that learn to recognize happiness and unhappiness in human facial expressions, human voices and human body language. Then we can hard-wire the result of this learning as the innate emotional values of more complex intelligent machines, positively reinforced when we are happy and negatively reinforced when we are unhappy.'
and
'When it is feasible to build a super-intelligence, it will be feasible to build hard-wired recognition of "human facial expressions, human voices and human body language" (to use the words of mine that you quote) that exceed the recognition accuracy of current humans such as you and me, and will certainly not be fooled by "tiny molecular pictures of smiley-faces." You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans.'
It's trivial to discriminate between a photo of a camouflaged tank and a photo of an empty forest, in the sense of determining that the two photos are not identical. They're different pixel arrays with different 1s and 0s in them. Discriminating between them is as simple as testing the arrays for equality.
Classifying new photos into positive and negative instances of "smile", by reasoning from a set of training photos classified positive or negative, is a different order of problem.
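To see the two senses of "discriminate" side by side (a toy sketch; the random arrays below merely stand in for photos): testing two particular pixel arrays for equality is one line, while classifying a brand-new array requires choosing some rule out of the space of all possible labelings, and nothing in the arrays themselves picks that rule.

```python
import numpy as np

tank_photo   = np.random.randint(0, 2, size=(256, 256))
forest_photo = np.random.randint(0, 2, size=(256, 256))

# Sense 1: telling these two photos apart as bit patterns - trivial.
print(np.array_equal(tank_photo, forest_photo))     # almost certainly False

# Sense 2: classifying a *new* photo as "tank" or "forest" - this requires some
# rule chosen from the space of all possible labelings of all possible images,
# and the two arrays above do not say which rule.  Any such function is a "concept":
def some_concept(photo: np.ndarray) -> bool:
    return bool(photo.mean() > 0.5)                 # one arbitrary choice among 2**(2**65536)
```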
When you've got a 256x256 image from a real-world camera, and the image turns out to depict a camouflaged tank, there is no additional 65537th bit denoting the positiveness - no tiny little XML tag that says "This image is inherently positive". It's only a positive example relative to some particular concept.
But for any non-Vast amount of training data - any training data that does not include the exact bitwise image now seen - there are superexponentially many possible concepts compatible with previous classifications.
For the AI, choosing or weighting from among superexponential possibilities is a matter of inductive bias. Which may not match what the user has in mind. The gap between these two example-classifying processes - induction on the one hand, and the user's actual goals on the other - is not trivial to cross.
Let's say the AI's training data is:
Dataset 1:
- +: Smile_1, Smile_2, Smile_3
- -: Frown_1, Cat_1, Frown_2, Frown_3, Cat_2, Boat_1, Car_1, Frown_5
Now the AI grows up into a superintelligence, and encounters this data:
Dataset 2:
- Frown_6, Cat_3, Smile_4, Galaxy_1, Frown_7, Nanofactory_1, Molecular_Smileyface_1, Cat_4, Molecular_Smileyface_2, Galaxy_2, Nanofactory_2
It is not a property of these datasets that the inferred classification you would prefer is:
- +: Smile_1, Smile_2, Smile_3, Smile_4
- -: Frown_1, Cat_1, Frown_2, Frown_3, Cat_2, Boat_1, Car_1, Frown_5, Frown_6, Cat_3, Galaxy_1, Frown_7, Nanofactory_1, Molecular_Smileyface_1, Cat_4, Molecular_Smileyface_2, Galaxy_2, Nanofactory_2
rather than
- +: Smile_1, Smile_2, Smile_3, Molecular_Smileyface_1, Molecular_Smileyface_2, Smile_4
- -: Frown_1, Cat_1, Frown_2, Frown_3, Cat_2, Boat_1, Car_1, Frown_5, Frown_6, Cat_3, Galaxy_1, Frown_7, Nanofactory_1, Cat_4, Galaxy_2, Nanofactory_2
Both of these classifications are compatible with the training data. The number of concepts compatible with the training data will be much larger, since more than one concept can project the same shadow onto the combined dataset. If the space of possible concepts includes the space of possible computations that classify instances, the space is infinite.
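To make "compatible with the training data" concrete, here is a sketch that treats the instance names above as opaque strings: the Dataset 1 labels constrain only the eleven instances already seen, so every one of the 2^11 ways of labeling the eleven new instances - including the one that counts the molecular smileyfaces as positive - extends the training data consistently.

```python
from itertools import product

# Dataset 1 pins down labels for the instances the AI has already seen...
train_labels = {"Smile_1": "+", "Smile_2": "+", "Smile_3": "+",
                "Frown_1": "-", "Cat_1": "-", "Frown_2": "-", "Frown_3": "-",
                "Cat_2": "-", "Boat_1": "-", "Car_1": "-", "Frown_5": "-"}
dataset_2 = ["Frown_6", "Cat_3", "Smile_4", "Galaxy_1", "Frown_7",
             "Nanofactory_1", "Molecular_Smileyface_1", "Cat_4",
             "Molecular_Smileyface_2", "Galaxy_2", "Nanofactory_2"]

# ...but says nothing about the new instances, so every extension is compatible
# with the training data (and many distinct concepts share each extension).
compatible = [dict(train_labels, **dict(zip(dataset_2, labels)))
              for labels in product("+-", repeat=len(dataset_2))]
print(len(compatible))          # 2**11 = 2048 compatible classifications of Dataset 2

wanted   = dict(train_labels, **{x: ("+" if x == "Smile_4" else "-") for x in dataset_2})
unwanted = dict(wanted, Molecular_Smileyface_1="+", Molecular_Smileyface_2="+")
print(wanted in compatible, unwanted in compatible)    # True True
```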
Which classification will the AI choose? This is not an inherent property of the training data; it is a property of how the AI performs induction.
Which is the correct classification? This is not a property of the training data; it is a property of your preferences (or, if you prefer, a property of the idealized abstract dynamic you name "right").
The concept that you wanted cast its shadow onto the training data as you yourself labeled each instance + or -, drawing on your own intelligence and preferences to do so. That's what supervised learning is all about - providing the AI with labeled training examples that project a shadow of the causal process that generated the labels.
But unless the training data is drawn from exactly the same context as the real-life application, the training data will be "shallow" in some sense, a projection from a much higher-dimensional space of possibilities.
The AI never saw a tiny molecular smileyface during its dumber-than-human training phase, or it never saw a tiny little agent with a happiness counter set to a googolplex. Now you, finally presented with a tiny molecular smiley - or perhaps a very realistic tiny sculpture of a human face - know at once that this is not what you want to count as a smile. But that judgment reflects an unnatural category, one whose classification boundary depends sensitively on your complicated values. It is your own plans and desires that are at work when you say "No!"
Hibbard knows instinctively that a tiny molecular smileyface isn't a "smile", because he knows that's not what he wants his putative AI to do. If someone else were presented with a different task, like classifying artworks, they might feel that the Mona Lisa was obviously smiling - as opposed to frowning, say - even though it's only paint.
As the case of Terri Schiavo illustrates, technology enables new borderline cases that throw us into new, essentially moral dilemmas. Showing an AI pictures of living and dead humans as they existed during the age of Ancient Greece will not enable the AI to make a moral decision as to whether switching off Terri's life support is murder. That information isn't present in the dataset even inductively! Terri Schiavo raises new moral questions, appealing to new moral considerations, that you wouldn't need to think about while classifying photos of living and dead humans from the time of Ancient Greece. No one was on life support then, still breathing, with a brain half fluid. So such considerations play no role in the causal process that you use to classify the ancient-Greece training data, and hence cast no shadow on the training data, and hence are not accessible by induction on the training data.
As a matter of formal fallacy, I see two anthropomorphic errors on display.
The first fallacy is underestimating the complexity of a concept we develop for the sake of its value. The borders of the concept will depend on many values and probably on-the-fly moral reasoning, if the borderline case is of a kind we haven't seen before. But all that takes place invisibly, in the background; to Hibbard it just seems that a tiny molecular smileyface is just obviously not a smile. And we don't generate all possible borderline cases, so we don't think of all the considerations that might play a role in redefining the concept, but haven't yet played a role in defining it. Since people underestimate the complexity of their concepts, they underestimate the difficulty of inducing the concept from training data. (And also the difficulty of describing the concept directly - see The Hidden Complexity of Wishes.)
The second fallacy is anthropomorphic optimism: Since Bill Hibbard uses his own intelligence to generate options and plans ranking high in his preference ordering, he is incredulous at the idea that a superintelligence could classify never-before-seen tiny molecular smileyfaces as a positive instance of "smile". As Hibbard uses the "smile" concept (to describe desired behavior of superintelligences), extending "smile" to cover tiny molecular smileyfaces would rank very low in his preference ordering; it would be a stupid thing to do - inherently so, as a property of the concept itself - so surely a superintelligence would not do it; this is just obviously the wrong classification. Certainly a superintelligence can see which heaps of pebbles are correct or incorrect.
Why, Friendly AI isn't hard at all! All you need is an AI that does what's good! Oh, sure, not every possible mind does what's good - but in this case, we just program the superintelligence to do what's good. All you need is a neural network that sees a few instances of good things and not-good things, and you've got a classifier. Hook that up to an expected utility maximizer and you're done!
I shall call this the fallacy of magical categories - simple little words that turn out to carry all the desired functionality of the AI. Why not program a chess-player by running a neural network (that is, a magical category-absorber) over a set of winning and losing sequences of chess moves, so that it can generate "winning" sequences? Back in the 1950s it was believed that AI might be that simple, but this turned out not to be the case.
The novice thinks that Friendly AI is a problem of coercing an AI to make it do what you want, rather than the AI following its own desires. But the real problem of Friendly AI is one of communication - transmitting category boundaries, like "good", that can't be fully delineated in any training data you can give the AI during its childhood. Relative to the full space of possibilities the Future encompasses, we ourselves haven't imagined most of the borderline cases, and would have to engage in full-fledged moral arguments to figure them out. To solve the FAI problem you have to step outside the paradigm of induction on human-labeled training data and the paradigm of human-generated intensional definitions.
Of course, even if Hibbard did succeed in conveying to an AI a concept that covers exactly every human facial expression that Hibbard would label a "smile", and excludes every facial expression that Hibbard wouldn't label a "smile"...
Then the resulting AI would appear to work correctly during its childhood, when it was weak enough that it could only generate smiles by pleasing its programmers.
When the AI progressed to the point of superintelligence and its own nanotechnological infrastructure, it would rip off your face, wire it into a permanent smile, and start xeroxing.
The deep answers to such problems are beyond the scope of this post, but it is a general principle of Friendly AI that there are no bandaids. In 2004, Hibbard modified his proposal to assert that expressions of human agreement should reinforce the definition of happiness, and then happiness should reinforce other behaviors. Which, even if it worked, just leads to the AI xeroxing a horde of things similar-in-its-conceptspace to programmers saying "Yes, that's happiness!" about hydrogen atoms - hydrogen atoms are easy to make.
Link to my discussion with Hibbard here. You already got the important parts.
Or more generally, this is not just a binary classification problem but a measurement issue: how to measure benefit to humans, or human satisfaction.
It has sometimes struck me that this FAI requirement has a lot in common with something we were talking about on the futarchy list a while ago. Specifically, how to measure a populace's satisfaction in a robust way. (Meta: exploring the details here would be going off on a tangent. Unfortunately I can't easily link to the futarchy list because Typepad has decided Yahoo links are "potential comment spam")
Of course with futarchy we want to do so for a different purpose, informing a decision market. At first glance the purposes might seem to have little in common. Futarchy contemplates just human participants. The human participants might well be aided by machines, but that is their business alone. FAI contemplates transcendent AI, where humanity cannot hope to truly control it anymore but can only hope that we have raised it properly (so to speak).
But beneath the surface they have important properties in common. They each contemplate an immensely intelligent mechanism that must do the right thing across an unimaginably broad panorama of issues. They both need to inform this mechanism's utility function, so they need to measure benefit to humans accurately and robustly. They both could be dangerous if the metric has loopholes. So they both need a metric that is not a fallible proxy for benefit to humans but a true measure of it. They both need this metric to be secure against intelligent attack - even the best metric does little good if an attacker can change it into something else. They both have to be started with the right metric or something that leads quite surely to it, because correcting them later will be impossible. (Robin speculated that futarchy could generate its own future utility function but I believe such an approach can only cause degeneration)
I conclude that there must be at least a strong resemblance between a desirable utility metric for futarchy and a desirable utility metric for FAI.
Beyond this, I speculate that futarchy has advantages as a sort of platform for FAI. I'll call the combination "futurAIrchy".
First, it might teach a young FAI better than any human teacher could. Like, the young FAI (or several versions or instances of it) would participate much like any other trader, but use the market feedback to refine its knowledge and procedures.
However, certain caprices of the market (January slump, that sort of thing) might lead to the FAI learning bad or irrelevant tenets (eg, "January is an evil time"). That pseudo-knowledge would cause sub-optimal decisions and would risk insane behavior (eg, "Forcibly sedate everyone during January").
So I think we'd want FAI trader(s) to be insulated from the less meaningful patterns of the market. I propose that FAIs would trade through a front end that only concerns itself with hedging against such patterns, and makes them irrelevant as far as the FAI can tell. Call it a "front-end AI". (Problems: Determining the right borderline as they both get more sophisticated. Who or what determines that, under what rules, and how could they abuse the power? Should there be just one front-end AI, arbitrarily many, or many but according to some governing rule?)
Secondly, the structure above might be an unusually safe architecture for FAI. Like, forever it is the rule that the only legitimate components are:
Problems: "log-rolling", where different components collude and thereby accidentally defeat the system. I don't see an exploit yet, but that doesn't mean there isn't one. Or is yet another separate mechanism needed for securing the system against collusion?
What becomes of the profit that these AIs make? Surely we don't put so much real spending power in their silicon hands. But then, all they can do is re-invest it. Perhaps the money ceases to be human-spendable money and becomes just tokens.
What if an FAI goes bankrupt, or becomes inordinately wealthy? I propose that the behavior be that of a population search algorithm (eg genetic algorithm, though it's not clear how or whether crossover should be used). Bankrupt FAIs, or even low-scoring ones, cease to exist, and successful ones reproduce.
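A minimal sketch of the population rule just proposed, with made-up agents and a stand-in fitness where trading profit would go; it shows only the bookkeeping (poor or bankrupt agents dropped, wealthy ones cloned with small mutations, no crossover), not anything about what the agents themselves do.

```python
import random

def evolve(population, wealth, mutate, n_keep):
    """One generation: drop the poorest agents, refill by cloning the richest
    with small mutations.  `population` is a list of agent parameters,
    `wealth[i]` is agent i's current trading balance."""
    ranked = sorted(range(len(population)), key=lambda i: wealth[i], reverse=True)
    survivors = [population[i] for i in ranked[:n_keep]]           # wealthy FAIs persist
    while len(survivors) < len(population):                        # bankrupt slots are refilled
        parent = random.choice(survivors[: max(1, n_keep // 2)])
        survivors.append(mutate(parent))                           # offspring of the successful
    return survivors

# Illustrative use with trivial "agents" (a single number) and a fake market
# that rewards agents close to some hidden target value.
target = 0.37
pop = [random.uniform(0, 1) for _ in range(20)]
for gen in range(50):
    wealth = [-abs(a - target) for a in pop]                       # stand-in for trading profit
    pop = evolve(pop, wealth, mutate=lambda a: a + random.gauss(0, 0.02), n_keep=10)
print(round(sum(pop) / len(pop), 3))                               # population drifts toward 0.37
```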
If FAIs are like persisting individuals, their hardware is an issue. Like, when a bankrupt FAI is replaced by a wealthy one's offspring, what if the bankrupt one's hardware just isn't fast enough? One proposal: it is all somehow hardware-balanced so that only the algorithms make a difference. Another proposal: FAIs (or another component that works with them) can buy and sell the hardware FAIs run on. Thus a bankrupt FAI's hardware is already sold. But then it is not so obvious how reproduction should be managed.
There's plenty more to be said about futurAIrchy but I've gone on long enough for now.