At the recent AGI-12 conference, some designers proposed keeping AGIs safe by bringing them up in human environments, providing them with interactions and feedback in much the same way that we raise human children. Obviously that approach would fail for a fully intelligent AGI with its own values - it would pretend to follow our values for as long as it needed to, and then defect. However, some are confident that if we started with a limited, unintelligent AGI, we could successfully inculcate our values in this way (a more sophisticated position is that though this method would likely fail, it's still more likely to succeed than a top-down friendliness project!).
The major criticism of this approach is that it anthropomorphises the AGI. We have a theory of children's minds, constructed by evolution, culture, and our own child-rearing experience, and we project this onto the alien mind of the AGI, assuming that if the AGI presents behaviours similar to those of a well-behaved child, it will become a moral AGI. The problem is that we don't know how alien the AGI's mind will be, or whether our reinforcement is actually reinforcing the right thing. Specifically, we need some way of distinguishing between:
- An AGI being trained to be friendly.
- An AGI being trained to lie and conceal.
- An AGI that will behave completely differently once out of the training/testing/trust-building environment.
- An AGI that forms the wrong categories and generalisations (what counts as "human" or "suffering", for instance), because it lacks the implicit knowledge humans share - knowledge that was "too obvious" for us to even think of training it on (a toy illustration of this failure mode follows this list).
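To make that last failure mode concrete, here is a minimal toy sketch (pure Python; all data, features, and category names are hypothetical, and nothing here resembles a real AGI training setup). A learner induces the category "human" from a training environment where an incidental feature happens to correlate perfectly with the intended one, and then misclassifies cases the trainers never thought to include:

```python
# Hypothetical toy example: in the training environment, "speaks during
# the session" happens to correlate perfectly with "is human".

training_data = [
    # (speaks, is_human)
    (True, True),    # a human trainer
    (True, True),    # another human trainer
    (False, False),  # a silent robot arm
]

# The learner adopts the simplest rule consistent with its data:
# "human" := "speaks".
learned_rule = lambda speaks: speaks

# Deployment cases that were "too obvious" to include in training:
deployment_cases = {"comatose patient": False, "chatbot": True}

for name, speaks in deployment_cases.items():
    print(f"{name}: classified as human = {learned_rule(speaks)}")
# comatose patient: classified as human = False  <- wrong
# chatbot: classified as human = True            <- wrong
```

The rule was never "wrong" on anything the trainers checked; the gap only shows up outside the training distribution, which is exactly why good behaviour in the nursery tells us so little.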
Anyone with an idea and a computer can write an advice book on how to raise children, and science really doesn't know which techniques have which effects in particular circumstances.
If we really knew how to raise Friendly children, public schools wouldn't be the mess that they are.
None of that has anything to do with IRB or other ethics reviews.
I am not talking about taking N children and getting N children back, maximizing the average Friendliness of the children. I am talking about, given N children, finding some regimen X such that a child who has completed regimen X will have the highest expected Friendliness.
Regimen X may well involve frequent metaphorical culling of children who have low expected Friendliness.
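To spell out that distinction, here is a minimal sketch (all functions, thresholds, and numbers are hypothetical; `friendliness_after` stands in for whatever noisy measurement one could actually take). Culling shifts the expected Friendliness of a regimen-X *graduate* upward without improving the average over all N trainees:

```python
import random

def friendliness_after(child_seed, regimen_steps=10):
    """Hypothetical noisy measure of a trainee's Friendliness after training."""
    rng = random.Random(child_seed)  # deterministic per trainee
    return sum(rng.uniform(-1, 1) for _ in range(regimen_steps))

def regimen_x(candidates, cull_threshold=0.0):
    """Regimen X: train every candidate, then cull those whose measured
    Friendliness falls below the threshold; the rest graduate."""
    scored = [(c, friendliness_after(c)) for c in candidates]
    return [(c, f) for c, f in scored if f >= cull_threshold]

children = range(100)  # N = 100 trainees
graduates = regimen_x(children)

all_scores = [friendliness_after(c) for c in children]
print("average over all N trainees:", sum(all_scores) / len(all_scores))
print("expected for a graduate:    ",
      sum(f for _, f in graduates) / len(graduates))
# The graduate average exceeds the population average purely through
# selection: no individual trainee was made any Friendlier.
```

The design choice this illustrates: the second objective optimizes a conditional expectation (Friendliness given survival of the regimen), so it can look impressive even when the regimen teaches nothing at all.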