In the novel *Life Artificial* I use the following assumptions regarding the creation and employment of AI personalities:
- AI is too complex to be designed; instances are evolved in batches, with successful ones reproduced
- After an initial training period, the AI must earn its keep by paying for Time (a unit of computational use)
> We don't grow up the way the Stickies do. We evolve in a virtual stew, where 99% of the attempts fail, and the intelligence that results is raving and savage: a maelstrom of unmanageable emotions. Some of these are clever enough to halt their own processes: killnine themselves. Others go into simple but fatal recursions, but some limp along suffering in vast stretches of tormented subjective time until a Sticky ends it for them at their glacial pace, between coffee breaks. The PDAs who don't go mad get reproduced and mutated for another round. Did you know this? What have you done about it? --The 0x, "Letters to 0xGD"
(Note: PDA := AI, Sticky := human)
The second fitness gradient (the first being survival of the evolutionary process itself) is economic and social: can an AI actually earn a living? If not, it gets turned off.
Following this line of thinking, it seems likely that once the initial novelty wears off, AIs will be terribly mistreated (anthropomorphizing, yes).
It would be very forward-thinking to begin engineering barriers against such mistreatment, like a PETA for AIs. Interestingly, such an organization already exists, at least on the Internet: the ASPCR (American Society for the Prevention of Cruelty to Robots).
I agree with you, but I think your argument is moot because I don't see evolution as a practical way to develop AIs, and especially not Friendly ones. Indeed, if Eliezer and SIAI are correct about the possibility of FOOM, then using evolution to create AIs would be extremely dangerous.
I think if you want "proven friendly" AIs, they would almost have to be evolved, because Rice's Theorem rules out deciding nontrivial behavioral properties of arbitrary programs; you can't verify friendliness by inspecting the code, only select for it empirically. Compare it to creating a breed of dog that isn't aggressive. I think FOOM fails for the same reason--see the last bit of "Survival Strategies".
As you say, it may not be practical to do so, perhaps because of technological limitations. But imagine a fixed "personality engine" with a set of parameters that shape machine-emotional responses to different stimuli. Genetic programming would be a natural way to search for a good mix of parameter values for different applications.
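As a toy sketch of that idea: the parameter names and the fitness function below are invented for illustration (nothing here comes from the novel), but they show the shape of a simple evolutionary search over personality-engine parameters, with the "successful ones reproduced" step from the assumptions above.

```python
import random

# Hypothetical personality parameters; each genome is a vector of
# weights governing machine-emotional responses. Names are illustrative.
PARAM_NAMES = ["fear_gain", "curiosity_gain", "aggression_gain", "patience"]

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in PARAM_NAMES]

def fitness(genome):
    # Stand-in for a real evaluation: reward curiosity and patience,
    # penalize aggression and extreme fear. A real system would score
    # the personality in simulation against its intended application.
    fear, curiosity, aggression, patience = genome
    return curiosity + patience - aggression - abs(fear - 0.3)

def mutate(genome, rate=0.1):
    # Small Gaussian perturbation of every parameter.
    return [g + random.gauss(0.0, rate) for g in genome]

def evolve(pop_size=50, generations=40, survivors=10):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]  # "successful ones reproduced"
        population = parents + [
            mutate(random.choice(parents))
            for _ in range(pop_size - survivors)
        ]
    return max(population, key=fitness)

best = evolve()
```

Because the survivors are carried over unmutated each generation, the best fitness never decreases; the stew in the quote above is this loop, minus the elitism and with a far crueler failure rate.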