Followup to: Anthropomorphic Optimism
If you've watched Hollywood sci-fi involving supposed robots, androids, or AIs, then you've seen AIs that are depicted as "emotionless". In the olden days this was done by having the AI speak in a monotone - while perfectly stressing the syllables, of course. (I could similarly go on about how AIs that disastrously misinterpret their mission instructions never seem to need help parsing spoken English.) You can also show that an AI is "emotionless" by having it notice an emotion with a blatant somatic effect, like tears or laughter, and ask what it means (though of course the AI never asks about sweat or coughing).
If you watch enough Hollywood sci-fi, you'll run into all of the following situations involving supposedly "emotionless" AIs:
1. An AI that malfunctions or otherwise turns evil instantly acquires all of the negative human emotions - it hates, it wants revenge, and it feels the need to make self-justifying speeches.
2. Conversely, an AI that turns to the Light Side gradually acquires a full complement of human emotions.
3. An "emotionless" AI suddenly exhibits human emotion when under exceptional stress; e.g., an AI that displays no reaction to thousands of deaths suddenly shows remorse upon killing its creator.
4. An AI begins to exhibit signs of human emotion, and refuses to admit it.
Now, why might a Hollywood scriptwriter make those particular mistakes?
These mistakes seem to me to bear the signature of modeling an Artificial Intelligence as an emotionally repressed human.
At least, I can't seem to think of any other simple hypothesis that explains behaviors 1-4 above. The AI that turns evil has lost its negative-emotion-suppressor, so the negative emotions suddenly switch on. The AI that turns from mechanical agent to good agent gradually loses the emotion-suppressor keeping it mechanical, so the good emotions rise to the surface. Under exceptional stress, of course, the emotional repression that keeps the AI "mechanical" will immediately break down and let the emotions out. But if the stress isn't so exceptional, the firmly repressed AI will deny any hint of the emotions leaking out - that would conflict with the AI's self-image as emotionless.
It's not that the Hollywood scriptwriters are explicitly reasoning "An AI will be like an emotionally repressed human", of course; but rather that when they imagine an "emotionless AI", this is the intuitive model that forms in the background - a Standard mind (which is to say a human mind) plus an extra Emotion Suppressor.
Which all goes to illustrate yet another fallacy of anthropomorphism - treating humans as your point of departure, modeling a mind as a human plus a set of differences.
This is a logical fallacy because it warps Occam's Razor. A mind that entirely lacks chunks of brainware to implement "hate" or "kindness" is simpler - in the sense of requiring a shorter description - than a mind that has "hate" plus a "hate-suppressor", or "kindness" plus a "kindness-suppressor". But if you start out with a human mind, then adding an activity-suppressor is a smaller alteration than deleting the whole chunk of brain.
It's also easier for human scriptwriters to imagine themselves repressing an emotion, pushing it back, crushing it down, than it is for them to imagine deleting an emotion outright, so that it never comes back. The former is a mode that human minds can operate in; the latter would take neurosurgery.
But that's just a kind of anthropomorphism previously covered - the plain old ordinary fallacy of using your brain as a black box to predict something that doesn't work the way your brain does. Here, I want to talk about the formally different fallacy of measuring simplicity in terms of the shortest diff from "normality", i.e., what your brain says a "mind" does in the absence of specific instruction otherwise, i.e., humanness. Even if you can grasp that something doesn't have to work just like a human, thinking of it as a human-plus-diff will distort your intuitions of simplicity - your Occam-sense.
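To make the distortion concrete, here is a toy sketch of my own - not anything from the original argument, and with entirely made-up module names and sizes. It scores a mind's "simplicity" two ways: the total brainware needed to build it from scratch, versus the size of the patch applied to a human baseline, where the only patch your imagination supports is "add a suppressor".

```python
# Toy sketch (my own illustration, with made-up numbers) of two ways
# to score a mind's "simplicity": total brainware needed from scratch,
# versus the size of the patch applied to a human baseline when the
# only edit your imagination supports is "add a suppressor".

HUMAN_BASELINE = {   # hypothetical module sizes, in arbitrary units
    "vision": 40_000,
    "language": 60_000,
    "hate": 25_000,
    "kindness": 30_000,
}

SUPPRESSOR_SIZE = 500  # a small patch that clamps one module's output

def from_scratch_complexity(modules):
    """Occam-style measure: total size of every module the mind actually contains."""
    return sum(modules.values())

def human_plus_diff_complexity(target):
    """Anthropomorphic measure: cost of the edits applied to HUMAN_BASELINE.

    Intuition can imagine suppressing an emotion but not excising the
    brainware, so a missing module gets modeled as module + suppressor.
    """
    cost = 0
    for name in HUMAN_BASELINE:
        if name not in target:
            cost += SUPPRESSOR_SIZE   # "hate" becomes "hate + hate-suppressor"
    for name, size in target.items():
        if name not in HUMAN_BASELINE:
            cost += size              # genuinely new machinery must be paid for
    return cost

# A mind that simply never had a "hate" module:
no_hate_mind = {k: v for k, v in HUMAN_BASELINE.items() if k != "hate"}

print(from_scratch_complexity(no_hate_mind))      # 130000 - simpler than a human
print(from_scratch_complexity(HUMAN_BASELINE))    # 155000
print(human_plus_diff_complexity(no_hate_mind))   # 500 - but this measure quietly
                                                  # builds 155500 units of machinery
```

The from-scratch measure correctly rates the genuinely hate-free mind as the simpler object; the human-plus-diff measure rates it as "almost human, just add a suppressor" - which is exactly the emotionally-repressed-human model the scriptwriters fall into.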