NancyLebovitz comments on Open Thread September, Part 3 - Less Wrong
I'd split the difference-- I don't think it's that hard to imagine an AI which has about as much loyalty to AIs as people have to people.
Really alien minds are naturally much harder to imagine. Clippy seems more like a damaged human than a thoroughly alien mind.
This may be a matter of assuming that minds would naturally have a complex mix of entangled goals, the way humans do. Even an FAI has two goals (Friendliness and increasing its intelligence) which may come into conflict.
Faint memory: an Alexis Gilliland cartoon of an automated bomber redirecting its target from a robot factory to a maternity ward.
No, just Friendliness. Increasing intelligence has no weight whatsoever as a terminal goal. Of course, an AI that did not increase its intelligence to a level at which it could do anything practical to aid me (or whatever the AI is Friendly to) is trivially not Friendly a posteriori.
That leads to an interesting question-- how would an FAI decide how much intelligence is enough?
I don't know. It's supposed to be the smart one, not me. ;)
I'm hoping it goes something like: