V_V comments on Futarchy and Unfriendly AI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (27)
"Unfriendly" is a tribal signal. The proper term is "unsafe", but I think that "evil" is a better approximation than "unfriendly" in its standard usage, as opposed to the non-standard usage invented by Yudkowsky.
I always thought that "evil" implies a malicious intention, while something "unfriendly" does harm without intending harm. Compare a standard B-movie rogue robot that hunts humans because of murderous "feelings" it developed out of revenge, fear, envy, or other anthropomorphic qualities, with the paperclip maximizer.
Calling something "evil" applies anthropomorphism to it.
It signals that you are talking about the thing this tribe is talking about.
No, it's a mere signal of allegiance, which you are using to try to shut up the outgroup.
It's like talking religion with a theist who complains that unless you are referring specifically to Elohim/Jesus/Allah/whatever then you couldn't possibly say anything meaningful about their religion.
I'm not criticizing semantics out of context to the argument he makes. It's a strawman to claim that everyone who says "evil AI" has nothing meaningful to say.
He speaks about how it's obvious that nobody funds an evil AI. For some values of "evil" that's true. On the other hand, those aren't the cases we worry about.