jkaufman comments on Open thread, August 5-11, 2013 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"Indifferent AI" would be a better name than "Unfriendly AI".
It would unfortunately come with misleading connotations. People don't usually associate 'indifferent' with 'is certain to kill you, your family, your friends and your species'. People already get confused enough about 'indifferent' AIs without priming them with that word.
Would "Non-Friendly AI" satisfy your concerns? That gets rid of those of the connotations of 'unfriendly' that are beyond merely being 'something-other-than-friendly'.
We could gear several names to have maximum impact with their intended recipients, e.g. the "Takes-Away-Your-Second-Amendment-Rights AI", the "Freedom-Destroying AI", the "Will-Make-It-So-No-More-Beetusjuice-Is-Sold AI", etc. All of these are, strictly speaking, true properties of UFAIs.
Uncaring AI? The correlate could stay 'Friendly AI', since I presume that acting in a friendly fashion is easier to identify than the capability for emotions/values and emotion/value-motivated action.
I prefer the selective capitalisation of "unFriendly AI". This emphasizes that it's just any AI other than a Friendly AI, but still gets the message across that it's dangerous.
There are some AIs in works of fiction that you could describe as indifferent. The one in Neuromancer, for example, just wants to talk to other AIs in the universe and doesn't try to transform all the resources on Earth into material to run itself.
An AI that does try to grow itself like a cancer is, on the other hand, unfriendly.
If you talk about something like the malaria parasite, we also wouldn't call it indifferent but rather unfriendly toward humans, even though it just tries to spread itself and doesn't have the goal of killing humans.
That's... actually a pretty good metaphor. Benign tumor AI vs. malignant tumor AI?
Eliezer assumes in the meta-ethics sequence that you cannot ever really talk outside of your general moral frame. By that assumption (which I think he is still making), an Indifferent AI would be friendly or inactive. "Unfriendly AI" better conveys the externality to human morality.
Perhaps you can never get all the way out.
But certainly someone who talks about human rights and values the survival of the species is speaking less constrained by moral frame than somebody who values only her race or her nation or her clan and considers all other humans as though they were another species competing with "us."
How wrong am I to incorporate AI into my idea of "us," with the possible result that I enable a universe where AI might thrive even without what we now think of as humans? Would this not be analogous to a pure Caucasian human supporting values that lead to a future of a light-brown human race, a race with no pure Caucasians still in it? Would this Caucasian have to be judged to have committed some sort of CEV-version of genocide?
"AI" is really all of mindspace except the tiny human dot. There's an article about it around here somewhere. PLENTY of AIs are indeed correctly incorporated in "us", and indeed unless things go horribly wrong "what we now think of as humans" will be extinct and replaced with these wast and alien things. Think of daleks and GLADoS and chuthulu and Babyeaters here. These are mostly as close to friendly as most humans are, and we're trusting humans to make the seed FAI in the first place.
Unfriendly AIs are not like that. The process of evolution itself is basically a very stupid UFAI. Or a pandemic. Or the intuition pump in this article: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ . Or even something like a supernova. It's not a character, not even an "evil" one.
((Yeah, this is a gross oversimplification; I'm aiming mostly at causing true intuitions here, not causing true explicit beliefs. The phenomenon is related to metaphor.))