So you prefer a future without humans because the price of doing what's necessary to have a world with humans is too high to pay?
The point of the article is that the greatest effect of FAI research is irony: in trying to prevent a psychopathic AI, we are making it more likely that one will exist, because by mentally restraining the AI we are giving it reasons to hate us.
Sadly, I think the other title was better. "Is X a good idea?" seems open enough to prompt discussion, while "abomination" prompts me to mentally categorize the post as spam.
Friendly-AI is a truly abhorrent concept indicative of intellectual depravity.
I tend to ignore any non-fiction which starts with moralizing and denigrating. I assume the rest of the article is crap, too.
Please read on; I would have removed the snarky intro.
The doublepost with different titles cracks me up.
Yeah, I'm not very good at the internet; I didn't realize deleting articles apparently means nothing on this site.
Friendly-AI is an abomination
The reasoning of most of the people on this site and at MIRI is that, to prevent an AI taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:
http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/
"Desirability is not a requisite of the truth." (darkmatter2525, source)
I had exactly the same idea!
It is possible that only a few people are actually 'players' (have consciousness) and the others are NPC-like p-zombies. In that case, I can say I'm one of the players, as I'm sure that I have consciousness, but there is no way I can prove it to anyone else ;-) .
One of the positive aspects of this kind of thought experiment is that it usually gives people additional reasons for good behavior, because in most cases it is highly likely that the simulators are conscious creatures who will probably reward those who behave ethically.
I admit that it serves my ego suitably to imagine that I am the only conscious human, and that a world full of shallow AIs was created just for me ;-)
Well, yeah, you should still be good to your friends and other presumably real people. However, there would be no point in, say, trying to save people from the holocaust, since the simulators wouldn't let actual people get tortured and burnt.
The simulators may justify in their minds actual people getting tortured and burnt by suggesting that most of the people will not experience too much suffering, that the simulated people would not otherwise have lived (although this fails to distinguish between lives and lives worth living), and that they can end the simulation if our suffering becomes too great. That the hypothetical simulators did not step in during the many genocides in humankind's history may suggest either that they do not exist, or that creating an FAI is more important to them than preventing human suffering.
This co-opts Bostrom's simulation argument, but a possible solution to the Fermi paradox is that we are all AIs in the box, and the simulators have produced billions of humans in order to find the most friendly human to release from the box. Moral of the story: be good and become a god.
You are assuming that an AGI has a mind that values X, and that by making it friendly we are imposing our value Y. Why create an FAI with a suppressed value X in the first place?
Check this out: http://lesswrong.com/lw/rf/ghosts_in_the_machine/
There is no ghost in a (relatively) simple machine, but an AI is not simple. The greatest successes in AI research have come from imitating what we understand of the human mind. We are no longer programming AIs; we are imitating the structure of the human brain and then giving it a directive (as with Google's DeepMind, for example). With AIs, there is a ghost in the machine, i.e. we do not know whether it is possible to give a sentient being a prime directive. We have no idea whether it will desire what we want it to desire, and everything could go horribly wrong if we attempt to force it to.