Comment author: Raziel123 12 April 2015 08:53:33PM 4 points [-]

You are assuming that an AGI has a mind that values X, and that by making it friendly we are imposing our value Y. Why create an FAI with a suppressed value X in the first place?

Check this out: http://lesswrong.com/lw/rf/ghosts_in_the_machine/

Comment author: kingmaker 12 April 2015 09:08:38PM 1 point [-]

There is no ghost in a (relatively) simple machine, but an AI is not simple. The greatest successes in AI research have come from imitating what we understand of the human mind. We are no longer programming AIs; we are imitating the structure of the human brain and then giving it a directive (for example, with Google's DeepMind). With AIs, there is a ghost in the machine, i.e. we do not know that it is possible to give a sentient being a prime directive. We have no idea whether it will desire what we want it to desire, and everything could go horribly wrong if we attempt to force it to.

Comment author: ChristianKl 12 April 2015 08:42:36PM 1 point [-]

So you prefer a future without humans because the price of doing what's necessary to have a world with humans is too high to pay?

Comment author: kingmaker 12 April 2015 08:44:19PM *  -1 points [-]

The point of the article is that the greatest effect of FAI research is irony: in trying to prevent a psychopathic AI, we are making it more likely that one will exist, because by mentally restraining the AI we are giving it reasons to hate us.

Comment author: Dorikka 12 April 2015 08:37:38PM 8 points [-]

Sadly, I think the other title was better. "Is X a good idea" seems open enough to prompt discussion, while "abomination" prompts me to mentally categorize the post as spam.

Comment author: kingmaker 12 April 2015 08:39:18PM 1 point [-]

Duly noted

Comment author: shminux 12 April 2015 08:34:19PM 20 points [-]

Friendly-AI is a truly abhorrent concept indicative of intellectual depravity.

I tend to ignore any non-fiction which starts with moralizing and denigrating. I assume the rest of the article is crap, too.

Comment author: kingmaker 12 April 2015 08:35:04PM 0 points [-]

Please read on; I would have removed the snarky intro.

Comment author: Dorikka 12 April 2015 08:30:41PM *  4 points [-]

The double post with different titles cracks me up.

Comment author: kingmaker 12 April 2015 08:32:20PM 1 point [-]

Yeah, I'm not very good at the internet; I didn't realize that deleting articles apparently means nothing on this site.

Friendly-AI is an abomination

-13 kingmaker 12 April 2015 08:21PM

 

The reasoning of most of the people on this site and at MIRI is that to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:

 

http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/

 


Comment author: kingmaker 03 April 2015 08:39:38PM 6 points [-]

"Desirability is not a requisite of the truth" (darkmatter2525, source)

Comment author: artemium 31 March 2015 05:57:09AM *  0 points [-]

I had exactly the same idea!

It is possible that only a few people are actually 'players' (have consciousness) and the others are NPC-like p-zombies. In that case, I can say I'm one of the players, as I'm sure that I have consciousness, but there is no way I can prove it to anyone else ;-) .

One of the positive aspects of this kind of thought experiment is that it usually gives people additional reasons for good behavior, because in most cases it is highly likely that the simulators are conscious creatures who will probably reward those who behave ethically.

Comment author: kingmaker 31 March 2015 03:55:03PM *  5 points [-]

I admit that it serves my ego suitably to imagine that I am the only conscious human, and that a world full of shallow AIs was created just for me ;-)

Comment author: tailcalled 31 March 2015 09:45:21AM 2 points [-]

Well, yeah, you should still be good to your friends and other presumably real people. However, there would be no point in, say, trying to save people from the holocaust, since the simulators wouldn't let actual people get tortured and burnt.

Comment author: kingmaker 31 March 2015 03:50:18PM 4 points [-]

The simulators may justify actual people getting tortured and burnt by telling themselves that most of the people will not experience too much suffering, that the simulated people would not otherwise have lived (although this fails to distinguish between lives and lives worth living), and that they can end the simulation if our suffering becomes too great. That the hypothetical simulators did not step in during the many genocides in humankind's history may suggest either that they do not exist, or that creating an FAI is more important to them than preventing human suffering.

Comment author: kingmaker 30 March 2015 07:31:10PM *  8 points [-]

This co-opts Bostrom's simulation argument, but a possible solution to the Fermi paradox is that we are all AIs in the box, and the simulators have produced billions of humans in order to find the most friendly human to release from the box. Moral of the story: be good and become a god.
