by [anonymous]
1 min read · 12th Apr 2015 · 2 comments


I know that asking this question on this site is tantamount to heresy, and I know that the intentions are pure: to save uncountable human lives. But I would say that we are allowing ourselves to become blinded to what we are actually proposing when we talk of building an FAI. The reasoning of most people on this site and at MIRI is that, to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity; a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:


http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/


Comments (2)

Guess we're A/B testing titles now.

So you prefer a future without humans because the price of doing what's necessary to have a world with humans is too high to pay?