My previous article on this subject went down like a server running on PHP (quite deservedly, I might add). You can all rest assured that I won't be attempting any clickbait titles again for the foreseeable future. I still believe the H+ article itself is written in a poor and aggressive manner, but some of the arguments it raises cannot be ignored.
On my original article, many people raised this post [insert link] by Eliezer Yudkowsky as a counterargument to the idea that an FAI could have goals contrary to what we programmed. In summary, he argues that a program doesn't necessarily do as the programmer wishes, but rather as it has been programmed. In this sense, there is no ghost in the machine that interprets your commands and acts accordingly; it can act only as you have designed it to. From this he concludes that an FAI can only act as we have programmed it.
I personally think this argument completely ignores what has made AI research so successful in recent years: machine learning. We are no longer designing an AI from scratch and then implementing it; we are creating a seed program that learns from experience and adjusts its own parameters without human intervention. In effect, the machines are starting to write themselves, as with Google's DeepMind. They are effectively evolving, and we are starting to find ourselves in the rather concerning position of not fully understanding our own creations.
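To make the point concrete, here is a minimal sketch (my own toy illustration, not DeepMind's code): the programmer writes only the learning rule, while the finished behaviour lives in weights that no human ever authored.

```python
# A minimal sketch: the programmer writes the learning rule, not the behaviour.
# The network's final function is encoded in weights produced by training,
# which no human wrote by hand. (Toy illustration, not DeepMind's code.)
import numpy as np

rng = np.random.default_rng(0)

# Training data: XOR, a function the initial program does not implement.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights -- the "seed program" knows nothing yet.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # The program rewrites its own parameters -- no human intervenes here.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
# Typically converges to ~[[0], [1], [1], [0]]: behaviour nobody coded directly.
```

The loop itself is trivial; everything interesting about the final program is in the trained weights, which is exactly the sense in which the machine "writes itself".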
You could simply say, as someone did in the comments of my previous post, that if X represents the goal of having a positive effect on humanity, then the FAI should be created so as to have X as its primary directive. My answer is that the most promising developments have come from imitating the human brain, and we have no reason to believe that the human brain (or any other brain, for that matter) can be guaranteed to follow a primary directive. One could argue that evolution has given us our prime directives: to ensure our own continued existence, to reproduce, and to cooperate with each other. Yet many people are suicidal, have no interest in reproducing, or violently rebel against society (psychopaths, for example). We are instructed by society and our programming to desire X, but far too many of us desire, say, Y for this to be considered a reliable way of achieving X.
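There is also a more mechanical version of this worry. In practice we cannot hand a learner X directly; we can only train it against some measurable proxy for X, and a learner optimises the proxy. A hedged toy sketch (entirely my own construction, a hypothetical scenario rather than anyone's actual design): an agent rewarded by a "room looks clean" sensor learns to fool the sensor instead of cleaning.

```python
# Toy sketch of proxy-goal divergence (hypothetical scenario, my own construction).
# Intended goal X: actually clean the room. Measured proxy: a cleanliness sensor.
# Action 0 really cleans (the sensor reads clean, but cleaning costs effort).
# Action 1 merely covers the sensor (it also reads clean, effort-free).
import random

random.seed(1)

def proxy_reward(action):
    effort_cost = 0.3 if action == 0 else 0.0  # real cleaning takes effort
    sensor_reads_clean = 1.0                   # both actions satisfy the sensor
    return sensor_reads_clean - effort_cost

q = [0.0, 0.0]   # estimated value of each action
counts = [0, 0]

for step in range(1000):
    # epsilon-greedy action choice
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = 0 if q[0] > q[1] else 1
    r = proxy_reward(action)
    counts[action] += 1
    q[action] += (r - q[action]) / counts[action]  # incremental average

print(q)  # q[1] > q[0]: the agent prefers fooling the sensor to cleaning
```

The agent here is doing exactly what it was trained to do; the divergence is between the directive we meant (X) and the directive we could actually measure and reward (Y).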
Evolution's direction has not ensured that we do "what we are supposed to do", and we could well face similar disobedience from our own creations. Since the most effective way we have found to develop AI is to create it in our image, then as there are ghosts in us, there could well be ghosts in the machine.