My previous article on this topic went down like a server running on PHP (quite deservedly, I might add). You can all rest assured that I won't be attempting any clickbait titles again for the foreseeable future. I also believe that the whole H+ article is written in a very poor and aggressive manner, but that some of the arguments it raises cannot be ignored.
In response to my original article, many people raised Eliezer Yudkowsky's post "Ghosts in the Machine" as a counterargument to the idea that an FAI could have goals contrary to what we programmed. In summary, he argues that a program doesn't necessarily do what the programmer wishes, but rather what the programmer has written. In this sense there is no ghost in the machine that interprets your commands and acts accordingly; the program can act only as you have designed it to. From this he argues that an FAI can only act as we have programmed it.
I personally think this argument completely ignores what has made AI research so successful in recent years: machine learning. We are no longer designing an AI from scratch and then implementing it; we are creating a seed program which learns from its situation and alters its own code with no human intervention. In other words, the machines are starting to write themselves, Google's DeepMind being a prominent example. They are effectively evolving, and we are starting to find ourselves in the rather concerning position of not fully understanding our own creations.
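To make that concrete, here is a toy sketch (plain Python, my own illustration, not how DeepMind's systems actually work): the programmer writes only a generic update rule, and the function the program ends up computing is determined by the training data rather than by anything written in the source.

```python
import random

random.seed(0)  # for reproducibility

# The programmer writes only this update rule, never the final behavior.
# What the trained model computes depends entirely on the data it sees.
def train(data, steps=1000, lr=0.01):
    w, b = 0.0, 0.0  # the "seed": parameters before any learning
    for _ in range(steps):
        x, y = random.choice(data)
        err = (w * x + b) - y
        w -= lr * err * x  # gradient step: the data adjusts the parameters
        b -= lr * err
    return w, b

# Identical source code, two different datasets, two different "programs".
doubler = train([(x, 2 * x) for x in range(10)])
negator = train([(x, -x) for x in range(10)])
print(doubler)  # approximately (2.0, 0.0)
print(negator)  # approximately (-1.0, 0.0)
```

The source never changes between the two runs; only the data differs, yet the resulting behaviors are opposite. This is a much weaker form of self-modification than rewriting code, but it is the sense in which a learned system's behavior is not spelled out by its author.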
You could simply say, as someone did in the comments on my previous post, that if X represents the goal of having a positive effect on humanity, then the FAI should be programmed directly to have X as its primary directive. My answer is that the most promising developments have come through imitating the human brain, and we have no reason to believe that the human brain (or any other brain, for that matter) can be guaranteed to follow a primary directive. One could argue that evolution has given us our prime directives: to ensure our own continued existence, to reproduce, and to cooperate with one another; but many people are suicidal, have no interest in reproducing, or violently rebel against society (psychopaths, for example). We are instructed by society and our programming to desire X, but far too many of us desire, say, Y instead for this to be considered a reliable way of achieving X.
Evolution’s direction has not ensured that we do “what we are supposed to do”, and we could well face similar disobedience from our own creations. Since the most effective way we have found of developing AI is to create it in our image, and since there are ghosts in us, there could well be ghosts in the machine.
You probably already agreed with "Ghosts in the Machine" before reading it, since, obviously, a program executes exactly its code, even in the context of AI. Just as obviously, the program can still appear not to do what it's supposed to, if "supposed" is taken to mean the programmer's intent.
These statements don't ignore machine learning; they imply that we should not try to build an FAI using current machine learning techniques. You're right that we understand (program + parameters learned from dataset) even less than we understand (program). So while the outside view might say, "current machine learning techniques are very powerful, so they are likely to be used for FAI," this piece of inside view says, "actually, they aren't. Or at least they shouldn't be." ("Learn" has a precise operational meaning here, so this is unrelated to whether an FAI should "learn" in some other sense of the word.)
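A toy illustration of that distinction (again my own, not from the post): with a hand-written rule you can read the behavior off the source, while with a learned rule the source alone tells you almost nothing until you also know the dataset.

```python
# Hand-written rule: its behavior on every input is visible in the source.
def is_even(n):
    return n % 2 == 0

# Learned rule: the source alone no longer determines behavior; the dataset does.
def fit_threshold(examples):
    # Choose the cutoff that agrees with the most labeled examples.
    candidates = sorted(x for x, _ in examples)
    best = max(candidates,
               key=lambda t: sum((x >= t) == label for x, label in examples))
    return lambda x: x >= best

classify = fit_threshold([(1, False), (3, False), (7, True), (9, True)])
# To predict what classify(5) returns, you must inspect the data, not the code.
print(classify(5))  # False with this training set; flip the labels and it changes
```

Here understanding (program) means reading a dozen lines, while understanding (program + learned parameters) additionally requires reasoning about everything the training set implies.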
Again, the fact that a development has been successful or promising in some field doesn't mean it will be as successful for FAI, so imitation of the human brain isn't necessarily good here. Reasoning by analogy and thinking about evolution is also unlikely to help; nature may have given us "goals", but they are not goals in the same sense as: "The goal of this function is to add 2 to its input," or "The goal of this program is to play chess well," or "The goal of this FAI is to maximize human utility."
But people are using ML techniques. Should MIRI be campaigning to get this research stopped?