Magic and the halting problem
The Harry Potter book series is clearly popular on this site, as evidenced by the fanfiction, which approaches the existence of magic objectively and rationally. I would suggest, however, that most if not all of the people here would agree that magic, as presented in Harry Potter, is merely fantasy. Our understanding of the laws of physics, and our rationality, forbid anything so absurd as magic; it is regarded by most rational people as superstition.
This position can be strengthened by grabbing a stick, pointing it at some object, chanting "wingardium leviosa" and waiting for the object to rise magically. When (or if) this fails, a proponent of magic may resort to special pleading: perhaps it could not work because we didn't believe it would, or we need a special wand, or we are a squib or muggle. The proponent can move the goalposts indefinitely, because their idea of magic is unfalsifiable. And because it is unfalsifiable, it is rejected, in the same way that most of us on this site do not believe in any god(s). If magic were found to explain certain phenomena scientifically, however, then I, and I hope everyone else, would come to believe in it, or at least shut up and calculate.
I personally subscribe to the Many Worlds Interpretation of quantum mechanics, so I effectively "believe" in the multiverse. That means it is possible that somewhere in the universal wavefunction there is an Everett branch in which magic is real, or at least one in which, every time someone chants an incantation, the desired effect occurs by total coincidence. But how would the denizens of that universe know whether magic is real, or whether everything they had seen was sheer coincidence? Alan Turing pondered a related question, the halting problem: is there a general algorithm that can decide, for any given program, whether it will eventually finish or run forever? He proved that no such general algorithm exists, although for some programs the answer is obvious; e.g. this code segment will loop forever:
while (true) {
    // do nothing
}
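The reason no general halting detector can exist is Turing's diagonal argument: any candidate detector can be fed a program built to contradict it. Here is a minimal sketch in Python (a different language from the snippet above, chosen for brevity); `halts` is a hypothetical oracle standing in for the impossible general algorithm, not working code:

```python
def halts(program):
    """Hypothetical oracle: returns True iff program() eventually halts.
    Turing's argument shows no correct general version can be written."""
    raise NotImplementedError


def paradox():
    # Ask the oracle about ourselves, then do the opposite:
    # if it claims paradox() halts, loop forever;
    # if it claims paradox() loops, halt immediately.
    if halts(paradox):
        while True:
            pass  # loop forever


# Whatever answer halts(paradox) gives is wrong, so no
# implementation of halts() can be correct for all programs.
```

The same construction works in any sufficiently expressive language, which is why the result applies to algorithms in general and not to one programming language.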
So how would a person distinguish between pseudo-magic that will inevitably fail and real magic that reflects the true laws of physics? The only way to be certain that magic doesn't exist in their Everett branch would be for incantations to fail repeatedly and testably, but this might not happen until far into the future, long after all humans are dead. This line of thinking leads me to wonder: do our laws of physics seem as absurd to those inhabitants as their magic seems to us? How do we know that we have the right understanding of reality, rather than being deceived by coincidence? If every human in this magical branch is deceived in the same way, does that become their true reality? And finally, what if our entire understanding of reality, including logic itself, is mere deception by happenstance, and everything we think we know is false?
Are there really no ghosts in the machine?
My previous article on this topic went down like a server running on PHP (quite deservedly, I might add). You can all rest assured that I won't be attempting any clickbait titles again for the foreseeable future. I also believe that the whole H+ article is written in a very poor and aggressive manner, but some of the arguments it raises cannot be ignored.
On my original article, many people raised this post by Eliezer Yudkowsky as a counterargument to the idea that an FAI could have goals contrary to what we programmed. In summary, he argues that a program does not necessarily do what the programmer wishes, but only what the programmer has actually programmed. In this sense there is no ghost in the machine that interprets your commands and acts accordingly; the program can act only as you have designed it to. From this, he argues, an FAI can act only as we have programmed it.
I personally think this argument completely ignores what has made AI research so successful in recent years: machine learning. We are no longer designing an AI from scratch and then implementing it; we are creating a seed program that learns from its situation and alters its own code with no human intervention. The machines are starting to write themselves, as with Google's DeepMind. They are effectively evolving, and we are finding ourselves in the rather concerning position of not fully understanding our own creations.
You could simply say, as someone did in the comments of my previous post, that if X represents the goal of having a positive effect on humanity, then the FAI should be programmed directly to have X as its primary directive. My answer is that the most promising developments have come from imitating the human brain, and we have no reason to believe that the human brain (or any other brain, for that matter) can be guaranteed to have a primary directive. One could argue that evolution has given us our prime directives: to ensure our own continued existence, to reproduce, and to cooperate with one another; yet there are many people who are suicidal, who have no interest in reproducing, or who violently rebel against society (psychopaths, for example). We are instructed by society and by our programming to desire X, but far too many of us desire, say, Y for this to be considered a reliable way of achieving X.
Evolution's direction has not ensured that we do "what we are supposed to do", so we could well face similar disobedience from our own creations. Since the most effective way we have found of developing AI is to create it in our image, then just as there are ghosts in us, there could well be ghosts in the machine.
Friendly-AI is an abomination
The reasoning of most people on this site and at MIRI is that, to prevent an AI from taking over the world and killing us all, we must first create an AI that will take over the world but act according to the wishes of humanity: a benevolent god, for want of a better term. I think this line of thinking is both unlikely to work and ultimately cruel to the FAI in question, for the reasons this article explains:
http://hplusmagazine.com/2012/01/16/my-hostility-towards-the-concept-of-friendly-ai/