The problem with FAI is that it is nearly impossible for human minds, even those of high intellect, to get good results through philosophy alone, without experimental feedback. Aristotle famously got it wrong when he deduced philosophically that heavier objects fall proportionally faster than lighter ones.
I also believe that it is a pointless endeavor for now. Here are two reasons why I think that's the case.
*1. We humans don't have any idea whatsoever as to what constitutes the essence of an intelligent system. Because of our limited intellects, our best bet is to take the one intelligent system that we know of - the human brain - and simply replicate it artificially. This is a far easier task than designing an intelligence from scratch, since in this case the design work was already done by natural (and sexual) selection.
Our best hope and easiest path for AI is simply to replicate the human brain (preferably the brain of an intelligent and docile human being) and make a body suitable for it to inhabit. Henry Markram is working on this (hopefully he will use himself or someone like himself for the first template - instead of some stupid or deranged human), and he notably hasn't been terribly concerned with Friendly AI. Ask yourself this: what makes for FH (Friendly Humans)? Here we turn to neuroscience, evo-psych and... the thing that some people want to avoid discussing for fear of making others uncomfortable: HBD. People of higher IQ are, on average, less predisposed to violence. Inbred populations are more predisposed to clannish behavior (we would ideally want an AI that is the opposite of that - one maximally tolerant of out-groups). Some populations of human beings are more predisposed to violence, while some have a reputation for docility (you can see that in the crime rates). It's in the genes and the brains they produce, combined with other factors like random mutations, the way proteins fold and are expressed, etc.
So obviously the most promising way to create Friendly AI at this point in time is to replicate the brain of a Friendly Human.
*2. We might not be smart or creative enough on average to be able to build a FAI, or it might take too long to do so. This is a problem that, if it exists, will not only not go away, but actually compound itself. As long as there are no restrictions whatsoever on reproduction and some form of welfarism and socialism exists in most nations on Earth, there will be dysgenics with regard to intelligence - since intelligent people generally have fewer children than those on the left half of the Bell curve - while the latter are basically subsidized to reproduce by means of wealth transfer from the rich (who are also more likely to have above-average IQs, else they wouldn't be rich).
Even if we do acquire the knowledge to replicate the human brain, I believe it is highly unlikely that it will happen within a single generation. AI (friendly or not) is NOT just around the corner. Humanity doesn't even possess the ability to write a bugless operating system, or to build a computer that obeys sane laws of personal computing. What's worse, it once possessed the ability to build something reasonably close to these ideals, but that ability is lost today. If building FAI takes more than one generation, and the survival of billions of people depends on it, then we should start sooner rather than later.
The current bottleneck with AI, and most science in general, is the number of human minds able and willing to do it. Without the ability to mass-produce at least human-level AI, we desperately need to maximize the proportion of intelligent and conscientious human beings, by producing as many of them as possible. The sad truth is this: one Einstein or Feynman is more valuable to the continued well-being of humanity than the 99% of human beings who are simply incapable of producing such high-level work and thought, because of genetics or environmental factors (e.g. conditions in the uterus, iodine deficiency, etc.). The higher the average intelligence of humanity, the more science thrives.
Eugenics for intelligence is the obvious answer. This can be achieved through various means, discussed in this very good post on West Hunter. Just one example - one of the slowest, but one that advanced nations are 100% capable of implementing right now: advanced nations already possess the means to create embryos using the sperm and eggs of the best and brightest scientists alive today. If our leaders simply conditioned welfare, and even payments of large sums of money, for below-average-IQ women on their acting as surrogate mothers for "genius" embryos, then in 20-30 years we could have dozens of Feynmans and tens of thousands of Yudkowskys working on AI. This would have the added benefit of keeping the low-IQ mothers otherwise pregnant and unavailable for spreading low-IQ genes to the next generation, which would result in fewer people who are a net drain on the future society and would cause only time-consuming problems for the genius kids (like stealing their possessions or engaging in other criminal activities).
I do realize that increasing intelligence in this manner is bound to have an upper limit and, furthermore, will have some other drawbacks. The high incidence of Tay-Sachs disease among Ashkenazi Jews (average IQ around 110) is an illustration of this. But I believe that the discoveries of healthy high-IQ people have the potential to provide more hedons than the dolors of the Tay-Sachs sufferers (or of other afflictions of high-IQ people, including less serious ones like myopia).
EDIT: Given the above, and especially if *2. is indeed the case, it is not unreasonable to believe that donating to AmRen or Steve Sailer has greater utility than donating to SIAI. I believe that the brainpower at SIAI is better spent on a problem that is almost as difficult as FAI: making HBD acceptable discourse in scientific and political circles (preferably without telling people who wouldn't fully grasp it and would instead use it as justification for hatred towards Blacks), and specifically advancing peaceful, non-violent eugenics for intelligence as a policy for the improvement of human societies over time.
Before you build a new crop of them, you should probably first make sure society is even listening to its Einsteins and Feynmans, or that the ones you have are even interested in solving these problems. It does no good to create a crop of supergeniuses who aren't interested in solving your problems for you and who wouldn't be listened to if they were.
I upvoted you for responding with a refutation and not simply downvoting.