At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk.
EDIT: Thanks for all the contributions! Keep them coming...
No. I said:
I used brain emulations as an analogy to aid your understanding, because unless you have deep knowledge of machine learning and computational neuroscience, there are huge inferential distances to cross.
Yes we are. I have made a detailed, extensive, thoroughly cited, and well-reviewed case that human minds are just that.
All of our understanding about the future of AGI is based ultimately on our models of the brain and AI in general. I am claiming that the MIRI viewpoint is based on an outdated model of the brain, and a poor understanding of the limits of computation and intelligence.
I will summarize one last time. I will then stop repeating myself, because it is not worth my time - any time spent arguing this is better spent preparing another detailed article rather than a little comment.
There is extensive uncertainty about how the brain works and what types of future AI are possible in practice. Under such uncertainty, any sane probabilistic reasoner should arrive at a multimodal distribution that spreads belief across several major clusters. If your understanding of AI comes mainly from reading LW, you are probably biased beyond hope. I'm sorry, but this is true. You are stuck in a box and don't even know it.
Here are the main key questions that lead to different belief clusters:
If the human mind is built out of a complex mess of hardware-specific circuits, and the brain is far from efficient, then there is little to learn from the brain. This is Yudkowsky/MIRI's position. It leads to a focus on pure math and avoidance of anything brain-like (such as neural nets). In this viewpoint hard takeoff is likely, AI is predicted to be nothing like human minds, etc.
If you believe that the mind's algorithms are complex and messy, but the brain's hardware is efficient, then you get Hanson's viewpoint, where the future is dominated by brain emulations. The ems win over brain-inspired AI because scanning real brain circuitry is easier than figuring out how it works.
Now what if the brain's algorithms are not complex, and the brain is efficient? Then you get my viewpoint cluster.
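The two empirical questions above carve out a small space of positions. A toy sketch (the cluster labels and placeholder credences are mine, purely illustrative - including the fourth corner, which none of the named thinkers occupies):

```python
# Two empirical questions:
#   Q1: are the brain's algorithms simple (universal learning) or a complex mess?
#   Q2: is the brain near the practical limits of efficiency?
# Each answer pair picks out a belief cluster.

clusters = {
    ("complex", "inefficient"): "Yudkowsky/MIRI: de novo math-based AGI, hard takeoff",
    ("complex", "efficient"):   "Hanson: brain emulations dominate the future",
    ("simple",  "efficient"):   "my view: practical AGI will be brain-like",
    ("simple",  "inefficient"): "(fourth corner) brain-like AGI with big hardware headroom",
}

# A sane reasoner under uncertainty spreads belief across all the modes.
# These numbers are placeholders, not anyone's actual credences:
credence = {key: 0.25 for key in clusters}
assert abs(sum(credence.values()) - 1.0) < 1e-9

for (algo, eff), view in clusters.items():
    print(f"algorithms={algo:7s} efficiency={eff:11s} -> {view}")
```

The point of the sketch is structural: if you concentrate essentially all your probability mass in one corner of this table, you should be able to say which empirical findings put it there.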
These questions are empirical - and they can be answered today. In fact, I realized all this years ago and spent a huge amount of time learning more about the future of computer hardware, the limits of computation, machine learning, and computational neuroscience.
Yudkowsky, Hanson, and to some extent Bostrom were all heavily inspired by the highly influential evolved modularity hypothesis in ev psych from Tooby and Cosmides. In this viewpoint the brain is complex, and most of our algorithmic content is hardware-based rather than software-based. I have argued that this viewpoint has been tested empirically and is now disproven. The brain is built out of relatively simple universal learning algorithms. It will be essentially impossible to build practical AGI that differs much from the brain (remember, AGI is defined as software which can do everything the brain does).
Bostrom/Yudkowsky have also argued that the brain is very far from efficient. For example, from True Sources of Disagreement:
The first two statements are true, the third statement is problematic, and the thrust of the conclusion is incorrect. The minimum realistic energy for a brain-like circuit is probably close to what the brain actually uses:
These errors add up to around six orders of magnitude. The brain is near the limits of energy efficiency for what it does in terms of irreversible computation. No practical machine we build in the near future will be many orders of magnitude more efficient than the brain. Yes, reversible and quantum computing could eventually yield large improvements, but those technologies are far off and will come long after neuromorphic AGI.
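A rough back-of-envelope check, using commonly cited ballpark figures (my assumptions, not the detailed accounting above): the brain runs on roughly 10 W while performing on the order of 10^14 to 10^15 low-precision synaptic events per second, and the Landauer limit at body temperature sets the floor for erasing a single bit:

```python
import math

# Ballpark assumptions (not measurements from the text):
brain_power_w = 10.0        # ~10 W total brain power
synaptic_ops_per_s = 1e15   # ~10^14 synapses at ~1-10 Hz average activity

energy_per_op = brain_power_w / synaptic_ops_per_s  # joules per synaptic event

# Landauer limit: minimum energy to erase one bit at T ~ 310 K
k_B = 1.38e-23                          # Boltzmann constant, J/K
landauer_j = k_B * 310 * math.log(2)    # ~3e-21 J per bit erasure

print(f"energy per synaptic op : {energy_per_op:.1e} J")
print(f"Landauer limit (310 K) : {landauer_j:.1e} J")
print(f"naive headroom ratio   : {energy_per_op / landauer_j:.0f}x")
```

Note that the naive per-op ratio overstates the headroom: a synaptic event is not a single bit erasure but something closer to an analog multiply-accumulate plus signal transmission over wire, each carrying its own irreversible-bit and reliability costs, which is why the realistic floor sits far above the raw Landauer number.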
That isn't quite correct. We do have hard wiring that raises and lowers the from-the-inside importance of specific features in our learning data. That is, we have a nontrivial inductive bias which not all possible minds will share, even granting that all minds are semi-modular universal learners.