That we want to stop AI research. We don't. Current AI research is very far from the risky areas and abilities, and it's risk-aware AI researchers who are most likely to figure out how to make safe AI.
Is it really the case that nobody interested in AI risk/safety wants to stop or slow down progress in AI research? It seemed to me there was at least a substantial minority that wanted to do this, in order to buy time.
I am one of those who favor stopping all AI research, and I will explain why.
(1) Don't stand too close to the cliff. We don't know how AGI will emerge, and by the time we are close enough to know, it will probably be too late: either human error or malfeasance will take us over the edge.
(2) Friendly AGI might be impossible. Computer scientists cannot, in general, predict the behavior of even simple programs. The halting problem, a specific kind of behavioral prediction, is provably undecidable in general. I doubt we'll even grasp why the first AGI we build works.
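To make the point concrete, here is a minimal sketch (the function name and step cap are mine, for illustration): the Collatz map is a few lines of arithmetic, yet whether the loop terminates for every positive integer is a famous open problem. Even trivially short programs can resist prediction.

```python
def collatz_steps(n, limit=100_000):
    """Count iterations of the Collatz map (n -> n/2 if even, 3n+1 if odd)
    until n reaches 1, giving up after `limit` steps.

    Whether this loop halts for *every* positive starting n is the open
    Collatz conjecture: nobody has proved it, despite the code's simplicity.
    """
    steps = 0
    while n != 1 and steps < limit:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # the well-known example: 27 takes 111 steps
```

Verifying any particular input is easy; proving the behavior for all inputs is, so far, beyond us, which is exactly the gap between running a program and understanding it.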
Neither of t...
Following some misleading articles quoting me, I thought I'd present the top 9 myths about the AI risk thesis: