The new paper by Stuart Armstrong (FHI) and Kaj Sotala (SI) has now been published (PDF) as part of the Beyond AI conference proceedings. Some of these results were previously discussed here. The original predictions data are available here.
Abstract:
This paper will look at the various predictions that have been made about AI and propose decomposition schemas for analysing them. It will propose a variety of theoretical tools for analysing, judging and improving these predictions. Focusing specifically on timeline predictions (dates given by which we should expect the creation of AI), it will show that there are strong theoretical grounds to expect predictions to be quite poor in this area. Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions. Predictions that AI lie 15 to 25 years in the future are the most common, from experts and non-experts alike.
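As a rough illustration of the kind of tabulation the abstract describes, here is a minimal sketch of how one might compute the "years until AI" implied by each timeline prediction and compare experts with non-experts. The file name `predictions.csv` and its columns (`prediction_year`, `predicted_ai_year`, `expert`) are hypothetical stand-ins, not the actual format of the published dataset.

```python
# Minimal sketch: tabulate the horizon (predicted AI year minus year the
# prediction was made) and compare expert vs. non-expert distributions.
# The CSV layout assumed here is hypothetical, not the dataset's real format.
import csv
from collections import Counter

def horizon_counts(rows, bin_width=5):
    """Count predictions by horizon, grouped into bins of `bin_width` years."""
    counts = Counter()
    for row in rows:
        horizon = int(row["predicted_ai_year"]) - int(row["prediction_year"])
        counts[bin_width * (horizon // bin_width)] += 1
    return counts

with open("predictions.csv", newline="") as f:
    rows = list(csv.DictReader(f))

experts = [r for r in rows if r["expert"] == "yes"]
non_experts = [r for r in rows if r["expert"] == "no"]

# If the abstract's finding holds, the 15-25 year bins should dominate
# in both groups, and the two distributions should look broadly similar.
print("Experts:     ", sorted(horizon_counts(experts).items()))
print("Non-experts: ", sorted(horizon_counts(non_experts).items()))
```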
I'm not sure what you mean by this question. Is this a variant of what it is like to be a bat? There's a decent argument that such questions don't make sense. But this doesn't matter much: Whether some AI has qualia or not doesn't change any of the external behavior, so for most purposes, like existential risk, it doesn't matter.
This and most of the rest of your post are assertions, not arguments.
First, what do you mean by behaviorism in this context? Behaviorism as that word is classically defined isn't an attempt to explain consciousness. It doesn't care about consciousness at all.
"Is this a variant of what it is like to be a bat?"
Is there something that it is like to be you? There are also decent arguments that qualia do matter; it is hardly a settled question. If anything, the philosophical consensus is that qualia are important.
"Whether some AI has qualia or not doesn't change any of the external behavior,"
Yes, behaviorism is a very attractive solution. But presumably what people want is a living, conscious artificial mind, not a useful housemaid in robot form. I can get that functionality right now.