Humans seem to have some form of generality. We seem capable of solving a large range of problems, and people who are capable in one area tend to be more capable in general. However, the nature of this generality is important. There are at least two options that I've thought of:
1) A general intelligence is intrinsically better at solving problems.
2) A general intelligence is better at solving problems in general because it is capable of absorbing social information about problems, and society has accumulated information about solving lots of different problems.
Option 2 is the one I lean towards, as it fits the evidence better. Humans spent a long time in the Stone Age with the same general architecture, but can now solve a much larger set of problems because of education and general access to information.
The difference is important because it has implications for solving novel problems (those not already solved by society). If the only form of generality we can build is about absorbing social information, there are no guarantees that such a system can go beyond social knowledge in a principled way. Conceptual leaps to new understanding might require immense amounts of luck and so be slow to accumulate. ASIs might be the equivalent of us stuck in the Stone Age, at least to start with.
Are people thinking about these kinds of issues when considering timelines?
I'd like to see more discussion of this. I've read some of the FOOM debate, but I'm assuming there has been more discussion of this important issue since?
I suppose the key question is about recursive self-improvement. We can grant hardware self-improvement (improved hardware allows the design of more complex and better hardware) because we are on that treadmill already. But how likely is algorithmic self-improvement? For an intelligence to improve itself algorithmically, the following seem to need to hold:
1) The source of its generality must be understandable, by itself or by its designers.
2) It must be possible to change that source without trade-offs that degrade its other capabilities.
If it is the memeplex that gives us our generality (as is suggested by our flowering of discovery over the past 250 years, compared to the previous 300,000 years of Homo sapiens), it might not be understandable. It would be embedded in the weights, or their equivalents in whatever substrate the AI uses. No human would understand it either.
Fiddling about with weights without that knowledge would likely lead to trade-offs, so the second condition might not hold.
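To make the trade-off intuition concrete, here is a minimal toy sketch of my own (not from any real system): two regression tasks share one weight vector, and blind hill-climbing that accepts any perturbation improving task A steadily worsens task B. The setup, dimensions, and step sizes are all illustrative assumptions.

```python
# Toy sketch: perturbing shared weights to improve one task tends to
# degrade another, so blind "fiddling" yields trade-offs, not free gains.
import numpy as np

rng = np.random.default_rng(0)

# Two regression tasks that share one weight vector. Their ground-truth
# directions are correlated but not identical, so no single weight
# vector is optimal for both.
d = 20
w_a = rng.normal(size=d)               # ground truth for task A
w_b = w_a + 0.7 * rng.normal(size=d)   # related but different truth for task B

X = rng.normal(size=(200, d))
y_a = X @ w_a
y_b = X @ w_b

def loss(w, y):
    """Mean squared error of the shared weights on one task."""
    return np.mean((X @ w - y) ** 2)

# Start from a compromise that does reasonably on both tasks.
w = 0.5 * (w_a + w_b)

# Blind hill-climbing: accept any random perturbation that improves task A,
# with no understanding of what the weights mean.
for _ in range(2000):
    candidate = w + 0.05 * rng.normal(size=d)
    if loss(candidate, y_a) < loss(w, y_a):
        w = candidate

print(f"task A loss: {loss(w, y_a):.4f}")  # driven toward zero
print(f"task B loss: {loss(w, y_b):.4f}")  # degrades as A improves
```

The point of the sketch is only that, when capabilities share an opaque substrate, improvement pressure on one axis silently trades against others unless you understand the substrate well enough to avoid it.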
I'm not saying AI won't change history, but we need an accurate view of how it will change things.