KatjaGrace comments on Superintelligence 6: Intelligence explosion kinetics - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
In figure 7 (p63) the human and civilizational baselines are defined relative to 2014. Note that if human civilization were improving in its capabilities too - perhaps because of individual cognitive enhancements, technological progress, or institutional improvements - the crossover point would move upwards, while the 'strong superintelligence point' would not. Thus AI could in principle reach the 'strong superintelligence' point before the 'crossover'.
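The point about the crossover moving while the 'strong superintelligence' threshold stays fixed can be made concrete with a toy calculation. The sketch below is purely illustrative — all growth rates and thresholds are made-up assumptions, not numbers from Bostrom — but it shows how an AI capability curve can cross a threshold anchored to 2014 capability *before* it crosses a civilizational baseline that is itself improving:

```python
# Toy model (all parameters are invented for illustration):
# - AI capability grows 50% per time step.
# - The 2014 civilizational baseline is 10 (arbitrary units); if civilization
#   improves, the baseline grows 20% per step instead of staying flat.
# - 'Strong superintelligence' is a fixed threshold: 3x the 2014 baseline.

def ai_capability(t):
    # assumed fast-growing AI capability, normalized so ai_capability(0) = 1
    return 1.5 ** t

def civilization_baseline(t, improving):
    # 2014 baseline = 10; optionally improving at an assumed 20% per step
    return 10 * (1.2 ** t) if improving else 10.0

STRONG_SI = 30.0  # fixed threshold, anchored to 2014 capability

def first_crossing(threshold_fn, horizon=100):
    """First time step at which AI capability meets or exceeds the threshold."""
    for t in range(horizon):
        if ai_capability(t) >= threshold_fn(t):
            return t
    return None

t_static_crossover = first_crossing(lambda t: civilization_baseline(t, False))
t_strong_si        = first_crossing(lambda t: STRONG_SI)
t_moving_crossover = first_crossing(lambda t: civilization_baseline(t, True))

print(t_static_crossover, t_strong_si, t_moving_crossover)  # 6 9 11
```

With these (arbitrary) parameters, the AI passes the static 2014 baseline at t=6 and the fixed 'strong superintelligence' threshold at t=9, but does not overtake the *improving* civilizational baseline until t=11 — i.e., strong superintelligence before crossover, exactly the possibility noted above.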
Yes, I fully agree, except for your last point. I think the expectation of what can be called 'strong superintelligence' will rise as well: people will come to perceive the intelligence boost from supportive aids as a given.
Especially in a slow-takeoff scenario we will have many tools incorporating weakly superintelligent AI capabilities. A team of humans with these tools would reach 'strong superintelligence' by today's standards. At crossover, a standalone AI might be superintelligent by today's standards as well.
I disagree with drawing the baselines for human, civilization, and strong superintelligence horizontally. Bostrom's definition of the human baseline (pp. 62-63):
As information sources, technological aids, and communication capabilities (e.g. immersive 3D VR work environments) improve over time, these 'baselines' should be rising; see the following figure: