agi-hater
agi-hater has not written any posts yet.

Furthermore, you compare humans to computers and brains to machines, and you imply that consciousness is computation. To say that "consciousness is not computation" is comparable to a "god of the gaps" argument is ironic, considering the existence of the AI effect. Your view is hardly coherent in any worldview other than hardcore materialism (which is itself not coherent). Again, we stumble into an area of philosophy, which you hardly addressed in your article. Instead, you focused on predicting how good our future computers will be at computing while making appeals to emotion, appeals to unending progress, and appeals to the fallacy that solving the last 10% of the "problem" is as easy as the other...
Your definition of AGI ("the kind of AI with sufficient capability to make it a genuine threat to humanity's future or survival if it is misused or misaligned") is tragically insufficient, vague, subjective, and arguably misaligned with the generally accepted definition of AGI.
From what you wrote elsewhere ("An AGI having its own goals and actively pursuing them as an agent"), you imply that the threat could come from the AGI's intentions; that is, that AGI will have consciousness, intentionality, etc., qualities so far ascribed exclusively to living things (you have provided no arguments to think otherwise).
However, you decided to define "intelligence" as "stuff like complex problem solving that's useful for...
It does not follow that computationally cheaper things are more likely to happen than computationally expensive ones. Moreover, describing something as "computationally difficult" is a subjective value judgment (unless you can reasonably prove otherwise), and it implies that all actions/events can be reduced to some form of computation.