Argument for a powerful AI being unlikely - was this considered before?
One problem I see here is the implicit "lone hero inventor" assumption: that inventors are individuals optimizing for their goals on their own, and that an AI could be extremely powerful at doing the same. I would like to propose a different model.
This model says that intelligence is primarily a social, communicative skill: the skill of disassembling (understanding; Latin intelligo), playing with, and reassembling ideas acquired from other people. It is literally what we are doing on this forum. It is conversational: the whole standing-on-the-shoulders-of-giants thing, not the lone-hero thing.
In this model, inventions are made by humankind as a whole: a network in which each brain is a node, communicating slightly modified ideas to the others.
In such a network, one 10,000-IQ node does not become very powerful; it does not even make the network as a whole much more powerful. In other words, a friendly AI does not quickly solve mortality even with human help.
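The network claim above can be sketched as a toy simulation. This is a minimal sketch under one strong assumption that I am adding, not one made explicit in the post: how much any single exchange improves the receiver's knowledge is capped by the less capable participant, so communication, not raw ability, is the bottleneck. All names and parameters (`simulate`, `ability`, `bandwidth`, the node counts) are hypothetical.

```python
import random

def simulate(n_nodes=200, rounds=200, super_ability=100.0, seed=0):
    """Toy sketch of the network model of invention.

    Each round, every node passes its current best idea to a random
    peer. Assumption (mine, for illustration): the improvement gained
    from an exchange is capped by the *slower* participant, so a
    super-intelligent node's surplus ability is wasted on ordinary
    conversation partners.
    """
    rng = random.Random(seed)
    ability = [1.0] * n_nodes
    ability[0] = super_ability          # the lone 10,000-IQ node
    knowledge = [0.0] * n_nodes
    for _ in range(rounds):
        for i in range(n_nodes):
            # pick a random peer other than i
            j = rng.randrange(n_nodes - 1)
            if j >= i:
                j += 1
            # communication bottleneck: capped by the weaker side
            bandwidth = min(ability[i], ability[j])
            knowledge[j] = max(knowledge[j],
                               knowledge[i] + rng.random() * bandwidth)
    return max(knowledge)

plain = simulate(super_ability=1.0)
boosted = simulate(super_ability=100.0)
ratio = boosted / plain  # how much the super node moved total progress
```

Under this (deliberately extreme) cap, the run with a 100x node makes no more progress than the plain run, which is of course a consequence of the assumption rather than evidence for it; softening the cap would let the super node help somewhat, but progress would still be dominated by the volume of exchanges.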
The primary reason I think such a model is correct is this: intelligence means thinking, we think in concepts, and concepts are never really nailed down but are constantly modified through a social communication process. "Atom" used to mean an indivisible unit; then atoms became divisible into little ping-pong balls; then quantum physics updated the model into something entirely different. But is quantum-physics-based atomic theory about the same atoms that were once thought to be indivisible, or is it a different thing now? Is modern atomic theory still about atoms? What are we even mapping here, and where does the map end and the territory begin?
So the point is: human knowledge grows through a social communication process in which we keep throwing bleggs at each other, keep redefining what bleggs and rubes mean now, keep juggling these concepts, keep asking what you really mean by bleggs, and so on. Intelligence is this communicative ability: to disassemble Joe's concept of bleggs, understand how it differs from Jane's concept of bleggs, and maybe assemble a new concept that describes both.
Without this communication, what would intelligence even be? What would lone intelligence be? It is almost a contradiction in terms. What would a brain alone in a universe intelligere, i.e. understand, if nothing talked to it? Just tinker with matter somehow, without any communication whatsoever? But even if we imagine such an "idiot inventor genius", some kind of mega-plumber on steroids rather than an intellectual or academic, it still needs goals for that kind of tinkering with material stuff; for goals it needs concepts, and concepts come from, and evolve through, a constant social ping-pong.
An AI would be yet another node in our network, participating in this process of throwing blegg-concepts around, probably far better than any human can, but still just a node.
It would be neat to actually build an implementation of this model to show sceptics. It seems to be within the reach of an MSc project or so. The hard part is representing 2-5.