
Arenamontanus comments on Top 9+2 myths about AI risk - Less Wrong Discussion

Post author: Stuart_Armstrong | 29 June 2015 08:41PM | 44 points




Comment author: [deleted] | 30 June 2015 08:32:03AM | 0 points

An argument that a powerful AI is unlikely - has this been considered before?

One problem I see here is the implicit "lone hero inventor" assumption: that there are individuals optimizing things for their goals on their own, and that an AI could be extremely powerful at doing this alone. I would like to propose a different model.

This model holds that intelligence is primarily a social, communicative skill: the skill of disassembling (understanding; Latin intelligo), playing with, and reassembling ideas acquired from other people. Literally what we are doing on this forum. It is conversational. The "standing on the shoulders of giants" picture, not the lone-hero picture.

In this model, inventions are made by humankind as a whole: a network in which each brain is a node communicating slightly modified ideas to the others.

In such a network, a single 10000-IQ node does not become very powerful, and it does not even make the network as a whole much more powerful; for instance, a friendly AI does not quickly solve mortality even with human help.
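To make that concrete, here is a minimal toy sketch of the bottleneck (my own construction for this comment, with invented numbers, not anything from the literature): each node can generate ideas at its raw capability, but can only pass them into the network at a fixed communication bandwidth.

    # Toy model: collective progress when ideas must pass through a
    # communication bottleneck. All parameters are made-up assumptions.

    def total_progress(capabilities, bandwidth, steps=100):
        """Each step, every node contributes ideas to a shared pool.
        A node can generate up to its raw capability per step, but can
        only communicate `bandwidth` ideas per step: the bottleneck."""
        pool = 0.0
        for _ in range(steps):
            pool += sum(min(c, bandwidth) for c in capabilities)
        return pool

    humans = [1.0] * 1000                    # 1000 ordinary nodes
    with_genius = humans + [10000.0]         # add one "10000 IQ" node

    print(total_progress(humans, bandwidth=2.0))       # 100000.0
    print(total_progress(with_genius, bandwidth=2.0))  # 100200.0

Under these cartoonish assumptions the super-node adds about as much as two ordinary nodes, because the bandwidth cap binds rather than raw capability.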

The primary reason I think such a model is correct is that intelligence means thinking; we think in concepts, and concepts are not really nailed down: they are constantly modified through a social process of communication. "Atom" once meant an indivisible unit; then atoms became divisible into little ping-pong balls; then quantum physics updated the model into something entirely different. Is quantum-mechanical atomic theory about the same atoms that were once thought indivisible, or about a different thing now? Is modern atomic theory still about atoms? What are we even mapping here, and where does the map end and the territory begin?

So the point is that human knowledge grows through a social communication process: we keep throwing bleggs at each other, keep redefining what "blegg" and "rube" mean now, keep juggling these concepts, keep asking what you really mean by "blegg", and so on. Intelligence is this communicative ability: to disassemble Joe's concept of a blegg, understand how it differs from Jane's concept of a blegg, and maybe assemble a new concept that covers both.
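One way to picture that disassemble-and-reassemble step (a made-up illustration, with concepts crudely modeled as sets of observed features):

    # Illustrative only: two speakers' "blegg" concepts as feature sets.
    joe_blegg = {"blue", "egg-shaped", "furred", "contains vanadium"}
    jane_blegg = {"blue", "egg-shaped", "glows in the dark"}

    shared = joe_blegg & jane_blegg    # what both mean by "blegg"
    disputed = joe_blegg ^ jane_blegg  # where the two concepts differ
    merged = joe_blegg | jane_blegg    # a new concept covering both usages

    print(sorted(shared))    # ['blue', 'egg-shaped']
    print(sorted(disputed))  # ['contains vanadium', 'furred', 'glows in the dark']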

Without this communication, what would intelligence even be? What would lone intelligence be? The term is almost self-contradictory. What would a brain alone in the universe intelligere, i.e. understand, if nothing ever talked to it? Would it just tinker with matter somehow, without any communication whatsoever? But even if we imagine such an "idiot inventor genius" (a kind of mega-plumber on steroids, rather than an intellectual or academic), it would need goals for that kind of tinkering with material stuff, goals require concepts, and concepts come from and evolve through a constant social ping-pong.

An AI would be yet another node in our network. It would participate in this process of throwing blegg-concepts around, probably far better than any human can, but it would still be just a node.

Comment author: Arenamontanus | 01 July 2015 09:17:49AM | 2 points

I think you will find this discussed in the Hanson-Yudkowsky foom debate. Robin thinks that distributed networks of intelligence (also known as economies) are indeed a more likely outcome than a single node bootstrapping itself to extreme intelligence. He has some evidence from the study of firms, which are a real-world example of how economies of scale can produce chunky but networked smart entities. As a bonus, such entities tend to benefit from playing somewhat nicely with the others.

The problem is that, while this is a nice argument, would we want to bet the house on it? A lot of safety engineering is about preventing not the most likely malfunctions but the worst ones. Occasional paper jams in printers are acceptable; fires are not. So even if we think this kind of softer, distributed intelligence explosion is the likely outcome (I do), we could be wrong about the possibility of sharp intelligence explosions, and hence it is rational to investigate them and build safeguards.
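The arithmetic behind that priority is simple (numbers invented purely for illustration): a rare failure mode can dominate expected loss even when a common one happens tens of thousands of times more often.

    # Invented numbers, purely to illustrate why the worst malfunction
    # can matter more than the most likely one.
    failures = {
        "paper jam": {"rate_per_year": 50.0, "cost": 5.0},
        "fire": {"rate_per_year": 0.001, "cost": 5_000_000.0},
    }

    for name, f in failures.items():
        expected_loss = f["rate_per_year"] * f["cost"]
        print(f"{name}: expected loss {expected_loss:.0f} per year")
    # paper jam: expected loss 250 per year
    # fire: expected loss 5000 per year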