I've been going through the AI-Foom debate, and both sides make sense to me. I intend to continue, but I'm wondering whether there are already insights in LW culture that I could get just by asking for them.


My understanding is as follows:


The difference between a chimp and a human is only 5 million years of evolution. That's not time enough for many changes.


Eliezer takes this as proof that the difference in brain architecture between the two can't be much. Thus, you can have a chimp-intelligent AI that doesn't do much, and then, with some very small changes, suddenly get a human-intelligent AI and FOOM!


Robin takes the 5-million-year gap as proof that the significant difference between chimps and humans is only partly in the brain architecture. Evolution simply can't be responsible for most of the relevant difference; the difference must lie elsewhere.

So he concludes that when our ancestors got smart enough for language, culture became a thing. Our species stumbled across various little insights into life, and these got passed on. An increasingly massive base of cultural content, made up of very many small improvements, is largely responsible for the difference between chimps and humans.

Culture assimilated new information into humans much faster than evolution could.

So he concludes that you can get a chimp-level AI, but getting up to human level will take not a very few insights but a very great many, each one slowly improving the computer's intelligence. So no FOOM; it'll be a gradual thing.


So I think I've figured out what the question is. Is there a commonly known answer, or are there existing insights that bear on it?


It is true that a young human (usually) starts with a ton of cultural knowledge that a young chimp doesn't have.

It is also true that if you tried to give that cultural knowledge to a young chimp, it wouldn't be able to process it.

Therefore, culture is important, but the genetic adaptation that makes culture possible is also important.

If the AI got the ability to use human culture, then after connecting to the internet it would be able to use human culture just like humans do; maybe even better, because humans are usually limited to a few cultures and subcultures, while a sufficiently powerful AI could use them all.

Also, culture is essential because humans are mortal and live relatively short lives compared with how much information is available to them. (Imagine that you are an immortal vampire with perfect memory; how many years would it take you to become an expert at everything humans know and do, assuming the rest of humankind remained frozen in its 2016 state?) Thus culture is the only way to go beyond the capacity of the individual. Also, some experiments get you killed, and culture is a way to get that knowledge without dying for it. An AI with sufficiently great memory and processing speed would have less need for culture than humans do.

I don't know if I'm saying anything that hasn't been said before elsewhere, but looking at the massive difference in intelligence between humans seems like a strong argument for FOOM to me. Humans are basically all the same. We have 99.99% the same DNA, the same brain structure, size, etc. And yet some humans have exceptional abilities.

I was just reading about Paul Erdős. He could hold three conversations at the same time with mathematicians on highly technical subjects. He was constantly having insights into mathematical research left and right. He produced more papers than any other mathematician.

I don't think it's a matter of culture. I don't think an average person could "learn" to have a higher IQ, let alone be Erdős. And yet he very likely had the same brain structure as everyone else. Who knows what would be possible if you're allowed to move far outside the space of humans.


But this isn't the (main) argument Yudkowsky uses. He relies on an intuition that I don't think has been explicitly stated or argued strongly enough. This one intuition is central to all the points about recursive self-improvement.

It's that humans kind of suck, at least at engineering and solving complicated technical problems. We didn't evolve to be good at them. There are many cases where simple genetic algorithms outperform humans. Humans outperform GAs in other cases of course, but it shows we are far from perfect. Even in the areas where we do well, we have trouble keeping track of many different things in our heads. Much of the time we are very bad at prediction and pattern matching compared to small machine learning algorithms.

I think this intuition, that "humans kind of suck" and "there are a lot of places we could make big improvements", is at the core of the FOOM debate and most of these AI risk debates. If you really believe it, then it seems almost obvious that AI will very rapidly become much smarter than humans. People who don't share this intuition seem to believe that AI progress is going to be very slow, perhaps with steep diminishing returns.

There are many cases where simple genetic algorithms outperform humans. Humans outperform GAs in other cases of course, but it shows we are far from perfect.

To riff on your theme a little bit, maybe one area where genetic algorithms (or other comparably "simplistic" approaches) could shine is in the design of computer algorithms, or some important features thereof.

Well, actually, GAs aren't that good at designing algorithms, because slightly mutating an algorithm usually breaks it or creates an entirely different algorithm. So the fitness landscape isn't very gentle.

You can do a bit better if you work with circuits instead. And even better if you make the circuits continuous, so small mutations create small changes in output. And you can optimize these faster with gradient descent instead of GAs.

And then you have neural networks, which are quite successful.
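To make the contrast concrete, here is a minimal sketch (my own illustration, not from the comment above) of the same smooth "fitness landscape" being optimized by random-mutation hill climbing and by gradient descent. The quadratic target, step sizes, and iteration counts are arbitrary toy choices:

```python
# Toy comparison: random-mutation hill climbing vs. gradient descent on a
# smooth landscape, where small parameter changes give small output changes.
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])

def fitness(params):
    # Smooth, continuous fitness: negative squared distance to the target.
    return -np.sum((params - target) ** 2)

# Random-mutation hill climbing (a bare-bones evolutionary search).
x = np.zeros(3)
for _ in range(2000):
    candidate = x + rng.normal(scale=0.05, size=3)
    if fitness(candidate) > fitness(x):
        x = candidate

# Gradient descent (here, ascent on the fitness) using the analytic gradient.
y = np.zeros(3)
for _ in range(100):
    grad = -2 * (y - target)   # gradient of the fitness above
    y = y + 0.1 * grad         # step uphill

print("hill climbing:    ", x, fitness(x))
print("gradient descent: ", y, fitness(y))
```

On a landscape this gentle, gradient descent typically needs far fewer evaluations to get close to the optimum, which is the point about making the "circuits" continuous.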

https://en.wikipedia.org/wiki/Neuroevolution "Neuroevolution, or neuro-evolution, is a form of machine learning that uses evolutionary algorithms to train artificial neural networks. It is most commonly applied in artificial life, computer games, and evolutionary robotics. A main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input-output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game (i.e. whether one player won or lost) can be easily measured without providing labeled examples of desired strategies."
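As a rough illustration of the quoted idea, assuming only numpy, here is a tiny neuroevolution sketch: the weights of a small network are evolved using nothing but a scalar performance score, with no labels or backpropagation. The XOR task, network size, population size, and mutation scale are arbitrary choices for the example:

```python
# Minimal neuroevolution sketch: evolve a 2-2-1 network to fit XOR using
# only a fitness score per genome.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def unpack(genome):
    # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias = 9 genes
    return genome[:4].reshape(2, 2), genome[4:6], genome[6:8], genome[8]

def forward(genome, x):
    W1, b1, W2, b2 = unpack(genome)
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def fitness(genome):
    # Only a measure of performance is needed -- here, negative squared error.
    return -np.sum((forward(genome, X) - y) ** 2)

# Simple truncation-selection evolution: keep the best genomes, mutate them.
pop = [rng.normal(size=9) for _ in range(50)]
for generation in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [p + rng.normal(scale=0.3, size=9) for p in parents for _ in range(4)]

best = max(pop, key=fitness)
print("fitness:", fitness(best), "predictions:", forward(best, X).round(2))
```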

One of the main things foom depends on is the number of AIs that are fooming simultaneously. If there is only one, it is a real foom. If there are hundreds, it is just a transition to a new equilibrium in which there will be many superintelligent agents.

Whether there will be one or many AIs depends critically on the speed of fooming. If the fooming doubling time is milliseconds, then one AI wins. But if it is weeks or months, there will be many fooming AIs, which may result in war between AIs, or in equilibrium.

But what is more important here is the question: how fast is the fooming compared with the overall speed of progress in the AI field? If an AI is fooming with a doubling time of 3 weeks while the field as a whole doubles in about a month, that is not a real foom.

If AI depends on one crucial insight that results in a 10,000-fold improvement, that would be a real foom.
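For a rough sense of the doubling-time comparison above (using the comment's illustrative numbers: a 3-week doubling time for the single AI versus roughly a month, taken here as 4.35 weeks, for the field), the single AI's relative lead still compounds, but slowly:

```python
# Illustrative arithmetic only; the specific doubling times are the comment's
# example numbers, and "lead" means the ratio of the AI's capability growth
# to the field's, assuming both grow exponentially.
ai_doubling_weeks = 3.0
field_doubling_weeks = 4.35   # ~1 month

# Lead after t weeks: 2**(t/3) / 2**(t/4.35) = 2**(t * (1/3 - 1/4.35))
rate = 1 / ai_doubling_weeks - 1 / field_doubling_weeks   # ~0.10 doublings/week
lead_doubling_weeks = 1 / rate                            # ~9.7 weeks

for t in (10, 26, 52):   # weeks
    print(f"after {t} weeks the single AI is {2 ** (t * rate):.1f}x ahead of the field")
```

With these numbers the lead only doubles about every ten weeks, so a foom that barely outpaces the whole field looks more like ordinary fast progress than a decisive takeoff.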
