timtyler comments on Risks from AI and Charitable Giving - Less Wrong

Post author: XiXiDu 13 March 2012 01:54PM


Comment author: XiXiDu 13 March 2012 06:52:25PM 3 points

That post mentions several other hard takeoff scenarios...

Thanks. I will review those scenarios. Just some quick thoughts:

Has evolutionary precedent in that the genetic differences between humans and chimps are relatively small.

At first sight this sounds suspicious. The genetic difference between a chimp and a human amounts to roughly 40–45 million bases that are present in humans and missing from chimps. And that figure does not even account for the differences in gene expression between humans and chimps. So it's not as if you add a tiny bit of code and get a super-apish intelligence.

The argument from the gap between chimpanzees and humans is interesting but cannot be used to extrapolate onwards from human general intelligence. It is pure speculation that humans are not Turing complete and that there are levels above our own. That chimpanzees exist, and humans exist, is not proof of the existence of anything that bears, in any relevant respect, the same relationship to a human that a human bears to a chimpanzee.

Serial hardware overhang: an AI running on processors with more serial speed than neurons could be able to e.g. process longer chains of inference instead of relying on cache lookups.

Humans can process long chains of inference with the help of tools. The important question is whether incorporating those tools into some sort of self-perception, some sort of guiding agency, is vastly superior to humans using a combination of tools and expert systems.

In other words, it is not clear that there exists a class of problems that is solvable by Turing machines in general but not by a combination of humans and expert systems.

If an AI that we invented can hold a complex model in its mind, then we can also simulate such a model by making use of expert systems. Being consciously aware of the model doesn't make any great difference in principle to what you can do with the model.

Here is what Greg Egan has to say about this in particular:

Whether a mind can synthesise, or simplify, many details into something more tightly knit doesn't really depend on any form of simultaneous access to the data in something like human working memory. Almost every complex mathematical idea I understand, I only really understand through my ability to scribble things on paper while I'm reading a textbook. No doubt some lucky people have bigger working memories than mine, but my point is that modern humans synthesise concepts all the time from details too complex to hold completely in their own biological minds. Conversely, an AI with a large working memory has ... a large working memory, and doesn't need to reach for a sheet of paper. What it doesn't have is a magic tool for synthesising everything in its working memory into something qualitatively different.

Comment author: timtyler 13 March 2012 11:17:42PM 1 point

The argument from the gap between chimpanzees and humans is interesting but cannot be used to extrapolate onwards from human general intelligence. It is pure speculation that humans are not Turing complete and that there are levels above our own.

Surely humans are Turing complete. I don't think anybody disputes that.

We know that capabilities extend above our own in all the realms where machines already outstrip us - and we have a pretty good idea what greater speed, better memory and more memory would do.

Comment author: CarlShulman 14 March 2012 07:39:38PM 2 points

Agree with your basic point, but a nit-pick: limited memory and speed (heat death of the universe, etc.) put many neat Turing-machine computations out of reach of humans (or other systems in our world), barring new physics.

Comment author: timtyler 14 March 2012 08:57:15PM 1 point

Sure: I meant in the sense of the "colloquial usage" here:

In colloquial usage, the terms "Turing complete" or "Turing equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate any other real-world general-purpose computer or computer language, within the bounds of finite memory - they are linear bounded automaton complete. A universal computer is defined as a device with a Turing complete instruction set, infinite memory, and an infinite lifespan; all general purpose programming languages and modern machine instruction sets are Turing complete, apart from having finite memory.
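The colloquial sense quoted above can be made concrete with a few lines of code. The toy machine and the simulator below are my own illustrative sketch (not from the thread): a general-purpose language simulating a one-tape Turing machine, with an explicit step budget standing in for the finite-memory caveat that makes real computers linear bounded automata rather than true Turing machines.

```python
def run_tm(tape, transitions, state="start", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    `transitions` maps (state, symbol) -> (new_state, write_symbol, move),
    where move is "R" or "L". "_" is the blank symbol. The finite
    `max_steps` bound is the "within the bounds of finite memory" caveat
    from the quoted definition made explicit.
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape)
        symbol = tape[head] if head < len(tape) else "_"
        state, write, move = transitions[(state, symbol)]
        if head == len(tape):          # grow the tape on demand, rightwards
            tape.append("_")
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step budget exhausted - finite resources bite")

# Toy machine: walk right, flipping every bit, halt on the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_tm("0110", flip))  # -> 1001_
```

The point of the sketch is the one CarlShulman makes: the simulation is faithful only up to the resource bound, which is exactly the distinction between "Turing complete" in the colloquial sense and the idealized universal computer with infinite memory.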