Squark comments on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong

Post author: Punoxysm 07 March 2014 07:24AM

Comment author: Squark 09 March 2014 08:34:59PM 0 points

While a sub-human narrow AI can be optimized at designing general AI? That seems unlikely.

I think it's either that or we won't be able to build any human-level AGI without WBE.

If we were to reboot evolution from the Cambrian, it's by no means certain that it would still produce humans, or something of similar intelligence, within the same time frame.

Agreed. However, hominid evolution was clearly not pure luck, since it involved significant improvement over a relatively short time span.

Moreover, evolution is a process of adaptation to the environment. How can a boxed narrow AI produce something which is well adapted to the environment outside the box?

Evolution produced something which is adapted to a very wide range of environments, including environments vastly different from the one in which evolution happened. E.g., US astronauts walked on the surface of the Moon, which is very different from anything relevant to evolution. We call this something "general intelligence". Ergo, it is possible to produce general intelligence by a process which has little of it.
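
To make the last step concrete, here is a toy sketch in Python (illustrative names and numbers only, nothing specified in the thread): blind mutation plus selection, a process with no intelligence of its own, still produces candidates that are well adapted to the "environment" defined by the fitness function.

    # Toy sketch only: a process with no intelligence of its own (random
    # mutation plus selection) still produces candidates adapted to its
    # "environment", here an arbitrary fitness function.
    import random

    GENOME_LEN = 20

    def fitness(genome):
        return sum(genome)                  # the "environment" rewards 1-bits

    def mutate(genome, rate=0.05):
        return [1 - g if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(50)]

    for generation in range(200):
        population.sort(key=fitness, reverse=True)   # selection pressure
        survivors = population[:25]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(25)]

    print(max(fitness(g) for g in population))       # approaches GENOME_LEN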

Humans can't self-improve to any significant extent. Stuff less intelligent than humans that we can design can't self-improve to any significant extent.

My point is that it's unlikely that the point of diminishing returns is close to human intelligence. If this point is significantly below human intelligence, then IMO we won't be able to build AGI without WBE.

The program doesn't have a supernatural ghost who can decide "I'm going to be an interpreter starting from now". Either it is an interpreter (in which case it is not an AI) or it is not.

It is an AI which contains an interpreter as a subroutine. My point is, if you somehow succeed in freezing a self-modifying AI at a point where it is already interesting but not yet dangerous, then the next experiment has to start from scratch anyway. You cannot keep running it while magically turning self-modification off, since self-modification is an inherent part of the program. This stands in contrast to your ability to, e.g., turn certain input/output channels on or off.
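
A minimal sketch of the distinction, in Python with purely hypothetical names (the thread does not describe any such code): an input/output channel can be gated by a switch that sits outside the agent, but if the agent's main loop works by interpreting and rewriting its own policy source, there is no separable "self-modification" flag to flip off.

    # Purely hypothetical sketch of the distinction, not code from the thread.

    OUTPUT_ENABLED = False      # an I/O channel gated from outside the program

    def send_output(msg):
        if OUTPUT_ENABLED:      # flipping this switch doesn't change the agent
            print(msg)

    # In contrast, the main loop below runs by interpreting and rewriting its
    # own policy source, so self-modification is not a separable feature:
    # removing it yields a different program, not a frozen copy of this one.
    policy_source = "def step(state):\n    return state + 1\n"

    def run(state, iterations=3):
        global policy_source
        for _ in range(iterations):
            namespace = {}
            exec(policy_source, namespace)   # interpreter as a subroutine
            state = namespace["step"](state)
            if "+ 1" in policy_source:       # the agent edits its own source
                policy_source = policy_source.replace("+ 1", "+ 2", 1)
            send_output(f"state={state}")
        return state

    print(run(0))               # prints 5 with the toy self-rewrite above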

Comment author: V_V 13 March 2014 02:14:20PM 0 points

I think it's either that or we won't be able to build any human-level AGI without WBE.

Why?

Agreed. However, hominid evolution was clearly not pure luck, since it involved significant improvement over a relatively short time span.

It wasn't pure luck; there was selective pressure. But this signal towards improvement is often weak and noisy, and it doesn't necessarily correlate well with intelligence: a chimp is smarter than a lion, but not generally more evolutionarily fit. Even Homo sapiens went through a population bottleneck around 70,000 years ago which almost led to extinction.

Comment author: Squark 13 March 2014 08:43:55PM 0 points

It is my intuition that if something as complex and powerful as human-level intelligence can be engineered in the foreseeable future, then it would have to use some kind of bootstrapping. I admit it is possible that I'm wrong and that in fact progress in AGI will come through a very long sequence of small improvements, and that the AGI will be given no introspective / self-modification powers. In this scenario, a "proto-singularity" is a real possibility. However, what I think will happen is that we won't make significant progress before we develop a powerful mathematical formalism. Once such a formalism exists, it will be much more efficient to use it to build a pseudo-narrow self-modifying AI than to keep improving the AI "brick by brick".