V_V comments on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong

Post author: Punoxysm 07 March 2014 07:24AM

Comment author: V_V 13 March 2014 02:14:20PM

> I think it's either that or we won't be able to build any human-level AGI without WBE.

Why?

> Agreed. However, hominid evolution was clearly not pure luck, since it involved significant improvement over a relatively short time span.

It wasn't pure luck; there was selective pressure. But this signal towards improvement is often weak and noisy, and it doesn't necessarily correlate well with intelligence: a chimp is smarter than a lion, but not generally more evolutionarily fit. Even Homo sapiens went through a population bottleneck around 70,000 years ago that almost led to extinction.
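
A toy simulation can make the "weak and noisy signal" point concrete. The sketch below is hypothetical, not from the original comment; the CORRELATION and NOISE parameters are assumptions chosen for illustration. It applies steady truncation selection to a fitness score that is mostly random luck and only weakly tied to an "intelligence" trait, and the trait climbs only slowly and erratically despite constant selective pressure:

```python
# Hypothetical illustration (not from the comment above): selection on a
# noisy fitness signal that correlates only weakly with "intelligence".
import random

random.seed(0)

POP_SIZE = 200
GENERATIONS = 101
CORRELATION = 0.2   # assumed: how much intelligence contributes to fitness
NOISE = 1.0         # assumed: environmental luck swamping the signal

# Each individual is represented by a single "intelligence" value.
population = [random.gauss(0.0, 1.0) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Noisy fitness: mostly luck, slightly intelligence.
    scored = [(CORRELATION * iq + random.gauss(0.0, NOISE), iq)
              for iq in population]
    scored.sort(reverse=True)
    # Truncation selection: the top half survives to reproduce.
    survivors = [iq for _, iq in scored[:POP_SIZE // 2]]
    # Offspring inherit the parent's trait plus a small mutation.
    population = [random.gauss(parent, 0.1)
                  for parent in random.choices(survivors, k=POP_SIZE)]
    if gen % 20 == 0:
        mean_iq = sum(population) / POP_SIZE
        print(f"generation {gen:3d}: mean intelligence = {mean_iq:+.2f}")
```

With NOISE large relative to CORRELATION, the mean trait drifts upward only gradually and non-monotonically; cranking CORRELATION up makes the climb fast and steady, which is the difference between a weak and a strong selection signal.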

Comment author: Squark 13 March 2014 08:43:55PM

It is my intuition that if something as complex and powerful as human-level intelligence can be engineered in the foreseeable future, then it would have to use some kind of bootstrapping. I admit it is possible that I'm wrong, and that in fact progress in AGI will come through a very long sequence of small improvements, with the AGI given no introspective / self-modification powers. In that scenario, a "proto-singularity" is a real possibility. However, what I think will happen is that we won't make significant progress before we develop a powerful mathematical formalism. Once such a formalism exists, it will be much more efficient to use it to build a pseudo-narrow self-modifying AI than to keep improving AI "brick by brick".