ThisSpaceAvailable comments on [LINK] David Deutsch on why we don't have AGI yet "Creative Blocks" - Less Wrong

Post author: harshhpareek 17 December 2013 07:03AM


Comments (37)


Comment author: ThisSpaceAvailable 22 December 2013 08:46:07AM 0 points

"It seems unlikely that you could have a genetic algorithm operate on a population of code and end up with a program that passes the Turing test"

Well, we have one case of it working, and that process wasn't even designed with passing the Turing test specifically as a goal.

"because at each step the genetic algorithm (as an optimization procedure) needs to have some sense of what is more or less likely to pass the test."

Having an automated process for determining with certainty that something passes the Turing test is much stronger than merely having nonzero information. Suppose I'm trying to use a genetic algorithm to create a Halting Tester, and I have a Halting Tester that says that a given program doesn't halt. If I observe that the program has not, in fact, halted after n steps (by simply running it for n steps), that observation provides nonzero information about the efficacy of my Halting Tester. This suggests that I could build a genetic algorithm for creating Halting Testers (obviously, I couldn't evolve a perfect Halting Tester, but perhaps I could evolve one that is "good enough" by some standard). And who knows: maybe if I had such a genetic algorithm, my Halting Testers would not only evolve better Halting Testing but, since they are competing against each other, also evolve better Tricking of Other Halting Testers, and maybe that would eventually spawn AGI. I don't find that inconceivable.
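To make the point concrete, here is a toy sketch (the program representation, the threshold "genome", and the fitness rule are all invented for illustration; real halting analysis is nothing this easy). It evolves threshold-based "testers" whose fitness is scored only against the weak signal "halted within n steps or not", which never decides the halting question but still supplies selection pressure:

```python
import random

random.seed(0)

# Toy "programs": each is the number of steps before it halts,
# or None for a program that never halts. Real halting behavior
# is undecidable; here we cheat by generating programs whose
# behavior we already know, purely to demonstrate the fitness idea.
PROGRAMS = [random.choice([random.randint(1, 200), None])
            for _ in range(50)]

N_STEPS = 100  # observation budget: run each program for at most n steps

def observe(program, n=N_STEPS):
    """True if the program halted within n steps, False if it is
    still running after n steps (which is inconclusive)."""
    return program is not None and program <= n

def predict(threshold, program):
    """A candidate "Halting Tester" is just an integer threshold:
    it predicts "halts" when the program's runtime is at or below
    the threshold. (It cheats by seeing the true runtime; the point
    here is the fitness signal, not the tester.)"""
    runtime = program if program is not None else 10**9
    return runtime <= threshold

def fitness(threshold):
    """Partial-information fitness: reward predictions that are
    consistent with what running each program for n steps reveals."""
    score = 0
    for p in PROGRAMS:
        if observe(p) == predict(threshold, p):
            score += 1  # confirmed correct, or at least not yet refuted
    return score

def evolve(generations=30, pop_size=20):
    pop = [random.randint(1, 500) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        pop = survivors + [max(1, t + random.randint(-30, 30))
                           for t in survivors]
    return max(pop, key=fitness)

best = evolve()
```

Nothing here touches the actual difficulty of the halting problem; the sketch only shows that "hasn't halted after n steps" is a usable, if weak, fitness signal for selection.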

Comment author: Vaniver 22 December 2013 06:46:01PM 0 points

Well, we have one case of it working, and that process wasn't even designed with passing the Turing test specifically as a goal.

Are you referring to the biological evolution of humans, or stuff like this?

Having an automated process for determining with certainty that something passes the Turing test is much stronger than merely having nonzero information.

Right; how did you interpret "some sense of what is more or less likely to pass the test"?

Comment author: ThisSpaceAvailable 22 December 2013 07:47:55PM 0 points

I was referring to the biological evolution of humans; in your link, the process appears to have been designed with the Turing test in mind.

There's probably going to be a lot of guesswork as far as which metrics for "more likely to pass" are best, but the process doesn't have to be perfect, just good enough to generate intelligence. Obvious places to start would be complex games such as Go and poker, and replicating aspects of human evolution, such as simulating hunting and social maneuvering.
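As a sketch of what a competitive metric could look like, here is a toy tournament fitness, with a trivial coin-matching game standing in for Go or poker (the game, the agent representation, and all names are invented for illustration, not a real benchmark):

```python
import random

random.seed(1)

# Each "agent" is just a probability of playing heads in a
# matching-pennies-style game. Fitness is relative, wins against
# the rest of the current population, so the selection pressure
# itself moves as the pool improves (the co-evolution idea).

def play(a, b, rounds=20):
    """One match. Agent a wins a round when both choices match,
    agent b when they differ. Returns True if a wins the match."""
    wins_a = sum((random.random() < a) == (random.random() < b)
                 for _ in range(rounds))
    return wins_a > rounds // 2

def tournament_fitness(agent, population):
    """Score an agent by how many others in the population it beats."""
    return sum(play(agent, other)
               for other in population if other is not agent)

population = [random.random() for _ in range(10)]
scores = [tournament_fitness(a, population) for a in population]
```

The same round-robin scoring would plug into the selection step of a genetic algorithm; the hard part, which this sketch deliberately skips, is choosing games rich enough that winning them requires something like intelligence.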

Comment author: Vaniver 22 December 2013 09:19:00PM 0 points

I was referring to the biological evolution of humans

Ok. When I said "you," I meant modern humans operating on modern programming languages. I also don't think it's quite correct to equate actual historical evolution with genetic algorithms, for somewhat subtle technical reasons.