I talked to a fellow about Go-playing AI last night, and I mentioned these Restricted Boltzmann Machines (RBMs). If the Go problem can be cast as an image-processing problem, RBMs might be worth looking into: http://www.youtube.com/watch?v=AyzOUbkUf3M Here is a more recent Google Tech Talk by Hinton on RBMs: http://www.youtube.com/watch?v=VdIURAu1-aU
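For anyone curious what training an RBM actually looks like, here is a minimal sketch of one contrastive-divergence (CD-1) update in NumPy. All sizes, the learning rate, and the idea of treating a position as a binary "board image" are illustrative assumptions, not anything from Hinton's talks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a flattened 8x8 binary "board image" and 16 hidden features.
n_visible, n_hidden = 64, 16
W = rng.normal(0, 0.01, (n_visible, n_hidden))  # visible-hidden weights
b_v = np.zeros(n_visible)                       # visible biases
b_h = np.zeros(n_hidden)                        # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.1):
    """One CD-1 update on a batch of binary row vectors; returns reconstruction error."""
    global W, b_v, b_h
    # Positive phase: hidden probabilities and a binary sample, given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct the visibles, then recompute hidden probabilities.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Approximate gradient: <v h>_data - <v h>_reconstruction.
    n = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return float(np.mean((v0 - p_v1) ** 2))
```

Repeatedly calling `cd1_step` on a batch of positions drives the reconstruction error down as the hidden units learn features of the input distribution, which is the sense in which an RBM is an image-processing tool.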
Your examples are all missing either the 'self' aspect or the 'recursive' aspect. See Intelligence Explosion for an actual example of recursive self-modification, or for a longer explanation of recursive self-improvement, this post.
I found those links posted above interesting.
I concede that the human learning process is not at all as explosive as the self-modifying AI processes of the future will be, but I was speaking to a different point:
Eliezer said: "I'd be pretty doubtful of any humans trying to do recursive self-modification in a way that didn't involve logical proof of correctness to start with."
I am arguing that humans do recursive self-modification all the time, without "proofs of correctness to start with" - even to the extent of developing gene therapies that modify our own hardware.
I fail to see how human learning is not recursive self-modification. All human intelligence can be thought of as deeply recursive. A playFootBall() function certainly calls itself repeatedly until the game is over, and a football player certainly improves at football by repeatedly playing football. As skill sets develop, human software (and its instantiation) is self-modified through the development of new neural networks and muscles (marathon runners, for example, have physically larger hearts). Arguably, hardware is being modified via epigenetics (phenotypes changing within narrow ranges of potential expression). As a species, we are definitely exploring genetic self-modification: a scientist who injects himself with a gene-based therapy is self-modifying his own hardware.
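The playFootBall() function above is metaphorical, but the "calls itself until the game is over, improving as it goes" idea can be made literal in a toy sketch. The numbers here (skill ceiling, improvement rate) are invented for illustration; the point is only that the modified skill is what plays the next game:

```python
def play_football(skill, games_left, ceiling=100.0, rate=0.1):
    """Recursively play games until the season ends, improving skill each game.

    Toy model: each game closes a fixed fraction of the gap to some skill
    ceiling, and the updated skill is the one that plays the next game.
    """
    if games_left == 0:
        return skill
    skill = skill + rate * (ceiling - skill)  # the self-modification step
    return play_football(skill, games_left - 1, ceiling, rate)

print(play_football(10.0, 20))  # skill after a 20-game season
```

Each call both performs the task and rewrites the state that the next call runs on, which is the recursive-self-modification structure in miniature, though (unlike an AI rewriting its own code) the update rule itself never changes.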
We do all these things without prior proofs of correctness, and yet we still make improvements. I don't think we should ignore the possibility of an AI that destroys the world, and I am very happy that some people are pursuing a guarantee that it won't happen. But it is worth noting that the process that will lead to provably friendly AI seems very different from the one that led to not-necessarily-so-friendly humans and human society.
the only safe AI would be a logic system using a consistent logic, so that we could verify that certain undesirable statements were false in that system
Could be correct or wildly incorrect, depending on exactly what he meant by it. Of course you have to delete "the only", but I'd be pretty doubtful of any humans trying to do recursive self-modification in a way that didn't involve logical proof of correctness to start with.
We might say that humans as individuals do recursive self-modification when they practice at a skilled task such as playing football or riding a bike. Coaches and parents might or might not be conscious of logical proofs of correctness when teaching those tasks. Arguably a logical proof of (their definition of) correctness could be derived. But I am not sure that is what you mean.
Humans as a species do recursive self-modification through evolution. Correctness in that context is survival and the part under human control is selecting mates. I would like to have access to those proofs. They might come in handy when dating.
All very true. Which is one reason I dislike all talk of "complexity" - particularly in such a fuzzy context as debates with creationists.
But we do all have some intuitions as to what we mean by complexity in this context. Someone, I believe it was you, has claimed in this thread that evolution can generate complexity. I assume you meant something other than "Evolution harnesses mutation as a random input and hence as a source of complexity".
William Dembski is an "intelligent design theorist" (if that is not too much of an oxymoron) who has attempted to define a notion of "specified complexity" or "Complex Specified Information" (CSI). He has not, IMHO, succeeded in defining it clearly, but I think he is onto something. He asserts that biology exhibits CSI. I agree. He asserts that evolution under natural selection is incapable of generating CSI - claiming that NS can at best only transfer information from the environment to the genome. I am pretty sure he is wrong about this, but we need a clear and formal definition of CSI to even discuss the question intelligently.
So, I guess I want to turn your question around. Do you have some definition of "complexity" in mind which allows for correct mathematical thinking about these kinds of issues?
"NS can at best only transfer information from the environment to the genome." Does this statement mean to suggest that the environment is not complex?