I agree that one problem with the wand is that it is not general. The same thing is true of paperclippers. Just as the wand is limited to converting things to gold, the paperclipper is limited to making paperclips.
But to call evolution intelligent is to speak in metaphor, and that indicates that your definition of intelligence is not a good one if we wish to speak strictly about it.
Humans certainly do recognize patterns in patterns. For example, we recognize that some things are red. That means recognizing a pattern: this red thing is similar to that red thing. Likewise, we recognize that some things are orange. This orange thing is similar to that orange thing. Likewise with other colors. And within those patterns we recognize other similarities, and so people talk about "warm" and "cool" colors, noticing that blue and green are similar to each other in some way, and that orange and red are similar to each other in another way. Likewise we have the concept of "color", which is noting that all of these patterns are part of a more general pattern. And then we notice that the concepts of "color" and "sound" have an even more general similarity to each other. And so on.
The neural networks you spoke of do nothing like this. Yes, you might be able to apply them to those various tasks, but they only generate something like base-level patterns, like noticing red and orange. They do not understand patterns of patterns.
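To make concrete what I mean by a "base-level pattern," here is a deliberately toy sketch (my own illustration with made-up data, not any of the networks you mentioned): a classifier that learns to tell red-ish from orange-ish RGB values. It captures one pattern, but nothing in it represents "color" as a concept, let alone a similarity between "color" and "sound."

```python
# Toy illustration (made-up data): a nearest-centroid classifier that learns
# one "base-level pattern" -- separating red-ish from orange-ish RGB triples.
# Nothing in it represents "color" as a concept over concepts.
import numpy as np

# Hypothetical labeled examples: 0 = red-ish, 1 = orange-ish.
X = np.array([[200, 20, 30], [220, 10, 40], [210, 30, 20],
              [230, 120, 20], [240, 140, 30], [220, 130, 10]])
y = np.array([0, 0, 0, 1, 1, 1])

# One mean vector per label.
centroids = np.array([X[y == k].mean(axis=0) for k in (0, 1)])

def classify(rgb):
    """Return 0 ("red") or 1 ("orange") for an RGB triple."""
    dists = np.linalg.norm(centroids - np.asarray(rgb), axis=1)
    return int(dists.argmin())

print(classify([215, 25, 35]))   # -> 0 (red-ish)
print(classify([235, 135, 25]))  # -> 1 (orange-ish)
```

Stacking more classifiers like this gives you more base-level patterns, not an understanding of the pattern those patterns share.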
I think that saying "only about a million years" was needed for something implies a misunderstanding, at least on some level, of how long a million years is.
I agree that babies have the ability to be intelligent all along. Even when they are babies, they are still recognizing patterns in patterns. None of our AI programs do this at all.

First, if you reprogram the paperclipper to value something other than paperclips, then you have a different program. The original one cannot value anything except paperclips.
Second, the idea that a paperclipper can "solve problems, speak language etc." is simply assuming what you should be proving. The point of the wand is that something that is limited to a single goal does not do those things, and I do not expect anything limited to the goal of paperclips to do such things, even if they would serve paperclips.
I understand how word vectors work, and no, they are not what I am talking about.
"That's just language." Yes, if you know how to use language, you are intelligent. Currently we have no AI remotely close to actually being able to use language, as opposed to briefly imitating the use of language.
It's possible to construct a paperclipper in theory. AIXI-tl is basically a paperclipper. Its goal is not paperclips but maximizing a reward signal, which can come from anything (perhaps a paperclip recognizer...). AIXI-tl is very inefficient, but it's a proof of concept that paperclippers are possible to construct. AIXI-tl is fully capable of speaking, solving problems, anything that it predicts will lead to more reward.
A real AI would be a much more efficient approximation of AIXI. Perhaps something like modern neural nets that can predict which actions will lead to reward. Probably something more complicated. But it's definitely possible to construct paperclippers that only care about maximizing some arbitrary reward. The idea that merely having the goal of getting paperclips would somehow make it incapable of doing anything else is just absurd.
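To illustrate the point, here is a deliberately toy sketch (made-up action names and world model, nothing like AIXI-tl's actual construction): a brute-force planner whose only goal is to maximize whatever number the reward function returns. The planner itself knows nothing about paperclips; swap in a different reward function and the same machinery pursues a different goal, which is the opposite of being incapable of doing anything else.

```python
from itertools import product

# Toy illustration (assumed names and world model, not AIXI-tl itself):
# a brute-force planner whose only goal is "maximize whatever number the
# reward function returns".  The planner is fully general; only the reward
# function mentions paperclips.

def plan(actions, simulate, reward, horizon=4):
    """Return the action sequence the world model predicts yields most reward."""
    best_seq, best_score = None, float("-inf")
    for seq in product(actions, repeat=horizon):
        predicted = simulate(seq)     # predicted consequences of the sequence
        score = reward(predicted)     # arbitrary reward signal
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq

def simulate(seq):
    """Trivial made-up world model: 'make' turns one unit of wire into a clip,
    'trade' turns one clip into two units of wire, 'wait' does nothing."""
    clips, wire = 0, 1
    for a in seq:
        if a == "make" and wire > 0:
            clips, wire = clips + 1, wire - 1
        elif a == "trade" and clips > 0:
            clips, wire = clips - 1, wire + 2
    return {"paperclips": clips, "wire": wire}

# The "goal" is nothing but a reward signal -- here a paperclip counter.
paperclip_reward = lambda state: state["paperclips"]

print(plan(["make", "trade", "wait"], simulate, paperclip_reward))
# -> ('make', 'trade', 'make', 'make')
```

Replace `paperclip_reward` with any other function over states and the identical planning code optimizes for that instead; nothing about the single goal limits what the machinery can do.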
As for your hypothesis of what intelligence is, I find it incredibly unconvincing. It's true I don't necessarily have a better hypothesis, because no one does. No one knows how the brain works. But just asserting a vague hypothesis like that doesn't help anyone unless it actually explains something or helps us build better models of intelligence. I don't think it explains anything. It's definitely not specific enough to build an actual model out of.
But really it's irrelevant to this discussion. Even if you are correct, it doesn't say anything about AI progress. In fact, if you are right, it could mean AI comes even sooner. Because if it's correct, it means AI researchers just need to figure out that one idea to suddenly make intelligent AIs. If we are only one breakthrough like that away from AGI, we are very close indeed.