I agree that one problem with the wand is that it is not general. The same thing is true of paperclippers. Just as the wand is limited to converting things to gold, the paperclipper is limited to making paperclips.
The paperclipper can be programmed to value any goal other than paperclips. Making paperclips is just its current goal. The gold wand cannot do anything else.
But even if its desire for paperclips is immutable and hardwired, it's still clearly intelligent. It can solve problems, speak language, design machines, etc., so long as doing so serves its goal of making paperclips.
Humans certainly do recognize patterns in patterns. For example, we recognize that some things are red. That means recognizing a pattern: this red thing is similar to that red thing. Likewise, we recognize that some things are orange.
Artificial neural networks can do the same thing. This is a trivial property of NNs: similar objects produce similar internal representations. Those internal representations tend to be semantically meaningful; look up word vectors for an example.
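To make the "similar objects produce similar representations" claim concrete, here is a minimal sketch. The three-dimensional vectors below are hypothetical stand-ins; real word vectors (e.g. from word2vec or GloVe) have hundreds of dimensions and are learned from co-occurrence statistics, but the geometric idea is the same: semantically similar words end up with a high cosine similarity.

```python
import math

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar directions, near 0.0 for orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy "word vectors" -- chosen by hand for illustration,
# not learned by any actual network.
vectors = {
    "red":    [0.9, 0.8, 0.1],
    "orange": [0.8, 0.7, 0.2],
    "blue":   [0.1, 0.2, 0.9],
}

# "red" and "orange" point in similar directions; "red" and "blue" do not.
print(cosine(vectors["red"], vectors["orange"]))  # high (close to 1)
print(cosine(vectors["red"], vectors["blue"]))    # low (well below 1)
```

In a trained embedding space the same comparison works: the vector for "red" sits closer to "orange" than to "blue", which is what it means for the internal representation to capture a pattern of similarity.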
And within those patterns we recognize other similarities, and so people talk about "warm" and "cool" colors, noticing that blue and green are similar to each other in some way, and that orange and red are similar to each other in another way.
That's not a "pattern within a pattern". That's just a typical pattern: that green and blue appear near "cool" things and that orange and red appear near "warm" things.
Likewise we have the concept of "color", which is noting that all of these patterns are part of a more general pattern.
That's just language. The word "color" happens to be useful to communicate with people. I agree that language learning is important for AI. And this is a field that is making rapid progress.
If you reprogram the paperclipper to value something other than paperclips, then you have a different program. The original one cannot value anything except paperclips.
Second, the idea that a paperclipper can "solve problems, speak language etc." is simply assuming what you should be proving. The point of the wand is that something that is limited to a single goal does not do those things, and I do not expect anything limited to the goal of paperclips to do such things, even if they would serve paperclips.
I understand how word vectors work, and n...