Lanier's thinking comes across to me as muddy and mixed up, and his argumentative style as obnoxious. I tried, but failed, to distill anything of value from his side of it. You're a patient man, Eliezer.

The things you have had to say about "impossible" research problems are among your most insightful. They fly right in the face of the more devilishly erroneous human intuitions, especially group intuitions, about what are and are not good ways to spend time and resources.

epwripi: This might sound cockier than I mean it to, but really, I tire of such assertions. I know what intelligence is, and I suspect many here do as well. Plenty of good definitions have been put forth, but somebody will always nitpick because a definition doesn't include their favorite metaphor, and some people simply don't want it defined at all. At the least, it can be quantified roughly and relatively (though "points on a linear scale" tends toward a strawman extreme), and when people speak of an individual's intelligence, it's implied that they mean at a certain time, or as an average over time. It's trivial to point out that individuals can be both consistent and inconsistent. The same goes for athletic ability.

Ori: It seems to me that what you're describing has already been approximated, due to the filtering effects of certain job markets and employers. Look to Seattle's Eastside, or Silicon Valley. I've never been to the latter, but the former is a lot like heaven, except that the streets aren't slated to be paved with gold until 2014. (Planning takes time.)

Shane, I was basically agreeing with you with regard to problem spaces: normalizing for space size isn't enough; you've also got to normalize whatever else makes the spaces incomparable. However, let's not confuse problem space with state space. Eliezer focuses on the latter, which I think is pretty trivial compared to what you're alluding to.

Shane, that seems vaguely like trying to compare two positions in different coordinate systems without transforming one into the other first. Surely there is a transform that would convert the "hard" space into the terms of the "easy" space, so that the sizes of the targets could be compared apples to apples.
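
To make that concrete (a sketch of my own, not something Shane or Eliezer has endorsed): one way to put two targets on a common footing is to measure each as a fraction of its own space. If $T_1 \subseteq S_1$ and $T_2 \subseteq S_2$ are the target regions, compare

$$\log_2 \frac{|S_1|}{|T_1|} \quad \text{versus} \quad \log_2 \frac{|S_2|}{|T_2|},$$

so that hitting a one-in-a-million target counts for the same number of bits regardless of what the spaces contain. The "transform" I have in mind would then be any measure-preserving map $f : S_1 \to S_2$, under which relative target sizes carry over directly.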

Ian: I would say that his results are superior, but that he is not. Also, it is a mistake to pretend that the internal life doesn't exist.

Whenever people accuse me of not being open-minded, I freely admit that I wouldn't claim to be, because I've reached conclusions. Does being receptive to new evidence count?

Eliezer, you have been pretty prolific. I've thoroughly enjoyed digesting your writing lo these eight years, but this blogging thing seems to have worked out especially well. Enjoy your well-deserved regrouping time.

If a creature engages in goal-directed activity, then I call it intelligent. If by "having said goal" you mean "consciously intends it", then I regard the faculties for consciously intending things as a more sophisticated means for aiming at goals. If intercepting the ball is characterized (not defined) as "not intelligent", that is true relative to some other goal that supersedes it.

I'm basically asserting that the physical evolution of a system towards a goal, in the context of an environment, is what is meant when one distinguishes something that is "intelligent" from something (say, a bottle) that is not. Here, it is important to define "goal" and "environment" very broadly.

Of course, people constantly use the word "intelligence" to mean something more complicated and higher-level. So, someone might say that a human is definitely "intelligent", and maybe a chimp, but definitely not a fly. Well, I think that usage is a mistake, because this is a matter of degree. I'm saying that a fly has the "I" in "AI", just to a lesser degree than a human. One might argue that the fly doesn't make plans, or use tools, or any number of accessories to intelligence, but I see those faculties as upgrades that raise the degree of intelligence, rather than defining it.

Before you start thinking about "minds" and "cognition", you've got to think about machinery in general. When machinery acquires self-direction (implying something toward which it is directed), a qualitative line is crossed. When machinery acquires faculties or techniques that improve self-direction, I think that is more appropriately considered quantitative.
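
As a toy illustration of that line (my own example, nothing more): a thermostat is about the simplest machinery with self-direction, while a bottle has none.

```python
# A minimal sketch of the qualitative line between machinery with
# self-direction and machinery without it (a toy example of my own).

def bottle(temperature: float) -> float:
    """A bottle has no goal: its state simply follows the environment."""
    return temperature  # nothing here refers to a target state

def thermostat(temperature: float, setpoint: float = 20.0, gain: float = 0.1) -> float:
    """A thermostat is (barely) self-directed: its next state depends on
    the gap between where it is and where it is 'aimed'."""
    error = setpoint - temperature
    return temperature + gain * error  # evolves toward the goal

if __name__ == "__main__":
    temp = 5.0
    for _ in range(50):
        temp = thermostat(temp)
    print(f"after 50 steps: {temp:.2f}")  # closes in on the setpoint
```

On this view, planning and tool use just sharpen the same loop (better models of the error, longer horizons), which is why I count them as quantitative upgrades rather than as what defines the thing.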