Felix2

It sounds like you're pegging "intelligence" to mean what I'd call a "universal predictor". That is, something that can predict the future (or an unknown) given some information, and that can do so across a variety of kinds of unknowns, where "variety" involves more than a little hand-waving.

Therefore, something that catches a fly ball ("knowing" the rules of parabolic movement) can predict the future, but is not particularly "intelligent" if that's all it can do. It may even be a wee bit more "intelligent" if it can also predict where a mortar shell lands. It is even more "intelligent" if it predicts how to land a rocket on the moon. It is even more "intelligent" if it predicts the odds that any given cannon ball will land on a fort's walls. Etc.
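For what it's worth, the fly-ball predictor is simple enough to sketch. A minimal illustration of my own, assuming ideal drag-free parabolic motion (the function name and numbers are made up for the example):

```python
import math

def landing_distance(speed, angle_deg, g=9.81):
    """Predict the horizontal range of ideal parabolic motion (no drag)."""
    angle = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * angle) / g

# A "predictor" this narrow handles fly balls and mortar shells alike,
# but only because both reduce to the same parabola.
print(round(landing_distance(30.0, 45.0), 1))  # prints 91.7 (metres)
```

The point being: it predicts one family of futures perfectly well and is "intelligent" about nothing else.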

I agree with Brian that this is a narrow definition of "intelligence". But that doesn't stop it from being an appropriate goal for AI at this time. That the word "intelligence" is chosen to denote this goal seems more a result of culture than anything else. AI people go through a filter that extols "intelligence". So ... (One is reminded of many years ago when some AI thinkers had the holy grail of creating a machine that would be able to do the highest order of thinking the AI thinkers could possibly imagine: proving theorems. Coincidentally, this is what these thinkers did for a living.)

Here's a thought on pinning down that word, "variety".

First, it seems to me that a "predictor" can be optimized to predict one thing very well. Call it a "tall" predictor (accuracy in Y, problem-domain-ness in X). Or it can be built to predict a lot of things rather poorly, but better than a coin. Call it a "flat" predictor. The question is: How efficient is it? How much prediction-accuracy comes out of this "predictor" given the resources it consumes? Or, using the words "tall" and "flat" graphically, what's the surface area covered by the predictor, given a fixed amount of resources?
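A crude sketch of that surface-area idea, with made-up numbers purely for illustration (the 0.5 baseline assumes binary prediction against a fair coin):

```python
def surface_area(accuracy_by_domain):
    """Crude 'surface area' of a predictor: summed above-chance accuracy
    across the domains it covers (chance = 0.5 for binary prediction)."""
    return sum(max(0.0, acc - 0.5) for acc in accuracy_by_domain.values())

# A "tall" predictor: one domain, predicted very well.
tall = {"fly_balls": 0.99}
# A "flat" predictor: many domains, each barely better than a coin.
flat = {d: 0.55 for d in ("weather", "stocks", "chess", "traffic", "crops")}

print(round(surface_area(tall), 2))  # 0.49
print(round(surface_area(flat), 2))  # 0.25
```

On this toy measure the tall predictor "covers" more surface per domain, and the real question becomes how much of either surface you get per unit of resources spent.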

Would not "intelligence", as you mean it, be slightly more accurately defined as how efficient a predictor is and, uh, it's gotta be really wide or we ignore it?

Felix2

Beautiful idea!

Is a Wiki separate from Wikipedia needed?

Similar problem: One thing I run into often on Wikipedia is entries that use the field's particular mathematical notation for no reason other than that those symbols and expressions are the jargon of the field. They get in the way of understanding what the entry is saying, though.

A similar problem: there seem to be academic papers that have practical applications and yet are written to be as unclear as possible, perhaps to take on that "important" sheen, perhaps simply because the authors are deep in their own jargon and assume all readers know everything they know. Consider papers in the AI field. :)

Felix2

Has anyone built the equivalent of a Turing machine using processor count and/or replicated input data as the cheap resource rather than time?

That is, what could a machine that does everything in one step do in the way of useful work? With or without restrictions on how many replications of the input data there are going in and where the output might come out?
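I'm not aware of a machine built under that name, but the usual framing of the idea trades sequential steps for processor count, as in parallel circuit depth. A toy sketch of my own, simulating one imaginary processor per pair in each round:

```python
def parallel_reduce(xs, op):
    """Simulate a machine where processor count is the cheap resource:
    with enough processors per round, an associative op finishes in
    ceil(log2(n)) parallel steps instead of n - 1 sequential ones."""
    steps = 0
    while len(xs) > 1:
        # Each pair is combined "simultaneously" by its own processor.
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
        steps += 1
    return xs[0], steps

total, steps = parallel_reduce(list(range(16)), lambda a, b: a + b)
print(total, steps)  # prints: 120 4
```

Pushing to the literal one-step extreme (every possible combination handled by its own processor at once) is essentially what constant-depth circuit models study, with replicated input wires standing in for your replicated input data.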

OK, OK. "Dude, what are you smoking?", right? :)

Felix2

Does this mean that if we cannot remember ever changing our minds, our minds are very good at removing clutter?

Or, consider a question that you've not made up your mind on: Does this mean that you're most likely to never make up your mind?

And, anyway, in light of those earlier posts concerning how well people estimate numeric probabilities, should it be any wonder that 66% = 96%?

Felix2

Nick: Nice spin! :) Context would be important if Eliezer had not asserted as a given that many, many experiments have been done to preclude any influence of context. My extremely limited experience and knowledge of psychological experiments says that there is a 100% chance that such is not a valid assertion. Imagine a QA engineer trying to skate by with the setups of psych experiments you have run into. But, personal, anecdotal experience aside, it's real easy to believe Eliezer's assertion is true. Most people might have a hard time tuning out context, though, and therefore might have a harder time, both with conjunction fallacy questionnaires and with accepting Eliezer's assertion.

g: Yes, keeping in mind that I would be first in line to answer C, myself!

Choice (B) seems a poster boy for "representation". So, that a normal person would choose B is yet another example of this "probability" question not being a question about probability, but about "representation". Which is the point. Why is it hard to imagine that the word "probable" does not mean, in such questions' contexts, or even, perhaps, in normal human communication, "probable" as a gambler or statistician would think of its meaning? Or, put another way, g, "who try to answer the question they're asked rather..." is an assumptive close. I don't buy it. They were not asked the question that you, I, Eliezer, the logician, or the autistic thought they were asked. They were asked the question that they understood. And they have the votes to prove it. :)

So far as people making simple logical errors in computing probabilities, as is implied by the word, "fallacy", well, yeah. Your computer can beat you in both logic and probabilities. Just as your calculator can multiply better than you.

Anyway, I believe that the functional equivalent of visual illusions is inherent in anything one might call a mind. I'm just not convinced that this conjunction fallacy is such a case. The experiments mentioned seem more to identify and wonderfully clarify an interesting communications issue, one that probably stands out simply because there are, in these times, many people who make a living answering C.

Felix2

Arrrr. Shiver me timbers. I shore be curious what the rank be of "Linda is active in the feminist movement and is a bank teller" would be, seeing as how its meanin' is so far diff'rent from the larboard one aloft.

A tip 'o the cap to the swabbies what found a more accurate definition of "probability" (I be meanin' "representation".) than what logicians assert the meaning o' "probability" be. Does that mean, at a score of one to zero, all psychologists are better lexicographers than all logicians?

Felix2

Quote: "We think in words, "

No we don't. Apparently you do, though. No reason to believe otherwise. :)

Please keep up these postings! They are very enjoyable.

Going back to "explaining" something by naming it (from a couple of your earlier posts):

e.g. Q: Why does this block fall to the floor when I let go of it? ... A: Gravity!

I always thought that such explanations were common side-effects of thinking in words. Sort of like optical illusions are side-effects of how the visual system works. Perhaps not. One does not need to use words to think symbolically. There are, after all, other ways to do lossy compression than with symbols.

Anyway, I'll still assert that it's easier to fall for such an "explanation" if you think in words. ... An easy assertion, given how hard it is to count the times one does it!