
KatjaGrace comments on SRG 4: Biological Cognition, BCIs, Organizations - Less Wrong Discussion

7 Post author: KatjaGrace 07 October 2014 01:00AM


Comments (139)


Comment author: KatjaGrace 07 October 2014 03:31:56AM 3 points

How would you start to measure intelligence in non-human systems, such as groups of humans?

Comment author: gwern 07 October 2014 03:45:21PM 5 points

One proposal is to measure predictive/reward-seeking ability on random small Turing machines: http://lesswrong.com/lw/42t/aixistyle_iq_tests/
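A toy sketch of this kind of measure (illustrative only, not the linked proposal's actual construction: random finite-state environments stand in for small Turing machines, and all names here are made up):

```python
import random

def random_environment(n_states=4, n_actions=2, seed=None):
    """A random small finite environment standing in for a 'random small
    Turing machine': each (state, action) pair maps to a next state and a
    reward in {0, 1}."""
    rng = random.Random(seed)
    trans = {(s, a): rng.randrange(n_states)
             for s in range(n_states) for a in range(n_actions)}
    reward = {(s, a): rng.choice([0, 1])
              for s in range(n_states) for a in range(n_actions)}
    return trans, reward

def score_agent(agent, n_envs=100, horizon=50):
    """Average per-step reward over many random environments: a crude,
    computable proxy for reward-seeking ability across machines."""
    total = 0.0
    for i in range(n_envs):
        trans, reward = random_environment(seed=i)
        state = 0
        for _ in range(horizon):
            action = agent(state)          # agent maps state -> action
            total += reward[(state, action)]
            state = trans[(state, action)]
    return total / (n_envs * horizon)

# A trivial agent that always picks action 0 scores somewhere in [0, 1]:
baseline = score_agent(lambda s: 0)
```

A fuller version, following Legg and Hutter, would weight each environment by its Kolmogorov complexity rather than sampling them uniformly; that weighting is what this sketch leaves out.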

Comment author: paulfchristiano 07 October 2014 03:19:23PM 2 points

Below you ask whether the definition of intelligence per se is important at all; it seems it's not, and this may be some indication of how to measure what you actually care about.

Comment author: skeptical_lurker 07 October 2014 07:17:25AM 2 points

Maybe a good starting point would be IQ tests?

Comment author: NxGenSentience 09 October 2014 03:06:08PM *  1 point

I am a little curious that the "seven kinds of intelligence" notion (give or take a few, in recent years) has not been mentioned much, if at all, even just for completeness. Has it been discredited by some body of argument or consensus that I missed somewhere along the line in the last few years?

This seems particularly relevant because many approaches to AI take, almost a priori (I'll skip the italics and save them for emphasis), the approach of the day to be: work on (ostensibly) "component" features of intelligent agents, as we conceive of them or find them naturalistically.
Thus: (i) machine "visual" object recognition (with the wavelength band up for grabs, perhaps, since some items might be better identified by switching up or down the E.M. scale) -- and visual intelligence was one of the proposed seven kinds; (ii) mathematical intelligence, or mathematical (dare I say it) intuition; (iii) facility with linguistic tasks, comprehension, and multiple language acquisition -- another of the proposed seven; (iv) manual dexterity, mechanical ability, and motor skill (as in athletics, surgery, maybe sculpture, carpentry, or whatever) -- another proposed form of intelligence; and so on. (As an aside, it is interesting that these alleged components span the spectrum of difficulty -- that is, they include problems from both easy and harder domains, as has been gradually, sometimes unexpectedly, revealed by the school of hard knocks during the decades of AI engineering attempts.)

It seems that actors sympathetic to the top-down, "piecemeal" approach popular in much of the AI community would have jumped at this way of supplanting the ersatz "g" -- as general intelligence was called decades ago, in psychology's and cogsci's early gropings toward a concept of IQ or living intelligence -- with what many in cognitive science now consider the more modern view, and what those in AI consider a more approachable engineering design strategy.

Any reason we aren't debating this more than we are? Or did I miss it in one of the posts, or bypass it inadvertently in my kindle app (where I read Bostrom's book)?

Comment author: SteveG 14 October 2014 12:54:32AM 0 points

Bring these questions back up in later discussions!

Comment author: NxGenSentience 14 October 2014 09:54:29AM 0 points

Will definitely do so. I can see several upcoming weeks when these questions will fit nicely, including perhaps the very next one. Regards....

Comment author: TRIZ-Ingenieur 08 October 2014 09:02:59PM 0 points

Survival was and is the challenge of evolution. Higher intelligence gives more options to cope with deadly dangers.

To measure intelligence, we should challenge AI entities with standardized tests. Developing these tests will become a new field of research. IQ tests are not suitable because of their anthropocentrism. Tests should instead measure how well and how fast real-world problems are solved.
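A minimal sketch of what such a test battery might look like (the task format, scoring rule, and all names here are invented for illustration, not a proposed standard):

```python
import time

def run_test_battery(solve, tasks):
    """Score a solver on standardized tasks by correctness and speed.
    `solve` maps a problem to an answer; each task is a tuple of
    (problem, expected_answer, time_limit_seconds)."""
    scores = []
    for problem, expected, limit in tasks:
        start = time.perf_counter()
        answer = solve(problem)
        elapsed = time.perf_counter() - start
        # Full credit for an instant correct answer, decaying linearly
        # to zero credit at the time limit; wrong answers score zero.
        if answer == expected:
            scores.append(max(0.0, 1.0 - elapsed / limit))
        else:
            scores.append(0.0)
    return sum(scores) / len(scores)

# Example battery: two arithmetic tasks with a one-second limit each.
tasks = [((2, 3), 5, 1.0), ((10, 7), 17, 1.0)]
battery_score = run_test_battery(lambda p: p[0] + p[1], tasks)
```

Real tasks would of course need to be far less anthropocentric than arithmetic; the point of the sketch is only that both solution quality and solution speed enter the score.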