shokwave comments on David Chalmers' "The Singularity: A Philosophical Analysis" - Less Wrong

33 Post author: lukeprog 29 January 2011 02:52AM




Comment author: shokwave 01 February 2011 07:24:58AM 2 points [-]

We have a much greater understanding of what the "think" in "Can machines think?" means now. We have better tests than seeing if they can fake human language.

Comment author: jacob_cannell 01 February 2011 07:31:14AM 0 points [-]

The test isn't about faking human language; it's about using language to probe another mind. Whales and elephants have brains built from similar quantities of the same cortical circuits, but without a common language, stepping into their minds is very difficult.

What's a better test for AI than the Turing test?

Comment author: wedrifid 01 February 2011 07:46:14AM 3 points [-]

What's a better test for AI than the Turing test?

Give it a series of fairly difficult and broad ranging tasks, none of which it has been created with existing specialised knowledge to handle.

Comment author: jacob_cannell 01 February 2011 07:57:22AM 0 points [-]

Yes - the AIQ idea.

But how do you describe the task, and how does the AI learn about it? There's a massive gulf between AIs that can have the task/game described to them in human language and those that cannot. Whale and elephant brains fall in the latter category. An AI that can realistically self-improve to human levels needs to be in the former category, like a human child.

You could define intelligence with an AIQ concept so abstract that it captures only learning from scratch, without absorbing human knowledge, but that would be a different concept - it wouldn't represent practical capacity to intellectually self-improve in our world.
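For reference, the abstract AIQ measure being discussed is usually formalised along the lines of Legg and Hutter's universal intelligence (my gloss, not part of the original thread):

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where \pi is the agent, E the class of computable reward environments, K(\mu) the Kolmogorov complexity of environment \mu, and V_{\mu}^{\pi} the agent's expected total reward in \mu. Nothing in this sum rewards absorbed human knowledge, which is exactly the objection being raised here.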

Comment author: shokwave 01 February 2011 08:39:46AM 0 points [-]

But how do you describe the task and how does the AI learn about it?

Use something like Prolog to declare the environment and the problem. If I knew how the AI would learn about it, I could build an AI already. And indeed, there are whole fields of machine learning devoted to such learning - Bayesian inference, for instance.
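As a concrete illustration of the kind of machinery being alluded to, here is a toy Bayesian inference sketch (a hypothetical example, not from the original thread): learning a coin's bias by conjugate updating of a Beta prior.

```python
# Toy Bayesian inference: infer a coin's bias from observed flips
# by updating a Beta(alpha, beta) prior with binomial data.

def posterior_beta(flips, alpha=1.0, beta=1.0):
    """Update a Beta(alpha, beta) prior with coin flips
    (1 = heads, 0 = tails); return posterior parameters."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

# Observe 7 heads in 10 flips under a uniform Beta(1, 1) prior.
a, b = posterior_beta([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
posterior_mean = a / (a + b)  # 8/12, about 0.667
```

The point of the example: nothing about the coin is hard-coded; the system extracts its "knowledge" of the coin from data plus a prior.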

Comment author: jacob_cannell 02 February 2011 07:25:11AM 0 points [-]

If you have to describe every potential problem to the AI in Prolog, how will it learn to become a computer scientist or quantum physicist?

Comment author: shokwave 02 February 2011 07:33:07AM *  0 points [-]

Describe the problem of learning how to become a computer scientist or quantum physicist, then let it solve that problem. Now it can learn to become a computer scientist or quantum physicist.

(That said, a better method would be to describe computer science and quantum physics and just let it solve those fields.)

Comment author: jacob_cannell 02 February 2011 07:49:00AM 0 points [-]

Or a much better method: describe the problem of an AI that can learn natural language, the rest follows.

Comment author: shokwave 02 February 2011 09:39:00AM 0 points [-]

Except for all problems which are underspecified in natural language.

Which might be some pretty important ones.

Comment author: wedrifid 01 February 2011 08:07:42AM -1 points [-]

Agreement that human children are more intelligent than whales or elephants is likely to be the closest we get to agreement on this subject. You would need to absorb a lot of new knowledge from the replies various sources have already provided to you here before progress is possible.

Comment author: jacob_cannell 02 February 2011 07:37:10AM 0 points [-]

Unfortunately it seems we are not even fully in agreement about that. A Turing-style test is a test of knowledge; an AIQ-style test is a test of abstract intelligence.

An AIQ-type test which measures only abstract intelligence fails to differentiate between a feral Einstein and an educated Einstein.

Effective intelligence - perhaps call it wisdom - is some product of intelligence and knowledge. The difference between human minds and those of elephants or whales is one of knowledge.

My core point, to reiterate again: the defining characteristic of human minds is knowledge, not raw intelligence.

Comment author: shokwave 02 February 2011 07:44:14AM 0 points [-]

Intelligence can produce knowledge from the environment. A feral Einstein would develop knowledge of the world, to the extent that he wasn't limited by factors other than knowledge and intelligence (like finding shelter or feeding himself).

Comment author: anon895 01 February 2011 07:55:34AM 0 points [-]

Possibly relevant: AIXI-style IQ tests.
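The AIXI-style IQ tests mentioned here (Legg and Veness's AIQ work) score an agent by its average reward across sampled environments, weighting simpler environments more heavily. The sketch below is a deliberately toy stand-in under assumed definitions - the `environment` and agents are hypothetical, not the real reference-machine-based test:

```python
import random

# Toy AIQ-style measurement: complexity-weighted average reward
# over randomly sampled environments. Shorter (simpler) environments
# get weight 2^(-complexity), echoing a Solomonoff-style prior.

def environment(complexity, rng):
    """Stand-in environment: rewards the agent for matching a hidden
    bit pattern whose length grows with complexity."""
    target = [rng.randint(0, 1) for _ in range(complexity)]
    def run(agent):
        guesses = agent(complexity)
        hits = sum(g == t for g, t in zip(guesses, target))
        return hits / complexity  # reward in [0, 1]
    return run

def aiq(agent, max_complexity=8, trials=200, seed=0):
    """Complexity-weighted average reward, normalised by total weight."""
    rng = random.Random(seed)
    total, weight_sum = 0.0, 0.0
    for _ in range(trials):
        k = rng.randint(1, max_complexity)
        w = 2.0 ** -k  # simpler environments weigh more
        total += w * environment(k, rng)(agent)
        weight_sum += w
    return total / weight_sum

zero_agent = lambda n: [0] * n
random_agent = lambda n: [random.randint(0, 1) for _ in range(n)]
```

Note how this captures the "knowledge-free" character debated above: the score depends only on performance inside sampled environments, never on anything the agent was told in natural language.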