Nick_Tarleton comments on Changing accepted public opinion and Skynet - Less Wrong

Comment author: Nick_Tarleton 23 May 2009 01:50:10AM *  4 points

Why does AI design need to have anything to do with the brain? (Third Alternative: ab initio development based on a formal normative theory of general intelligence, not a descriptive theory of human intelligence, comprehensible even to us, to say nothing of to itself once it gets smart enough.)

(Edit: Also, it's a huge leap from "no one is coming up with simple theories of the brain yet" to "we may well never understand intelligence".)

Comment author: whpearson 23 May 2009 09:06:13AM 0 points

A specific AI design need be nothing like the design of the brain. However, the brain is the only object we know of in mind space, so our difficulty understanding it is evidence, albeit very weak evidence, that we may have difficulty understanding minds in general.

We might expect it to be a special case, since we are trying to understand the methods of understanding themselves, which makes the exercise somewhat self-referential.

If you read my comment, you'll see I only raised it as a possibility, something to try to estimate the probability of, rather than necessarily the most likely case.

What probability would you assign to this scenario, and why?

There might be formal proofs, but they would probably depend on how you define things like understanding. I've been trying to think of mathematical formalisms to explore this question, but I haven't come up with a satisfactory one yet.

Comment author: Peter_de_Blanc 23 May 2009 12:47:45PM 0 points

I've been trying to think of mathematical formalisms to explore this question, but I haven't come up with a satisfactory one yet.

Have you looked at AIXI?

Comment author: whpearson 23 May 2009 03:42:15PM 0 points

It is trivially true that one AIXI agent can't comprehend another instance of AIXI, if by "comprehend" you mean form an accurate model of it.

AIXI expects its environment to be computable, but is itself incomputable. So if one AIXI agent comes across another, it won't be able to form a true model of it: the other agent simply isn't in its hypothesis space.
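(A minimal sketch of why, using the standard presentation of AIXI; the symbols ξ, M and K below are Hutter's notation, not anything from this thread. AIXI's beliefs about its environment are the Solomonoff-style mixture over the class M of computable environments ν (more precisely, lower semicomputable semimeasures),

\[ \xi(x) \;=\; \sum_{\nu \in \mathcal{M}} 2^{-K(\nu)}\, \nu(x), \]

where K(ν) is the length of the shortest program computing ν, and AIXI chooses actions to maximize expected reward under ξ. Since AIXI itself is incomputable, it is not a member of M, so no hypothesis in the mixture exactly reproduces the behaviour of a second AIXI agent; the best the first agent can do is approximate it with computable models.)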

However, I am not sure how much this argument is worth, as we expect any real intelligence to be computable.