gjm comments on AI is not enough - Less Wrong Discussion

Post author: benjayk 07 February 2012 03:53PM -22 points


Comment author: gjm 07 February 2012 04:57:44PM 10 points

I'm afraid just about everything here is wrong.

at some point we need something fundamentally non-algorithmic

No. Our brains are already implementing lots of algorithms. So far as we know, anything human beings come up with -- however creative -- is in some sense the product of algorithms. I suppose you could go further back -- evolution, biochemistry, fundamental physics -- but (1) it's hard to see how those could actually be relevant here and (2) as it happens, so far as we know those are all ultimately algorithmic too.

we will always have to rely on some non-algorithmical intelligence to find more intelligent solutions.

No (not even if you were right about ultimately needing something fundamentally non-algorithmic). Suppose you have some initial magic non-algorithmic step where the Finger of God implants intelligence into something (a computer, a human being, whatever). After that, that intelligent thing can design more intelligent things which design more intelligent things, etc. The alleged requirement to avoid an infinite regress is satisfied by that initial Finger-of-God step, even if everything after that is algorithmic. There's no reason to think that continued non-algorithmic stuff is called for.
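To make the structure of that reply concrete, here is a deliberately silly toy sketch (nothing in it comes from the comment itself; the function names and the 10%-per-generation number are placeholders I've made up): one non-algorithmic "seed" step, followed by a purely algorithmic chain of designers building better designers.

```python
# Toy illustration, not a real AI: after a single exogenous "seed" step,
# every later improvement is produced algorithmically by the previous designer.

def seed_designer():
    """Stands in for the one-off non-algorithmic step (the "Finger of God")."""
    return {"capability": 1.0}

def design_successor(designer):
    """A purely algorithmic step: the current designer builds a better one.
    The 10% gain per generation is an arbitrary placeholder."""
    return {"capability": designer["capability"] * 1.1}

designer = seed_designer()      # the one non-algorithmic step, by assumption
for generation in range(1, 6):  # everything after it is algorithmic
    designer = design_successor(designer)
    print(f"generation {generation}: capability {designer['capability']:.2f}")
```

The only point of the sketch is that the regress terminates at the seed: no step after it needs to be non-algorithmic.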

we have no reason to suppose we can't find another more powerful one.

That might be true. It might even be true -- though I don't think you've given coherent reasons to think so -- that there'll always be a possible Next Big Thing that can't be found algorithmically. So what? A superintelligent AI isn't any less useful, or any less dangerous, merely because a magical new-AI-creating process might be able to create an even more superintelligent AI.

No algorithm can determine the simple axioms of the natural numbers from anything weaker.

It is not clear that this means anything. You certainly have given no reasons to believe it.

There is simply no way to derive the axioms from anything that doesn't already include it.

I think you are confusing two things: derivations within some formal system such as Peano arithmetic (where, indeed, the only way to get the axioms is to begin with them, or with other axioms that imply them), and the quite different sort of derivation that happens outside the formal system, such as whatever Peano did to arrive at his axioms in the first place. I know of no reason to believe that the latter is fundamentally non-algorithmic, though for sure we don't know what algorithms would be best.
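For reference (this is standard textbook material, not something from the original post), the first-order Peano axioms in question are short enough to state in full. Inside the system, a "derivation" of an axiom is trivially just the axiom itself; writing the axioms down in the first place, as Peano did in 1889, is an activity outside the system.

```latex
% One standard presentation of first-order Peano arithmetic (PA):
\begin{align*}
& \forall n\; S(n) \neq 0 \\
& \forall m\,\forall n\; (S(m) = S(n) \rightarrow m = n) \\
& \forall n\; n + 0 = n \\
& \forall m\,\forall n\; m + S(n) = S(m + n) \\
& \forall n\; n \cdot 0 = 0 \\
& \forall m\,\forall n\; m \cdot S(n) = m \cdot n + m \\
& \bigl(\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(S(n)))\bigr)
  \rightarrow \forall n\,\varphi(n)
  \qquad \text{(induction schema, one instance per formula } \varphi\text{)}
\end{align*}
```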

general intelligence necessarily has to transcend rules

I know of no reason to believe this. If it seems true, I suspect that's because what you imagine when you think about following rules is very simple rule-following -- the sort of thing that might be done by a computer program at most a few pages long, running on a rather slow computer. In particular ...

since at the very least the rules can't be determined by rules

Whyever not? They have to be different rules, that's all.
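For instance (a trivial sketch of my own, not anything from the post): a meta-rule can generate object-level rules, so no rule ever has to determine itself.

```python
# Toy illustration: rules determined by *different* rules.
# A "meta-rule" here is just a function that manufactures ordinary rules.

def make_divisibility_rule(k):
    """Meta-rule: given k, emit the object-level rule 'is n divisible by k?'."""
    def rule(n):
        return n % k == 0
    return rule

# One higher-level rule determines arbitrarily many lower-level ones.
is_even = make_divisibility_rule(2)
is_multiple_of_seven = make_divisibility_rule(7)

print(is_even(10), is_multiple_of_seven(10))  # True False
```

No regress arises at any particular level; each rule is fixed by a different rule one level up.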

Instead, we should expect a singularity that happens due to emergent intelligence.

"Emergence" is not magic.

not just one particular kind of intelligence like formal reasoning used by computers

Well, that may well be correct, in the sense that good paths to AI might involve plenty of things that aren't best thought of as "formal reasoning". (Though if they run on conventional computers, they will be equivalent in some sense to monstrously complicated systems of formal reasoning.)