Emile comments on What I would like the SIAI to publish - Less Wrong

Post author: XiXiDu 01 November 2010 02:07PM

Comment author: nhamann 01 November 2010 07:46:31PM 7 points

Actually, you can spell out the argument very briefly. Most people, however, will immediately reject one or more of the premises due to cognitive biases that are hard to overcome.

It seems like you're essentially saying, "This argument is correct; anyone who thinks it is wrong is irrational." You could probably do without that; the argument is far from as simple as you present it. Specifically, the last point:

At minimum, this means any AI as smart as a human can be expected to become MUCH smarter than human beings -- probably smarter than all of the smartest minds the entire human race has ever produced, combined, without even breaking a sweat.

So I agree that there's no reason to assume an upper bound on intelligence, but you seem to be arguing that hard takeoff is inevitable, which, as far as I'm aware, has never been shown convincingly.

Furthermore, even if you suppose that Foom is likely, it's not clear where the threshold for Foom is. Could a sub-human-level AI foom? What about a human-level intelligence? Or would it take super-human intelligence? Do we have good evidence for where the Foom threshold would be?

I think the problems with resolving the Foom debate stem from the fact that "intelligence" is still largely a black box. It's all very well to say that intelligence is an "optimization process," but that is a fake explanation if I've ever seen one, because it fails to explain in any way what is being optimized.

I think you paint in broad strokes. The Foom issue is not resolved.

Comment author: Emile 01 November 2010 08:09:37PM 2 points

Furthermore, even if you suppose that Foom is likely, it's not clear where the threshold for Foom is. Could a sub-human-level AI foom? What about a human-level intelligence? Or would it take super-human intelligence? Do we have good evidence for where the Foom threshold would be?

A "threshold" implies a linear scale for intelligence, which is far from given, especially for non-human minds. For example, say you reverse engineer a mouse's brain, but then speed it up, and give it much more memory (short-term and long-term - if those are just ram and/or disk space on a computer, expanding those is easy). How intelligent is the result? It thinks way faster than a human, remembers more, can make complex plans ... but is it smarter than a human?

Probably not, but it may still be dangerous. Same for a "toddler AI" with those modifications.

Comment author: timtyler 03 November 2010 07:45:05AM 4 points

Human-level intelligence is fairly clearly just above the critical point (just look at what is happening now). However, machine brains have different strengths and weaknesses. Sub-human machines could accelerate the ongoing explosion a lot - if they are better than humans at just one thing - and such machines seem common.

Comment author: nhamann 01 November 2010 08:19:28PM 2 points

Replace "threshold" with "critical point." I'm using this terminology because EY himself uses it to frame his arguments. See Cascades, Cycles, Insight, where Eliezer draws an analogy between a fission reaction going critical and an AI FOOMing.

It thinks way faster than a human, remembers more, can make complex plans ... but is it smarter than a human?

This seems tangential, but I'm going to say no, as long as we assume that the mouse brain doesn't spontaneously acquire language or human-level abstract reasoning skills.