Emile comments on What I would like the SIAI to publish - Less Wrong

Post author: XiXiDu | 01 November 2010 02:07PM | 27 points




Comment author: Emile 01 November 2010 08:09:37PM | 2 points

Furthermore, even if you suppose that Foom is likely, it's not clear where the threshold for Foom lies. Could a sub-human-level AI foom? What about a human-level intelligence? Or do we need super-human intelligence? Do we have good evidence for where the Foom threshold would be?

A "threshold" implies a linear scale for intelligence, which is far from given, especially for non-human minds. For example, say you reverse-engineer a mouse's brain, but then speed it up and give it much more memory, both short-term and long-term (if those are just RAM and/or disk space on a computer, expanding them is easy). How intelligent is the result? It thinks way faster than a human, remembers more, can make complex plans ... but is it smarter than a human?

Probably not, but it may still be dangerous. Same for a "toddler AI" with those modifications.

Comment author: timtyler 03 November 2010 07:45:05AM | 4 points

Human-level intelligence is fairly clearly just above the critical point (just look at what is happening now). However, machine brains have different strengths and weaknesses. Sub-human machines could accelerate the ongoing explosion a lot - if they are better than humans at just one thing - and such machines seem common.

Comment author: XiXiDu 02 November 2010 11:36:22AM | 3 points

Comment author: nhamann 01 November 2010 08:19:28PM | 2 points

Replace "threshold" with "critical point." I'm using this terminology because EY himself uses it to frame his arguments: see Cascades, Cycles, Insight, where Eliezer draws an analogy between a fission reaction going critical and an AI FOOMing.
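To make the fission analogy concrete, here is a toy sketch (my own illustration, not from the original posts): model each round of self-improvement as yielding k further units of improvement, where k is a purely hypothetical multiplication factor. Below k = 1 the cascade fizzles; above it, it explodes.

```python
def total_improvement(k, generations):
    """Cumulative improvement after a number of generations of a chain
    reaction in which each unit of improvement yields k further units."""
    total = 0.0
    amount = 1.0  # the initial "insight" that starts the cascade
    for _ in range(generations):
        total += amount
        amount *= k  # each generation multiplies the next by k
    return total

# Subcritical (k < 1): the cascade converges to a finite total.
print(total_improvement(0.5, 50))   # approaches 2.0
# Supercritical (k > 1): the cascade grows without bound.
print(total_improvement(1.1, 50))   # already in the thousands
```

The "critical point" is simply k = 1: a tiny change in k near that value flips the qualitative behavior from bounded to explosive, which is the intuition behind asking where the Foom threshold sits.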

"It thinks way faster than a human, remembers more, can make complex plans ... but is it smarter than a human?"

This seems to be tangential, but I'm gonna say no, as long as we assume that the sped-up mouse brain doesn't spontaneously acquire language or human-level abstract reasoning skills.