Comment author: gallabytes 16 September 2014 03:16:46AM 1 point [-]

Do you have any examples of approaches that are indefinitely extendable?

Comment author: billdesmedt 16 September 2014 07:38:39PM 1 point [-]

Whole Brain Emulation might be such an example, at least insofar as nothing in the approach itself seems to imply that it would be prone to get stuck in some local optimum before its ultimate goal (AGI) is achieved.

Comment author: mvp9 16 September 2014 01:48:44AM 2 points [-]

Another way to get at the same point, I think, is to ask: are there things that we (contemporary humans) will never understand? (from a Quora post)

I think we can get some plausible insight on this by comparing an average person to the most brilliant minds today - or comparing the earliest recorded examples of reasoning in history to that of modernity. My intuition is that there are many concepts (quantum physics is a popular example, though I'm not sure it's a good one) that even most people today, and certainly in the past, will never comprehend, at least without massive amounts of effort, and possibly even then. They simply require too much raw cognitive capacity to appreciate. This is at least implicit in the Singularity hypothesis.

As to the energy issue, I don't see any reason to think that such super-human cognition systems necessarily require more energy - though they may at first.

Comment author: billdesmedt 16 September 2014 02:01:29AM 3 points [-]

Actually, wrt quantum mechanics, the situation is even worse. It's not simply that "most people ... will never comprehend" it. Rather, per Richard Feynman (inventor of Feynman Diagrams, and arguably one of the 20th century's greatest physicists), nobody will ever comprehend it. Or as he put it, "If you think you understand quantum mechanics, you don't understand quantum mechanics." (http://en.wikiquote.org/wiki/Talk:Richard_Feynman#.22If_you_think_you_understand_quantum_mechanics.2C_you_don.27t_understand_quantum_mechanics..22)

Comment author: KatjaGrace 16 September 2014 01:19:55AM 2 points [-]

Common sense and natural language understanding are suspected to be 'AI complete'. (p14) (Recall that 'AI complete' means 'basically equivalent to solving the whole problem of making a human-level AI')

Do you think they are? Why?

Comment author: billdesmedt 16 September 2014 01:51:38AM *  1 point [-]

I think natural language understanding, at least, is AI complete. Human-level natural language facility was, after all, the core competency by which Turing's 1950 Test proposed to determine whether -- across the board -- a machine could think.

Comment author: KatjaGrace 16 September 2014 01:21:45AM *  3 points [-]

What did you find least persuasive in this week's reading?

Comment author: billdesmedt 16 September 2014 01:47:02AM 6 points [-]

Not "least persuasive," but at least a curious omission from Chapter 1's capsule history of AI's ups and downs ("Seasons of hope and despair") was any mention of the 1966 ALPAC report, which singlehandedly ushered in the first AI winter by trashing, unfairly IMHO, the then-nascent field of machine translation.

Comment author: KatjaGrace 16 September 2014 01:06:36AM 1 point [-]

How should someone familiar with past work in AI use that knowledge to judge how much work is left to be done before reaching human-level AI, or human-level ability at a particular kind of task?

Comment author: billdesmedt 16 September 2014 01:38:27AM 3 points [-]

One way to apply such knowledge might be in differentiating between approaches that are indefinitely extendable and/or expandable and those that, despite impressive beginnings, tend to max out beyond a certain point. (Think of Joseph Weizenbaum's ELIZA as an example of the second.)