Whole Brain Emulation might be such an example, at least insofar as nothing in the approach itself seems to imply that it would be prone to get stuck in some local optimum before its ultimate goal (AGI) is achieved.

Actually, wrt quantum mechanics, the situation is even worse. It's not simply that "most people ... will never comprehend" it. Rather, per Richard Feynman (inventor of Feynman diagrams, and arguably one of the 20th century's greatest physicists), nobody will ever comprehend it. Or as he put it, "If you think you understand quantum mechanics, you don't understand quantum mechanics." (http://en.wikiquote.org/wiki/Talk:Richard_Feynman#.22If_you_think_you_understand_quantum_mechanics.2C_you_don.27t_understand_quantum_mechanics..22)

Human-level natural-language facility was, after all, the core competency by which Turing's 1950 test proposed to determine whether -- across the board -- a machine could think.

Not "least persuasive," but at least a curious omission from Chapter 1's capsule history of AI's ups and downs ("Seasons of hope and despair") was any mention of the 1966 ALPAC report, which singlehandedly ushered in the first AI winter by trashing, unfairly IMHO, the then-nascent field of machine translation.

One way to apply such knowledge might be in differentiating between approaches that are indefinitely extendable and those that, despite impressive beginnings, tend to max out at a certain point. (Think of Joe Weizenbaum's ELIZA as an example of the second; a sketch of its style of pattern matching follows.)
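
For concreteness, here is a minimal sketch of ELIZA-style pattern matching in Python. The rules and canned responses are illustrative inventions, not Weizenbaum's original DOCTOR script, but they show the mechanism: conversational competence exists only where someone has hand-coded a pattern/response pair.

```python
import re

# ELIZA-style responder: a fixed list of (pattern, response-template) rules.
# These rules are illustrative inventions, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Reflect the input back via the first matching rule, else fall back."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

print(respond("I am stuck on a proof"))  # Why do you say you are stuck on a proof?
print(respond("It rained all day"))      # Please go on.
```

The ceiling is visible in the structure itself: nothing in the rule list generalizes, so coverage grows only one hand-written pattern at a time -- impressive in a demo, but with no path from there to open-ended language understanding.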