taw comments on New Year's Predictions Thread - Less Wrong
It hasn't worked in sixty years of trying, and I see nothing in the current revival to suggest they have any ideas that are likely to do any better. To be specific, I mean people such as Marcus Hutter, Shane Legg, Steve Omohundro, Ben Goertzel, and so on -- those are the names that come to me off the top of my head. And by their current ideas for AGI I mean Bayesian reasoning, algorithmic information theory, AIXI, Novamente, etc.
I don't think any of these people are stupid or crazy (which is why I don't mention Mentifex in the same breath as them), and I wouldn't try to persuade any of them out of what they are doing unless I had something demonstrably better, but I just don't believe that collection of ideas can be made to work. The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work. The basic ideas that people have tried can be classified as (1) crude imitation of the lowest-level anatomy (neural nets), (2) brute-forced mathematics (automated reasoning, logical or probabilistic), or (3) attempts to code up what it feels like to be a mind (the whole cognitive AI tradition).
My estimates are unaffected by hypothetical possibilities for which there is no evidence, and are protected against that lack of evidence.
Besides, the current state of the world is not suggestive of the presence of AIs in it.
ETA: But this is becoming a digression from the purpose of the thread.
This is my sense as well. I also think there is a substantial limit on what we're likely to learn about the brain, given that we can't study brain function at large scale with neuron-level resolution in real time, owing to obvious ethical constraints. Does anyone know of any technologies on the horizon that could change this in the next ten years?
http://lesswrong.com/lw/vx/failure_by_analogy/
From the quote in that post:
There's no reason to spread such myths about medieval history.
The main characteristics of the Early Middle Ages were low population densities, very low urbanization rates, very low literacy rates, and almost zero lay literacy rates. For a time and place in that reference class, it would have been a miracle if any significant progress had happened during the Early Middle Ages.
The High and Late Middle Ages, on the other hand, saw plenty of technological and intellectual progress.
I'm much more puzzled by why the dense, urbanized, and highly literate Roman Empire was so stagnant.
China also springs to mind. I once listened to a documentary about the Chinese empire and distinctly remember how advanced yet stagnant it seemed. At the time, my explanation was authoritarianism.