Aron2 comments on Should I believe what the SIAI claims? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (600)
The analogy is that AGI could be to us as we are to chimps. This is the part that needs the focus.
We could have said in the 1950s that machines beat us at arithmetic by orders of magnitude. Classical AI researchers clearly were deluded by success at easy problems. The problem with winning on easy problems is that it says little about hard ones.
What I see is that in the domain of problems where human-level performance is difficult to replicate, computers are capable of catching us and likely beating us, but gaining a great distance on us in performance is difficult. After all, a human can still beat the best chess programs with a mere pawn handicap. This may never get to two pawns, ever; the second pawn is massively harder than the first. It's the nature of the problem space. In terms of runaway AGI control of the planet, we have to wonder if humans will always have the equivalent of a pawn handicap via other means (mostly as a result of having their hands on the reins of the economic, political, and legal structures).
BTW, is ELO supposed to have that kind of linear interpretation?
Yes, this is the important part. Chimps lag behind humans in 2 distinct ways - they differ in degree, and in kind. Chimps can do a lot of human-things, but very minimally. Painting comes to mind. They do a little, but not a lot. (Degree.) Language is another well-studied subject. IIRC, they can memorize some symbols and use them, but not at all in the recursive way that modern linguistics (pace Chomsky) seems to regard as key. (Kind.)
What can we do with this distinction? How does it apply to my three examples?
O RLY?
Ever is a long time. Would you like to make this a concrete prediction I could put on PredictionBook, perhaps something along the lines of 'no FIDE grandmaster will lose a 2-pawns-odds chess match to a computer by 2050'?
I'm not an expert on ELO by any means (do we know any LW chess experts?), but reading through http://en.wikipedia.org/wiki/Elo_rating_system#Mathematical_details doesn't show me any warning signs - ELO point differences are supposed to reflect probabilistic differences in winning odds (a ratio), so the absolute values shouldn't matter. I think.
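To make that concrete, here's a minimal sketch of the standard Elo expected-score formula from that Wikipedia section; it shows that a player's expected score depends only on the rating *difference*, not on where the two ratings sit on the absolute scale:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win prob. plus half the draw prob.) of A vs. B
    under the Elo model: 1 / (1 + 10^((Rb - Ra)/400))."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Same 200-point gap, very different absolute ratings:
print(expected_score(2800, 2600))  # ~0.76
print(expected_score(1400, 1200))  # identical, ~0.76
```

Whether a fixed material handicap like a pawn corresponds to a fixed rating difference at every level is a separate empirical question, which the quoted paper below bears on.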
This is a possibility (made more plausible if we're talking about those reins being used to incentivize early AIs to design more reliable and transparent safety mechanisms for more powerful successive AI generations), but it's greatly complicated by international competition: to the extent that carefully limiting and restricting AI capabilities and access to potential sources of power reduces economic, scientific, and military productivity, it will be tough to coordinate. Not to mention that existing economic, political, and legal structures are not very reliably stable: electorates and governing incumbents often find themselves unable to retain power.
It seems that whether or not it's supposed to, in practice it does. From the just released "Intrinsic Chess Ratings", which takes Rybka and does exhaustive evaluations (deep enough to be 'relatively omniscient') of many thousands of modern chess games; on page 9: