Perplexed comments on What I would like the SIAI to publish - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What I am actually claiming is that if such an AGI is developed by someone who does not sufficiently understand what the hell they are doing, then it's going to end up doing Bad Things.
Trivial example: the "neural net" that was supposedly taught to identify camouflaged tanks, and actually learned to recognize what time of day the pictures were taken.
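A minimal sketch of that failure mode (synthetic data, hypothetical numbers): if every tank photo in the training set happens to be dark and every no-tank photo bright, the simplest rule that separates the data is mean brightness, not anything tank-shaped — and it looks perfect until the confound breaks.

```python
# Sketch of the camouflaged-tank failure mode: labels perfectly
# confounded with brightness, so a brightness-only "model" aces training.
import numpy as np

rng = np.random.default_rng(0)

def fake_photo(brightness):
    """A stand-in 'image': 8x8 pixels around a given mean brightness."""
    return np.clip(rng.normal(brightness, 0.05, (8, 8)), 0.0, 1.0)

# Training set: every tank photo shot at dusk (dark), every
# no-tank photo at noon (bright) -- the hidden confound.
train = [(fake_photo(0.2), 1) for _ in range(50)] + \
        [(fake_photo(0.8), 0) for _ in range(50)]

# The "learned" rule: dark means tank. Nothing tank-shaped involved.
def predict(img, threshold=0.5):
    return 1 if img.mean() < threshold else 0

train_acc = np.mean([predict(x) == y for x, y in train])
print(f"training accuracy: {train_acc:.0%}")   # looks perfect

# A tank photographed in daylight breaks the shortcut.
daytime_tank = fake_photo(0.8)
print("daytime tank classified as tank?", predict(daytime_tank) == 1)
```

The programmer's mistake here isn't in any single line of code; it's in believing the training data measured what they meant it to measure.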
This sort of mistake is the normal case for human programmers to make. The normal case. Not extraordinary, not unusual, just run-of-the-mill "d'oh" moments.
It's not that AI is malevolent, it's that humans are stupid. To claim that AI isn't dangerous, you basically have to prove that even the very smartest humans aren't routinely stupid.
What I meant by "Without specific discussions" was, "since I haven't proposed any policy measures, and you haven't said what measures you object to, I don't see what there is to discuss." We are discussing the argument for why AGI development dangers are underrated, not what should be done about that fact.
Simple historical observation demonstrates that -- with very, very few exceptions -- progress is made by the people who aren't stuck in their perception of the way things are or are "supposed to be".
So, it's not necessary to know what the "best possible general intelligence" would be: even if human-scale is all you have, just fixing the bugs in the human brain would be more than enough to make something that runs rings around us.
Hell, just making something that doesn't use most of its reasoning capacity to argue for ideas it already has should be enough to outclass, say, 99.995% of the human race.
What part of "people fall for 419 scams" don't you understand? (Hell, most 419 scams and phishing attacks suffer from being painfully obvious -- if they were conducted by someone doing a little research, they could be a lot better.)
People also fall for pyramid schemes, stock bubbles, and all sorts of exploitable economic foibles that could easily end up with an AI simply owning everything, or nearly everything, with nobody even the wiser.
Or, alternatively, the AI might fail at its attempts, and bring the world's economy down in the process.
Here's the argument: people are idiots. All people. Nearly all the time. Especially when it comes to computer programming.
The best human programmer -- the one who knows s/he's an idiot and does his/her best to work around the fact -- is still an idiot, in possession of a brain that cannot be convinced to believe that it's really an idiot (vs. all those other idiots out there), and thus still makes idiot mistakes.
The entire history of computer programming shows us that we think we can be 100% clear about what we mean/intend for a computer to do, and that we are wrong. Dead wrong. Horribly, horribly, unutterably wrong.
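One stock example of "we were 100% clear and still wrong," for any reader who wants something concrete: Python's mutable default argument. The code below reads as if each call gets a fresh empty list; it doesn't.

```python
# The program does what you wrote, not what you meant: a Python default
# argument is evaluated once, so the same list object is shared by
# every call that relies on the default.
def append_item(item, items=[]):   # looks like "fresh empty list each call"
    items.append(item)
    return items

print(append_item(1))  # [1] -- as expected
print(append_item(2))  # [1, 2] -- surprise: the list persisted
```

The intent was obvious to the author; the behavior was something else entirely. That gap is the whole point.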
We are just about the very worst you can be at computer programming while still actually doing it. We are just barely good enough to be dangerous.
That makes tinkering with making intelligent, self-motivating programs inherently dangerous, because when you tell that machine what you want it to do, you are still programming...
And you are still an idiot.
This is the bottom line argument for AI danger, and it isn't counterable until you can show me even ONE person whose computer programs never do anything that they didn't fully expect and intend before they wrote it.
(It is also a supporting argument for why an AI needn't be all that smart to overrun humans -- it just has to not be as much of an idiot, in the ways that we are idiots, even if it's a total idiot in other ways we can't counter-exploit.)
An outstanding piece of reasoning/rhetoric which deserves to be revised and relocated to top-level-postdom.