Juno_Watt comments on Muehlhauser-Wang Dialogue - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Just a suggestion for future dialogues: the amount of Less Wrong jargon, the links to Less Wrong posts explaining that jargon, and the Yudkowsky "proclamation" in this paragraph are all a bit squicky, alienating, and potentially condescending. And I think they muddle the point you're making.
Anyway, biting Pei's bullet for a moment: if building an AI isn't safe, if it's, as Pei thinks, similar to educating a child (except, presumably, with a few orders of magnitude more uncertainty about the outcome), that sounds like a really bad thing to be trying to do. He writes:
There's a very good chance he's right. But we're terrible at educating children. Children routinely grow up to be awful people. And this one lacks the predictable, well-defined drives and physical limits that let us predict how most humans will eventually act (pro-social, in fear of authority). It sounds deeply irresponsible, albeit not of immediate concern. Pei's argument is a grand rebuttal of the first half of the proposal, that humanity spend more on AI safety (why fund something that isn't possible?), but no argument at all against the second half: defund AI capabilities research.
On average, they grow up to be average people. They generally don't grow up to be Genghis Khan or a James Bond villain, which is what the UFAI scenario predicts. FAI only needs to produce AIs that are as good as the average person, however bad that average may be.
How dangerous would an arbitrarily selected average person be to the rest of us if given significantly superhuman power?
The topic is intelligence. Some people have superhuman (well, more than 99.99% of humans) intelligence, and we are generally not afraid of them. We expect them to have ascended to the higher reaches of the Kohlberg hierarchy. There doesn't seem to be a problem of Unfriendly Natural Intelligence. We don't kill off smart people on the basis that they might be a threat. We don't refuse people education on the grounds that we don't know what they will do with all that dangerous knowledge. (There may have been societies that worked that way, but they don't seem to be around any more).
Agreed with all of this.