jacob_cannell comments on Muehlhauser-Wang Dialogue - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Just a suggestion for future dialogues: the amount of Less Wrong jargon, the links to Less Wrong posts explaining that jargon, and the Yudkowsky "proclamation" in this paragraph are all a bit squicky, alienating, and potentially condescending. And I think they muddle the point you're making.
Anyway, biting Pei's bullet for a moment: if building an AI isn't safe, if it's, as Pei thinks, similar to educating a child (except, presumably, with a few orders of magnitude more uncertainty about the outcome), that sounds like a really bad thing to be trying to do. He writes:
There's a very good chance he's right. But we're terrible at educating children. Children routinely grow up to be awful people. And this one lacks the predictable, well-defined drives and physical limits that let us predict how most humans will eventually act (pro-social, in fear of authority). It sounds deeply irresponsible, albeit not of immediate concern. Pei's argument is a grand rebuttal of the proposal that humanity spend more time on AI safety (why fund something that isn't possible?), but it is no argument at all against the second part of the proposal: defund AI capabilities research.
Yes. Well said. The deeper issue, though, is the underlying cause of said squicky, alienating paragraphs. Surface recognition that the paragraphs are potentially condescending is probably insufficient.
It's unclear that Pei would agree with your presumption that educating an AGI will entail "a few orders of magnitude more uncertainty about the outcome". We can control every aspect of an AGI's development and education to a degree unimaginable in raising human children. Examples: we can directly monitor their thoughts; we can branch successful designs; and, perhaps most importantly, we can raise them in a highly controlled virtual environment. All of this suggests we can vastly decrease the variance in outcome compared to our current haphazard approach of creating human minds.
Compared to what? Compared to an ideal education? Your point thus illustrates the room for improvement in educating AGI.
Routinely? Nevertheless, this only shows the scope and potential for improvement. To simplify: if we can make AGI more intelligent, we can also make it less awful.
An unfounded assumption. To the extent that humans have these "predictable, well-defined drives and physical limits", we can also endow AGIs with these qualities.
Which doesn't really require much of an argument against it. Who is going to defund AI capabilities research in a way that would actually prevent global progress?