hairyfigment comments on SIAI - An Examination - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
These two excerpts summarize where I disagree with SIAI:
So, SIAI plans to develop an AI that will take over the world, keeping their techniques secret, and therefore not getting critiques from the rest of the world.
This is WRONG. Horrendously, terrifyingly, irrationally wrong.
There are two major risks here. One is the risk from an arbitrarily built AI, made not with Yudkowskian methodologies (whatever those turn out to be) but with whatever due diligence and precautions its creators take to avoid building something that will kill everybody.
The other is the risk of building a "FAI" that works, and then successfully becomes dictator of the universe for the rest of time, and this turns out more poorly than we had hoped.
I'm more afraid of the second than of the first. I find it implausible that building an AI that doesn't kill or enslave everybody is harder than building an AI that does enslave everybody in a way that wiser beings than us would agree was beneficial.
And I find it even more implausible if the people building the first kind of AI can get advice from everyone else in the world, while the people building the FAI cannot.
Why?
The SIAI claims they want to build an AI that asks what wiser beings than us would want (where the definition includes our values as they stood right before the AI gained the ability to alter our brains). They say it would look at you just as much as it looks at Eliezer in defining "wise". And we don't actually know it would "enslave everybody". You think it would because you think a superhumanly bright AI that cares only about 'wisdom', so defined, would do so, and this seems unwise to you. What do you mean by "wiser" that makes that position logically coherent?
Those considerations obviously ignore the risk of bugs or errors in execution. But to this layman, bugs seem far more likely to kill us, or simply to break the AI, than to hit that sweet spot (sour spot?) which keeps us alive in a way we don't want. That may or may not address your actual point, but it certainly addresses the quote.