jsalvatier comments on SIAI - An Examination - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
These two excerpts summarize where I disagree with SIAI:
So, SIAI plans to develop an AI that will take over the world, keeping their techniques secret, and therefore not getting critiques from the rest of the world.
This is WRONG. Horrendously, terrifyingly, irrationally wrong.
There are two major risks here. One is the risk of an arbitrarily-built AI, made not with Yudkowskian methodologies (whatever those turn out to be), but with ordinary due diligence and precautions taken by its creators to avoid building something that will kill everybody.
The other is the risk of building an "FAI" that works, successfully becomes dictator of the universe for the rest of time, and then turns out more poorly than we had hoped.
I'm more afraid of the second than of the first. I find it implausible that building an AI that merely doesn't kill or enslave everybody is harder than building an AI that does enslave everybody, in a way that wiser beings than us would agree was beneficial.
And I find it even more implausible given that the people building the ordinary AI can get advice from everyone else in the world, while the people building the FAI cannot.
I'm having a hard time parsing what that last clause refers to: which one is supposed to be better, enslaving or not enslaving?