orthonormal comments on Existential Risk and Public Relations - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I am one of those who haven't been convinced by the SIAI line. I have two main objections.
First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed" then I'd like to see your prediction mechanism and whether it's worked in the past.
Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.
I think multifoliaterose is right that there's a PR problem, but it's not just a PR problem. It seems, unfortunately, to be a problem with having enough justification for claims, and a problem with connecting to the world of professional science. I think the PR problems arise from being too disconnected from the demands placed on other scientific or science policy organizations. People who study other risks, say epidemic disease, have to get peer-reviewed and have to win government funding; their ideas need to pass a round of rigorous criticism. Their PR is better by necessity.
As was mentioned in other threads, SIAI's main arguments rely on disjunctions and antipredictions more than conjunctions and predictions. That is, if several technology scenarios lead to the same broad outcome, that's a much stronger claim than one very detailed scenario.
For instance, the claim that AI presents a special category of existential risk is supported by such a disjunction. There are several technologies today which we know would be very dangerous in the hands of someone with the right clever 'recipe': we can make simple molecular nanotech machines, we can engineer custom viruses, we can hack into some very sensitive or essential computer systems, etc. What these all imply is that a much smarter agent with a lot of computing power would be a severe existential threat if it chose to be.