SarahC comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong
Is anyone in SIAI making the argument that we should spend more because our models are too uncertain to provide expected costs, or more generally that our very uncertainty of model is a significant source of concern? My impression was more that it's "we have good reasons to doubt people's estimation that Friendliness is easy" and "we have good reason to believe it's actually quite hard."
Fair enough -- this is my caution against the logic "I can think of a risk, therefore we need to worry about it!" It seems, though, that SIAI is making the stronger claim that unfriendliness is very likely.
My personal view is that AI itself is very hard, and that working on, say, a computer that can do what a mouse can do is likely to take a long time -- harmless but very interesting research. I don't think we're anywhere near a point where we need to shut down anybody's current research.
Consider marginal utility. Many people are working on AI, machine learning, computational psychology, and related fields. Nobody is working on preference theory, that is, a formal understanding of what our goals would be under reflection. If you want to do interesting research, and if you have the background to advance either of those fields, do you think the world will be better off with you on the one side or on the other?
Maybe that's true, but it's a separate point. "Let's work on preference theory so that it'll be ready when AI catches up" is one thing -- tentatively, I'd say it's a good idea. "Let's campaign against anybody doing AI research" seems less useful (and less likely to be effective).