SarahC comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth 30 October 2010 09:31AM

Comment author: [deleted] 30 October 2010 12:47:10PM, 10 points

One thing that I think is relevant to the discussion of existential risk is Martin Weitzman's "Dismal Theorem" and Jim Manzi's analysis of it. (Link to the article, link to the paper.)

There, the topic is not unfriendly AI but climate change. Regardless of what you think of that issue, it has attracted more attention than AGI, and people writing about existential risk often use climate change as an example.

Martin Weitzman, a Harvard economist, looks at the probability of extreme disasters and at whether it's worth it, in cost-benefit terms, to mitigate them. Our problem, in cases of extreme uncertainty, is that we don't just have probability distributions, we have uncertain probability distributions; it's possible we got the models wrong. Weitzman's paper takes this into account. He creates a family of probability distributions, indexed by a parameter, and integrates over that parameter -- and he proves that this process of taking "probability distributions of probability distributions" makes the final distribution of outcomes fat-tailed. So fat-tailed that the integral defining the expected cost doesn't converge.
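To make the shape of that claim concrete, here is a toy power-law illustration (my own simplification, not Weitzman's actual construction): suppose the cost $C$ of the disaster has tail density $p(c) = \alpha c^{-(1+\alpha)}$ for $c \ge 1$. Then

$$\mathbb{E}[C] = \int_1^{\infty} c \, p(c) \, dc = \alpha \int_1^{\infty} c^{-\alpha} \, dc,$$

which diverges whenever $\alpha \le 1$. Averaging over our uncertainty about the tail parameter can push the effective tail into exactly this regime, and then the expected cost is simply undefined.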

This is a terrible consequence. If the integral defining the expected cost of the risk doesn't converge, then we cannot define an expected cost, and we can't do cost-benefit analysis at all. Weitzman's conclusion is that the right amount to spend mitigating the risk is "more than we're doing."

Manzi criticizes this approach as just an elaborately stated version of the precautionary principle. If it's conceivable that your models are wrong and things are even riskier than you imagined, it doesn't follow that you should spend more to mitigate the risk; the reductio is that, if you knew nothing at all, you should spend all your money mitigating whatever risk you understand least!

This is relevant to people talking about AGI. We're not considering spending a lot of money to mitigate this particular risk, but we are considering forgoing a lot of money -- the value of a possible useful AI. And it may be tempting to propose a shortcut, a la Weitzman, claiming that the very uncertainty of the risk is an argument for being more aggressive in mitigating it. The problem is that this leads to absurd conclusions. You could think up anything (murderous aliens! killer vacuum cleaners!) and claim that because we don't know how likely they are, and because the outcome would be world-endingly terrible, we should be spending all our time trying to mitigate the risk!

Uncertainty about an existential risk is not an argument in favor of spending more on it. There are arguments in favor of spending more on an existential risk -- they're the old-fashioned, cost-benefit ones. (For example, I think there's a strong case, in old-fashioned cost-benefit terms, for asteroid collision prevention.) But if you can't justify spending on cost-benefit grounds, you can't try a Hail Mary and say "You should spend even more -- because we could be wrong!"

Comment author: FrankAdamek 30 October 2010 01:17:58PM, 4 points

Is anyone in SIAI making the argument that we should spend more because our models are too uncertain to provide expected costs, or more generally that our very uncertainty about the models is itself a significant source of concern? My impression was more that it's "we have good reason to doubt people's estimation that Friendliness is easy" and "we have good reason to believe it's actually quite hard."

Comment author: [deleted] 30 October 2010 01:36:09PM, 4 points

Fair enough -- this is my caution against the logic "I can think of a risk, therefore we need to worry about it!" It seems that SIAI is making the stronger claim that unfriendliness is very likely.

My personal view is that AI itself is very hard, and that working on, say, a computer that can do what a mouse can do is likely to take a long time, and is harmless but very interesting research. I don't think we're anywhere near a point where we need to shut down anybody's current research.

Comment author: andreas 30 October 2010 07:53:08PM, 4 points

Consider marginal utility. Many people are working on AI, machine learning, computational psychology, and related fields. Nobody is working on preference theory, the formal understanding of what our goals would be under reflection. If you want to do interesting research, and if you have the background to advance either of those fields, do you think the world will be better off with you on the one side or on the other?

Comment author: [deleted] 30 October 2010 08:33:33PM, 3 points

Maybe that's true, but that's a separate point. "Let's work on preference theory so that it'll be ready when the AI catches up" is one thing -- tentatively, I'd say it's a good idea. "Let's campaign against anybody doing AI research" seems less useful (and less likely to be effective).