ciphergoth comments on Existential Risk and Public Relations - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (613)
I am one of those who haven't been convinced by the SIAI line. I have two main objections.
First, EY is concerned about risks from technologies that have not yet been developed, and as far as I know there is no reliable way to predict the likelihood that a new technology will be developed. (This is also the basis of my skepticism about cryonics.) If you're going to say "Technology X is likely to be developed," then I'd like to see your prediction mechanism and whether it's worked in the past.
Second, shouldn't an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there's room for pure philosophy and mathematics, but you'd need some grounding in actual AI to understand what future AIs are likely to do.
I think multifoliaterose is right that there's a PR problem, but it's not just a PR problem. It seems, unfortunately, to be a problem of having enough justification for claims, and of connecting to the world of professional science. I think the PR problems arise from being too insulated from the demands placed on other scientific or science-policy organizations. People who study other risks, say epidemic disease, have to pass peer review and compete for government funding -- their ideas need to survive a round of rigorous criticism. Their PR is better by necessity.
There needs to be an article on this point. Even in the absence of a really good way of deciding which technologies are likely to be developed, you are still making a decision. You haven't signed up yet; whether you like it or not, that is a decision. And it's a decision that only makes sense if you think technology X is unlikely to be developed, so I'd like to see your prediction mechanism and whether it's worked in the past. In the absence of really good information, we sometimes have to decide based on the information we have.
EDIT: I was thinking about cryonics when I wrote this, though the argument generalizes.
My point, with this, is that everybody is risk-averse and everybody has a time preference. The less is known about the prospects of a future technology, the less willing people are to invest resources into ventures that depend on the future development of that technology. (Whether to take advantage of the technology -- as in cryonics -- or to mitigate its dangers -- as in FAI.) Also, the farther in the future the technology is, the less people care about it; we're not willing to spend much to achieve benefits or forestall risks in the far future.
I don't think it's reasonable to expect people to change these ordinary features of economic preference. If you're going to ask people to chip in to your cause, and the time horizon is too far, or the uncertainty too high, they're not going to want to spend their resources that way. And they'll be justified.
Note: yes, there ought to be some magnitude of benefit or cost that overcomes both risk aversion and time preference. Maybe you're going to argue that existential risk and cryonics are issues of such great magnitude that they outweigh both risk aversion and time preference.
But: first of all, the magnitude of the benefit or cost is also an unknown (and indeed subjective). How much do you value being alive? And, second of all, nobody says our risk and time preferences are well-behaved. There may be a date so far in the future that I don't care about anything that happens then, no matter how good or how bad. There may be loss aversion -- an amount of money that I'm not willing to risk losing, no matter how good the upside. I've seen some experimental evidence that this is common.
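The interaction of risk aversion and time preference can be made concrete with a toy model. This is only a sketch under assumptions of my own choosing -- log utility as a stand-in for risk aversion and exponential discounting as a stand-in for time preference; none of these numbers come from the discussion above:

```python
import math

def discounted_expected_utility(benefit, p_success, years, discount_rate):
    """Toy model: concave (log) utility discounted exponentially over time.

    benefit: raw payoff if the venture succeeds
    p_success: subjective probability the technology is developed and works
    years: how far in the future the payoff arrives
    discount_rate: per-year time preference
    """
    utility = math.log(1 + benefit)              # concave utility => risk aversion
    discount = math.exp(-discount_rate * years)  # time preference
    return p_success * utility * discount

# A huge benefit, but uncertain and a century away (hypothetical numbers):
big_far = discounted_expected_utility(benefit=1e9, p_success=0.01,
                                      years=100, discount_rate=0.05)

# A modest benefit, likely and near-term:
small_near = discounted_expected_utility(benefit=1e3, p_success=0.9,
                                         years=1, discount_rate=0.05)

print(big_far, small_near)  # the modest, near-term option scores higher
```

Under these (arbitrary) parameters, the near-term modest option dominates even a billion-fold larger payoff, which is the point above: sheer magnitude does not automatically overcome concave utility plus discounting, so arguments from magnitude alone have to engage with the shape of people's preferences.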
From what I understand, this applies to most people but not everyone, especially outside of contrived laboratory circumstances. Overconfidence and ambition essentially amount to risk-loving behavior in some major life decisions.
What makes you think that whatever SarahC hasn't "signed up" for would have a positive effect -- and that she can't do something better with her resources?