CarlShulman comments on against "AI risk" - Less Wrong
Speaking only for myself, most of the bullets you listed are forms of AI risk by my lights, and the others don't point to comparably large, comparably neglected areas in my view (and after significant personal efforts to research nuclear winter, biotechnology risk, nanotechnology, asteroids, supervolcanoes, geoengineering/climate risks, and non-sapient robotic weapons). Throwing in all x-risks and the kitchen sink, regardless of magnitude, would be virtuous in a grand overview, but it doesn't seem necessary when trying to create good source materials in a more neglected area.
Not AI risk.
I have studied bio risk (as has Michael Vassar, who has even done some work encouraging the plucking of low-hanging fruit in this area when opportunities arose), and it seems to me that it is both a smaller existential risk than AI and nowhere near as neglected. The experts in this survey suggest the same, as do my conversations with other experts in the field and my reading of their work.
Bio existential risk seems much smaller than bio catastrophic risk (and not terribly high in absolute terms), while AI catastrophic and x-risk seem close in magnitude, and much larger than bio x-risk. Moreover, vastly greater resources go into bio risks, e.g. Bill Gates is interested and taking it up at the Gates Foundation, governments pay attention, and there are more opportunities for learning (early non-extinction bio-threats can mobilize responses to guard against later ones).
This is in part because most folk are about as easily mobilized against catastrophic as existential risks (e.g. Gates thinks that AI x-risk is larger than bio x-risk, but prefers to work on bio rather than AI because he thinks bio catastrophic risk is larger, at least in the medium-term, and more tractable). So if you are especially concerned about x-risk, you should expect bio risk to get more investment than you would put into it (given the opportunity to divert funds to address other x-risks).
Nanotech x-risk would seem to come out of mass-producing weapons that kill the survivors of an all-out war (one which leaves neither side standing): systems that could replicate in the wild and destroy the niche of primitive humans, robotic weapons numerous enough to hunt down survivors over time, and the like. The FHI survey gives it a lot of weight, but after reading the work of the Foresight Institute and the Center for Responsible Nanotechnology (among others) over the decades since Drexler's books, I am not very impressed with the magnitude of the x-risk here or the existence of distinctive high-leverage ways to improve outcomes in the area, and the Foresight Institute continues to operate in any case (not to mention Eric Drexler visiting FHI this year).
Others disagree (Michael Vassar has worked with the CRN, and Eliezer often names molecular nanotechnology as the x-risk he would move to focus on if he knew that AI was impossible), but that's my take.
This is AI risk. Brain emulations are artificial intelligence by standard definitions, and are treated as such in articles like Chalmers' "The Singularity: A Philosophical Analysis."
It's hard to destroy all life with a war not involving AI, or the biotech/nanotech mentioned above. The nuclear winter experts have told me that they think x-risk from a global nuclear war is very unlikely conditional on such a war happening, and such a war doesn't seem that likely in the first place.
There are already massive, massive, massive investments in tug-of-war over politics, norms, and values today. Shaping the conditions or timelines for game-changing technologies looks more promising to me than adding a few more voices to those fights. On the other hand, Eliezer has some hopes for education in rationality and critical thinking growing contagiously to shift some of those balances (not as a primary impact, and I am more skeptical). Posthuman value evolution does seem to sensibly fall under "AI risk," and shaping the development and deployment of technologies for posthumanity seems like a leveraged way to affect that.
AI risk again.
Probably some groups with a prophecy of upcoming doom, looking to everything in the news as a possible manifestation.
I have a few questions, and I apologize if these are too basic:
1) How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?
2) If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks (certain cat. risks might raise or lower the likelihood of other cat. risks, certain cat. risks might raise or lower the likelihood of certain x-risks, etc.)? It must be hard avoiding the conjunction fallacy! Or is this sort of thing more what the FHI does?
3) Is there much tension in SI thinking between achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI), or does one of these goals occupy significantly more of your attention and activities?
Edited to add: thanks for responding!
Different people have different views. For myself, I care more about existential risks than catastrophic risks, but not overwhelmingly so. A global catastrophe would kill me and my loved ones just as dead. So from the standpoint of coordinating around mutually beneficial policies, or "morality as cooperation," I care a lot about catastrophic risk affecting current and immediately succeeding generations. However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.
Yes.
They spend more time on it, relatively speaking.
Given that powerful AI technologies are achievable in the medium to long term, UFAI would seem to me to be a rather large share of the x-risk, and still a big share of the catastrophic risk, so that speedups are easily outweighed by safety gains.
What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient," and why?
It depends on the context (probability distribution over number and locations and types of lives), with various complications I didn't want to get into in a short comment.
Here's a different way of phrasing things: if I could trade off probability p1 of increasing the income of everyone alive today (but not providing lasting benefits into the far future) to at least $1,000 per annum with basic Western medicine for control of infectious disease, against probability p2 of a great long-term posthuman future with colonization, I would prefer p2 even if it were many times smaller than p1. Note that those in absolute poverty are a minority of current people, a tiny minority of the people who have lived on Earth so far, their life expectancy is a large fraction of that of the rich, and so forth.