Speaking only for myself, most of the bullets you listed are forms of AI risk by my lights, and the others don't point to comparably large, comparably neglected areas in my view (and after significant personal efforts to research nuclear winter, biotechnology risk, nanotechnology, asteroids, supervolcanoes, geoengineering/climate risks, and non-sapient robotic weapons). Throwing in all x-risks and the kitchen sink, regardless of magnitude, would be virtuous in a grand overview, but it doesn't seem necessary when trying to create good source materials in a more neglected area.
bio/nano-tech disaster
Not AI risk.
I have studied bio risk (as has Michael Vassar, who has even done some work encouraging the plucking of low-hanging fruit in this area when opportunities arose), and it seems to me that it is both a smaller existential risk than AI and nowhere near as neglected. The experts in this survey, my conversations with others expert in the field, and my reading of their work point the same way.
Bio existential risk seems much smaller than bio catastrophic risk (and not terribly high in absolute terms), while AI catastrophic and x-risk seem close in magnitude, and much larger than bio x-risk. Mo...
Why? This is highly non-obvious. To reach our current technological level, we had to use a lot of non-renewable resources. There's still a lot of coal and oil left, but the remaining coal and oil is harder to reach and much more technologically difficult to use reliably. That trend will only continue. It isn't obvious that, if something set the tech level back to, say, 1600, we'd have the resources to return to our current technology level.
It's been discussed repeatedly here on Less Wrong, and in many other places. The weight of expert opinion is on recovery, and I think the evidence is strong. Most resources are more accessible in ruined cities than they were in the ground, and biomass, hydropower, efficiency, and so forth can substitute for more expensive fossil fuels. It looks like there was a lot of slack in human development, e.g. animal and plant breeding is still delivering good returns after many centuries, and humans have been adapting to civilization over the last few thousand years and would continue to become better adapted during a long period of low-fossil-fuel, near-industrial technology. And for many catastrophes, knowledge from the previous civilization would be available to future generations.
It's been discussed repeatedly here on Less Wrong, and in many other places. The weight of expert opinion is on recovery
Can you give sources for this? I'm particularly interested in the claim about expert opinion, since there doesn't seem to be much discussion of this in the literature. Bostrom has mentioned it, but hasn't come to any detailed conclusion. I'm not aware of anyone else discussing it.
Most resources are more accessible in ruined cities than they were in the ground
Right. This bit has been discussed on LW before in the context of many raw metals. A particularly good example is aluminum, which is resource-intensive and technically difficult to refine, but easy to use once it has been refined. Looking around for that discussion, I see that you and I discussed it here, but we didn't discuss the power issue in general.
I think you are being optimistic about power. While hydropower and biomass can exist with minimal technology (and in fact, the first US commercial power plant outside New York was hydroelectric), they both have severe limitations as power sources. Hydroelectric power can only be placed in limited areas, and l...
I have the opposite perception: "Singularity" is worse than "artificial intelligence." If you want to avoid talking about FOOM, "Singularity" carries more of that connotation than "AI" does, in my perception.
I'm also not sure exactly what you mean by the "single scenario" getting privileged, or where you would draw the lines. In the Yudkowsky-Hanson debate and elsewhere, Eliezer talked about many separate posthuman AIs coordinating to divvy up the universe without giving humanity or humane values a share, about monocultures of seemingly separate AIs with shared values derived from a common ancestor, and so forth. Scenarios in which whole brain emulations come first and then invent AIs that race ahead of the WBEs were also discussed.
Here's another reason why I don't like "AI risk": it brings to mind analogies like physics catastrophes or astronomical disasters, and lets AI researchers think that their work is fine as long as it has little chance of immediately destroying Earth. But the real problem is how we build or become a superintelligence that shares our values. Given that this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal, even if it doesn't kill us) is bad, and this includes AI progress that is not immediately dangerous.
ETA: I expanded this comment into a post here.
to justify LessWrong's ignoring the topic
That's a much lower standard than "should Luke make this a focus when trading breadth vs. speed in making his document". If people get enthused about the topic, they're welcome to pursue it. I've probably put 50-300 hours (depending on how inclusive a criterion I use for relevant hours) into it, and have seen diminishing returns. If I overlap with Eric Drexler or such folk at a venue I would inquire, and I would read a novel contribution, but I'm not going to be putting much more into it soon, given my alternatives.
SI/LW sometimes gives the impression of being a doomsday cult...
I certainly never had this impression. The worst that can be said about SI/LW is that some use inappropriately strong language with respect to risks from AI.
What I endorse:
What I think is unjustified:
Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?
No, but there are lots of cults that say "we are the people to solve all the world's problems." Acknowledging the benefits of Division of Labour is un-cult-like.
Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like
- Malthusian upload scenario
For my part, I consider that scenario pretty damn close to AI-FOOM; i.e., it'll quite probably result in a near-equivalent outcome, just taking slightly longer before it becomes unstoppable.
Personally, I care primarily about AI risk for a few reasons. One is that it involves an extremely strong feedback loop. There are other dangerous feedback loops, including nanotech, and I am not confident which will be a problem first. But I think AI is the hardest risk to solve, and it also has the most potential for negative utility. I also think that we are relatively close to being able to create AGI.
As far as I know, SI is defined by its purpose of reducing AI risk. If other risks need long-term work, then each such risk needs a dedicated group to work on it.
As for LW, I think it's simply that people read EY's writing on AI risk, and those that agree tend to stick around and discuss it here.
To me it seems reasonable to focus on self-improving AI instead of wars and nanotechnology. If we get the AI right, then we can task it with solving our problems with wars, nanotechnology, et cetera (the "suboptimal singleton" problem is included in "getting the AI right"). One solution would help us with the others.
As an analogy, imagine yourself as an intelligent designer of your favorite species. You can choose to give them an upgrade: fast feet, thick fur, improved senses, or a human-like brain. Of course you should choose a hu...
The answer to your initial question is that Eliezer and Luke believe that if we create AI, the default result is that it kills us all or does something else equally unpleasant. And also that creating Friendly AI will be an extraordinarily good thing, in part (and only in part) because it would be excellent protection against other risks.
That said, I think there is a limit to how confident anyone ought to be in that view, and it is worth trying to prepare for other scenarios.
What does "doomsday cult" mean? I had been under the impression that it referred to groups like Heaven's Gate or Family Radio which prophesied a specific end-times scenario, down to the date and time of doomsday.
However, Wikipedia suggests the term originated with John Lofland's research on the Unification Church (the Moonies):
...Doomsday cult is an expression used to describe groups who believe in Apocalypticism and Millenarianism, and can refer both to groups that prophesy catastrophe and destruction, and to those that attempt to bring it about.
SI/LW sometimes gives the impression of being a doomsday cult
To whom? In the post you linked, the main source of the concern (google hits) turned out not to mean the thing the author originally thought (edit: this is false. Sorry). Merely "raising the issue" is privileging the hypothesis.
Anywho, is the main idea of this post "this other bad stuff is similarly bad, and SI could be doing similar amounts to reduce the risk of these bad things?" I seem to recall their justification for focus on AI was that with self-improving ...
bad memes/philosophies spreading among humans or posthumans and overriding our values
Well,
I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemo...
Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like
Why, for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity", instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and opportunities must be AI-related, before the analysis even begins? Given our current state of knowledge, I don't see how we can reach such conclusions with any confidence even after a thorough analysis.
SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn't concentrate so much on a particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?)