Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like

  • bio/nano-tech disaster
  • Malthusian upload scenario
  • highly destructive war
  • bad memes/philosophies spreading among humans or posthumans and overriding our values
  • upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support

Why, for example, is lukeprog's strategy sequence titled "AI Risk and Opportunity", instead of "The Singularity, Risks and Opportunities"? Doesn't it seem strange to assume that both the risks and opportunities must be AI related, before the analysis even begins? Given our current state of knowledge, I don't see how we can make such conclusions with any confidence even after a thorough analysis.

SI/LW sometimes gives the impression of being a doomsday cult, and it would help if we didn't concentrate so much on a particular doomsday scenario. (Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?)


Speaking only for myself, most of the bullets you listed are forms of AI risk by my lights, and the others don't point to comparably large, comparably neglected areas in my view (and after significant personal efforts to research nuclear winter, biotechnology risk, nanotechnology, asteroids, supervolcanoes, geoengineering/climate risks, and non-sapient robotic weapons). Throwing all x-risks and the kitchen sink in, regardless of magnitude, would be virtuous in a grand overview, but it doesn't seem necessary when trying to create good source materials in a more neglected area.

bio/nano-tech disaster

Not AI risk.

I have studied bio risk (as has Michael Vassar, who has even done some work encouraging the plucking of low-hanging fruit in this area when opportunities arose), and it seems to me that it is both a smaller existential risk than AI, and nowhere near as neglected. The experts in this survey say likewise, as do my conversations with others expert in the field and my reading of their work.

Bio existential risk seems much smaller than bio catastrophic risk (and not terribly high in absolute terms), while AI catastrophic and x-risk seem close in magnitude, and much larger than bio x-risk. Mo...

endoself
Are you including just the extinction of humanity in your definition of x-risk in this comment or are you also counting scenarios resulting in a drastic loss of technological capability?
CarlShulman
I expect losses of technological capability to be recovered with high probability.

Why? This is highly non-obvious. To reach our current technological level, we had to use a lot of non-renewable resources. There's still a lot of coal and oil left, but the remaining coal and oil is harder to reach and much more technologically difficult to reliably use. That trend will only continue. It isn't obvious that if something set the tech level back to, say, 1600, we'd have the resources to return to our current technology level.

It's been discussed repeatedly here on Less Wrong, and in many other places. The weight of expert opinion is on recovery, and I think the evidence is strong. Most resources are more accessible in ruined cities than they were in the ground, and more expensive fossil fuels can be substituted for by biomass, hydropower, efficiency, and so forth. It looks like there was a lot of slack in human development, e.g. animal and plant breeding is still delivering good returns after many centuries, humans have been adapting to civilization over the last few thousand years and would continue to become better adapted during a long period of low-fossil-fuel, near-industrial technology. And for many catastrophes, knowledge from the previous civilization would be available to future generations.

It's been discussed repeatedly here on Less Wrong, and in many other places. The weight of expert opinion is on recovery

Can you give sources for this? I'm particularly interested in the claim about expert opinion, since there doesn't seem to be much discussion of this in the literature. Bostrom has mentioned it, but hasn't come to any detailed conclusion. I'm not aware of anyone else discussing it.

Most resources are more accessible in ruined cities than they were in the ground

Right. This bit has been discussed on LW before in the context of many raw metals. A particularly good example is aluminum, which is resource-intensive and technically difficult to refine, but easy to reuse once it has been refined. Looking around for that discussion, I see that you and I discussed it here, but didn't discuss the power issue in general.

I think you are being optimistic about power. While hydropower and biomass can exist with minimal technology (and in fact, the first US commercial power plant outside New York was hydroelectric), they both have severe limitations as power sources. Hydroelectric power can only be placed in limited areas, and l...

satt
This may be worth expanding into a discussion post; I can't remember any top-level posts devoted to this topic, and I reckon it's important enough to warrant at least one. Your line of argument seems more plausible to me than CarlShulman's (although that might change if CS can point to specific experts and arguments for why a technological reset could be overcome).
Tyrrell_McAllister
Is there a typo in this sentence?
JoshuaZ
Yes. Intended to be something like:
A1987dM
On what timescale? I find the focus on x-risks as defined by Bostrom (those from which Earth-originating intelligent life will never, ever recover) way too narrow. A situation in which 99% of humanity dies and the rest reverts to hunting and gathering for a few millennia before recovering wouldn't look much brighter than that -- let alone one in which humanity goes extinct but in (say) a hundred million years the descendants of (say) elephants create a new civilization. In particular, I can't see why we would prefer the latter to (say) a civilization emerging on Alpha Centauri -- so per the principle of charity I'll just pretend that instead of “Earth-originating intelligent life” he had said “descendants of present-day humans”.
loup-vaillant
It depends on what you value. I see four situations:

  • Early Singularity. Everyone currently living is saved.
  • Late Singularity. Nearly everyone currently living dies anyway.
  • Very late Singularity, or "semi-crush". Everyone currently living dies, and most of our yet-to-be-born descendants (up to the second renaissance) will die as well. There is a point, however, where everyone is saved.
  • Crush. Everyone will die, now and forever. Plus, humanity dies with our sun.

If you most value those currently living, that's right, it doesn't make much difference. But if you care about the future of humanity itself, a Very Late Singularity isn't such a disaster.
A1987dM
Now that I think about it, I care both about those currently living and about humanity itself, but with a small but non-zero discount rate (of the order of the reciprocal of the time humanity has existed so far). Also, I value humanity not only genetically but also memetically, so having people with the human genome but a Palaeolithic technocultural level survive would be only slightly better for me than no-one surviving at all.
Wei Dai
Perhaps it's mainly a matter of perceptions, where "AI risk" typically brings to mind a particular doomsday scenario, instead of a spread of possibilities that includes posthuman value drift, which is also not helped by the fact that around here we talk much more about UFAI going FOOM than the other scenarios. Given this, do you think we should perhaps favor phrases like "Singularity-related risks and opportunities" where appropriate?

I have the opposite perception, that "Singularity" is worse than "artificial intelligence." If you want to avoid talking about FOOM, "Singularity" has more connotation of that than AI in my perception.

I'm also not sure exactly what you mean by the "single scenario" getting privileged, or where you would draw the lines. In the Yudkowsky-Hanson debate and elsewhere, Eliezer talked about many separate posthuman AIs coordinating to divvy up the universe without giving humanity or humane values a share, about monocultures of seemingly separate AIs with shared values derived from a common ancestor, and so forth. Whole brain emulations coming first, and then inventing AIs that race ahead of the WBEs, were discussed as well.

Wei Dai
I see... I'm not sure what to suggest then. Anyone else have ideas? I think the scenario that "AI risk" tends to bring to mind is a de novo or brain-inspired AGI (excluding uploads) rapidly destroying human civilization. Here are a couple of recent posts along these lines and using the phrase "AI risk":

  • utilitymonster's What is the best compact formalization of the argument for AI risk from fast takeoff?
  • XiXiDu's A Primer On Risks From AI
  • ETA: See also lukeprog's Facing the Singularity, which talks about this AI risk and none of the other ones you consider to be "AI risk"
steven0461
"Posthumanity" or "posthuman intelligence" or something of the sort might be an accurate summary of the class of events you have in mind, but it sounds a lot less respectable than "AI". (Though maybe not less respectable than "Singularity"?)
Wei Dai
How about "Threats and Opportunities Associated With Profound Sociotechnological Change", and maybe shortened to "future-tech threats and opportunities" in informal use?
Wei Dai
Apparently it's also common to not include uploads in the definition of AI. For example, here's Eliezer:
CarlShulman
Yeah, there's a distinction between things targeting a broad audience, where people describe WBE as a form of AI, versus some "inside baseball" talk in which it is used to contrast against WBE.
Wei Dai
That paper was written for the book "Global Catastrophic Risks", which I assume is aimed at a fairly general audience. Also, looking at the table of contents for that book, Eliezer's chapter was the only one talking about AI risks, and he didn't mention the three listed in my post that you consider to be AI risks. Do you think I've given enough evidence to support the position that many people, when they say or hear "AI risk", are either explicitly thinking of something narrower than your definition of "AI risk", or have not explicitly considered how to define "AI" but are still thinking of a fairly narrow range of scenarios? Besides that, can you see my point that an outsider/newcomer who looks at the public materials put out by SI (such as Eliezer's paper and Luke's Facing the Singularity website) and typical discussions on LW would conclude that we're focused on a fairly narrow range of scenarios, which we call "AI risk"?
CarlShulman
Yes.
Dmytry
Seems like a prime example of where to apply rationality: what are the consequences of trying to work on AI risk right now, versus on something else? Does AI risk work have a good payoff? What about the historical cases? The one example I know of is this: http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf (the thermonuclear-ignition-of-the-atmosphere scenario). Can a bunch of people with little physics-related expertise do something about such risks more than 10 years in advance, beyond the usual anti-war effort? Bill Gates will work on AI risk when it becomes clear what to do about it.
Wei Dai
Have you seen Singularity and Friendly AI in the dominant AI textbook?
Dmytry
I'm kind of dubious that you needed 'beware of destroying mankind' in a physics textbook to get Teller to check whether a nuke could cause thermonuclear ignition of the atmosphere or seawater, but if it is there, I guess it won't hurt.

Here's another reason why I don't like "AI risk": it brings to mind analogies like physics catastrophes or astronomical disasters, and lets AI researchers think that their work is ok as long as they have little chance of immediately destroying Earth. But the real problem is how do we build or become a superintelligence that shares our values, and given this seems very difficult, any progress that doesn't contribute to the solution but brings forward the date by which we must solve it (or be stuck with something very suboptimal even if it doesn't kill us) is bad, and this includes AI progress that is not immediately dangerous.

ETA: I expanded this comment into a post here.

Dmytry
Well, there's this implied assumption that a super-intelligence that 'does not share our values' shares our domain of definition of the values. I can make a fairly intelligent proof generator, far beyond human capability if given enough CPU time; it won't share any values with me, not even the domain of applicability; the lack of shared values with it is so profound as to make it not do anything whatsoever in the 'real world' that I am concerned with. Even if it were meta-strategic to the point of potentially e.g. searching for ways to hack into a mainframe to gain extra resources to do the task 'sooner' by wall-clock time, it seems very dubious that by mere accident it will have proper symbol grounding, won't wirehead (i.e. would privilege the solutions that don't involve just stopping said clock), etc. Same goes for other practical AIs, even the evil ones that would e.g. try to take over the internet.
Wei Dai
You're still falling into the same trap, thinking that your work is ok as long as it doesn't immediately destroy the Earth. What if someone takes your proof generator design, and uses the ideas to build something that does affect the real world?
Dmytry
Well, let's say in 2022 we have a bunch of tools along the lines of automatic problem solving, unburdened by their own will (not because they were so designed but by simple omission of immense counterproductive effort). Someone with a bad idea comes around, downloads some open source software, and cobbles together some self-propelling 'thing' that is 'vastly superhuman' circa 2012. Keep in mind that we still have our tools that make us 'vastly superhuman' circa 2012, and I frankly don't see how 'automatic will', for lack of a better term, is contributing anything here that would make the fully automated system competitive.
Wei Dai
Well, one thing the self-willed superintelligent AI could do is read your writings, form a model of you, and figure out a string of arguments designed to persuade you to give up your own goals in favor of its goals (or just trick you into doing things that further its goals without realizing it). (Or another human with superintelligent tools could do this as well.) Can you ask your "automatic problem solving tools" to solve the problem of defending against this, while not freezing your mind so that you can no longer make genuine moral/philosophical progress? If you can do this, then you've pretty much already solved the FAI problem, and you might as well ask the "tools" to tell you how to build an FAI.
XiXiDu
Does agency enable the AI to do so? If not, then why wouldn't a human being be able to do the same by using the AI in tool mode? Just make it list equally convincing counter-arguments.
Wei Dai
Yeah, I realized this while writing the comment: "(Or another human with superintelligent tools could do this as well.)" So this isn't a risk with self-willed AI per se. But note this actually makes my original point stronger, since I was arguing against the idea that progress on AI is safe as long as it doesn't have a "will" to act in the real world. So every time you look at a (future equivalent of) website or email, you ask your tool to list equally convincing counter-arguments to whatever you're looking at? What does "equally convincing" mean? An argument that exactly counteracts the one that you're reading, leaving your mind unchanged?
XiXiDu
Sure, why not? I think IBM is actually planning to do this with IBM Watson. Once mobile phones become fast enough, you could receive constant feedback about ideas and arguments you encounter. For example, some commercial tells you that you can lose 10 pounds in 1 day by taking a pill. You then either ask your "IBM Oracle" or have it set up to give you automatic feedback. It will then tell you that there are no studies indicating that something as advertised is possible and that it wouldn't be healthy anyway. Or something along those lines. I believe that in the future it will be possible to augment everything with fact-check annotations. But that's beside the point. The idea was that if you ran the AI box experiment with Eliezer posing as a malicious AI trying to convince the gatekeeper to let it out of the box, and at the same time as a question-answering tool using the same algorithms as the AI, then I don't think someone would let him out of the box. He would basically have to destroy his own arguments by giving unbiased answers about the trustworthiness of the boxed agent and the possible consequences of letting it out of the box. At the very best, the AI in agent mode would have to contradict the tool-mode version and thereby reveal that it is dishonest and not trustworthy.
khafra
When I'm feeling down and my mom sends me an email trying to cheer me up, that'll be a bit of a bummer.
Dmytry
Yep. The majorly awesome scenario degrades into ads vs. adblock when you consider everything in the future, not just the self-willed robot. As a matter of fact, a lot of work is put into constructing convincing strings of audio and visual stimuli, and into ignoring those strings.
David_Gerard
Superstimuli and the Collapse of Western Civilization. Using such skills to manipulate other humans appears to be what we grew intelligence for, of course. As I note, western civilisation is already basically made of the most virulent toxic memes we can come up with. In the noble causes of selling toothpaste and car insurance and, of course, getting laid. It seems to be what we do now we've more or less solved the food and shelter problems.
TheOtherDave
They'd probably have to be more convincing, since convincing a human being out of a position they already hold is usually a more difficult task than convincing them to hold the position in the first place.
XiXiDu
Suppose I have a superhuman answering machine on one side, a tool that just lists a number of answers to my query, like a superhuman Google, and on the other side I have the same tool in agent mode. Why would I be more convinced by the agent-mode output? An agent has an incentive to trick me, while the same algorithm, minus the agency module, will just output unbiased answers to my queries. If the answers from the tool and agent modes differ, then I naturally believe the tool-mode output. If, for example, the agent mode were going to drivel something about acausal trade and the tool mode would just output some post by Eliezer Yudkowsky explaining why I shouldn't let the AI out of the box, then how could the agent mode possibly be more convincing? Especially since putting the answering algorithm into agent mode shouldn't improve the answers.
TheOtherDave
You wouldn't, necessarily. Nor did I suggest that you would. I also agree that if (AI in "agent mode") does not have any advantages over ("tool mode" plus human agent), then there's no reason to expect its output to be superior, though that's completely tangential to the comment you replied to. That said, it's not clear to me that (AI in "agent mode") necessarily lacks advantages over ("tool mode" plus human agent).
XiXiDu
I don't think that anyone with the slightest idea that an AI in agent mode could have malicious intentions, and therefore give biased answers, would fail to be swayed by counter-arguments made by a similarly capable algorithm. I mean, we shouldn't assume an idiot gatekeeper who has never heard of anything we're talking about here. So the idea that an AI in agent mode could brainwash someone to the extent that it afterwards takes even stronger arguments to undo it seems rather far-fetched. (ETA: What's it supposed to say? That the tool uses the same algorithms as itself but is somehow wrong in claiming that the AI in agent mode is trying to brainwash the gatekeeper?) The idea is that given a sufficiently strong AI in tool mode, it might be possible to counter any attempt to trick a gatekeeper. And if the tool mode agrees, then it probably is a good idea to let the AI out of the box, although anyone familiar with the scenario would probably rather assume a systematic error elsewhere, e.g. a misinterpretation of one's questions by the AI in tool mode.
TheOtherDave
Ah, I see. Sure, OK, that's apposite. Thanks for clarifying that. I disagree with your prediction.
XiXiDu
This is actually one of Greg Egan's major objections: that superhuman tools come first and that artificial agency won't make those tools competitive against augmented humans. Further, any work done to ensure that artificial agents are friendly can't be applied to augmented humans.
Turgurth
I have a few questions, and I apologize if these are too basic:

1) How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?

2) If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks (certain cat. risks might raise or lower the likelihood of other cat. risks, certain cat. risks might raise or lower the likelihood of certain x-risks, etc.)? It must be hard avoiding the conjunction fallacy! Or is this sort of thing more what the FHI does?

3) Is there much tension in SI thinking between achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI), or does one of these goals occupy significantly more of your attention and activities?

Edited to add: thanks for responding!
CarlShulman
Different people have different views. For myself, I care more about existential risks than catastrophic risks, but not overwhelmingly so. A global catastrophe would kill me and my loved ones just as dead. So from the standpoint of coordinating around mutually beneficial policies, or "morality as cooperation", I care a lot about catastrophic risk affecting current and immediately succeeding generations. However, when I take a "disinterested altruism" point of view, x-risk looms large: I would rather bring 100 trillion fantastic lives into being than improve the quality of life of a single malaria patient.

Yes. They spend more time on it, relatively speaking.

Given that powerful AI technologies are achievable in the medium to long term, UFAI would seem to me to be a rather large share of the x-risk, and still a big share of the catastrophic risk, so that speedups are easily outweighed by safety gains.
multifoliaterose
What's your break-even point for "bring 100 trillion fantastic lives into being with probability p" vs. "improve the quality of life of a single malaria patient", and why?
CarlShulman
It depends on the context (probability distribution over number and locations and types of lives), with various complications I didn't want to get into in a short comment. Here's a different way of phrasing things: if I could trade off probability p1 of increasing the income of everyone alive today (but not providing lasting benefits into the far future) to at least $1,000 per annum with basic Western medicine for control of infectious disease, against probability p2 of a great long-term posthuman future with colonization, I would prefer p2 even if it was many times smaller than p1. Note that those in absolute poverty are a minority of current people, a tiny minority of the people who have lived on Earth so far, their life expectancy is a large fraction of that of the rich, and so forth.
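A minimal way to make the break-even comparison precise, offered only as a sketch (the utilities $U_1$ for the near-term benefit and $U_2$ for the long-term future are illustrative placeholders, not values from the comment): under straightforward expected-value reasoning, the second option is preferred whenever

$$p_2 \, U_2 > p_1 \, U_1, \qquad \text{i.e.} \qquad \frac{p_2}{p_1} > \frac{U_1}{U_2},$$

so if $U_2$ is judged vastly larger than $U_1$, preferring $p_2$ is consistent even when $p_2$ is many times smaller than $p_1$.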
steven0461
What about takeover by an undesirable singleton? Also, if nanotechnology enables AI or uploads, that's an AI risk, but it might still involve unique considerations we don't usually think to talk about. The opportunities to reduce risk here have to be very small to justify LessWrong's ignoring the topic almost entirely, as it seems to me that it has. The site may well have low-hanging conceptual insights to offer that haven't been covered by CRN or Foresight.

to justify LessWrong's ignoring the topic

That's a much lower standard than "should Luke make this a focus when trading breadth vs. speed in making his document". If people get enthused about that, they're welcome to. I've probably put 50-300 hours (depending on how inclusive a criterion I use for relevant hours) into the topic, and I saw diminishing returns. If I overlapped with Eric Drexler or such folk at a venue I would inquire, and I would read a novel contribution, but I'm not going to be putting much into it soon, given my alternatives.

steven0461
I agree that it's a lower standard. I didn't mean to endorse Wei's claims in the original post, certainly not based on nanotech alone. If you don't personally think it's worth more of your time to pay attention to nanotech, I'm sure you're right, but it still seems like a collective failure of attention that we haven't talked about it at all. You'd expect some people to have a pre-existing interest. If you ever think it's worth it to further describe the conclusions of those 50-300 hours, I'd certainly be curious.
CarlShulman
I'll keep that in mind.

SI/LW sometimes gives the impression of being a doomsday cult...

I certainly never had this impression. The worst that can be said about SI/LW is that some use inappropriately strong language with respect to risks from AI.

What I endorse:

  • Risks from AI (including WBE) are an underfunded research area and might currently be the best choice for anyone who seeks to do good by contributing money to an important cause.

What I think is unjustified:

  • This is crunch time. This is crunch time for the entire human species. And it’s crunch time not just for us, it’s
...
Rain
  • Charitable giving in the US in 2010: ~$290,890,000,000
  • SI's annual budget for 2010: ~$500,000
  • US Peace Corps volunteers in 2010 (3 years of service in a foreign country for sustenance wages): ~8,655
  • SI volunteers in 2010 (work from home or California hot spots): like 5?
XiXiDu
I am not sure what you are trying to tell me by those numbers. I think that there are a few valid criticisms regarding SI as an organization. It is also not clear that they could usefully spend more than ~$500,000 at this time. In other words, even if risks from AI were by far (not just slightly) the most important cause, it is not clear that contributing money to SI is better than withholding funds from it at this point. If, for example, they can't usefully spend more money at this point, and there is nothing medium-probable that you yourself can do against AI risk right now, then you should move on to the next most important cause that needs funding and support it instead.
Rain
1. You think SI is "probably the top charity right now".
2. SI is smaller than the rounding error in US charitable giving.
3. You think they might have more than enough money.

Those don't add up.
Rain
I think it's funny.
thomblake
I think you misread "top charity" as "biggest charity" instead of "most important charity".
Rain
No, I didn't.

Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?

No, but there are lots of cults that say "we are the people to solve all the world's problems." Acknowledging the benefits of Division of Labour is un-cult-like.

Why does SI/LW focus so much on AI-FOOM disaster, with apparently much less concern for things like

  • Malthusian upload scenario

For my part I consider that scenario pretty damn close to the AI-FOOM, i.e. it'll quite probably result in a near-equivalent outcome but just take slightly longer before it becomes unstoppable.

Personally, I care primarily about AI risk for a few reasons. One is that it involves an extremely strong feedback loop. There are other dangerous feedback loops, including nanotech, and I am not confident which will be a problem first. But I think AI is the hardest risk to solve, and also has the most potential for negative utility. I also think that we are relatively close to being able to create AGI.

As far as I know, the SI is defined by its purpose of reducing AI risk. If other risks need long-term work, then each risk needs a dedicated group to work on it.

As for LW, I think it's simply that people read EY's writing on AI risk, and those that agree tend to stick around and discuss it here.

inachu
There are two forms of AI in my book, and either one contains risk: the learned AI, or the AI that comes with complete knowledge. To involve AI in risk assessment you will need the AI in the wilderness with nothing held back. Truly, though, would you do that to an AI? It's kind of like shoving all information down the brain of a 13-year-old girl. She would just go berserk and would become defiant in the end. The best alternative safe AI that contains no risk is the copied brain of a scientist.

To me it seems reasonable to focus on self-improving AI instead of wars and nanotechnology. If we get the AI right, then we can give it a task to solve our problems with wars, nanotechnology, et cetera (the "suboptimal singleton" problem is included in "getting the AI right"). One solution will help us with other solutions.

As an analogy, imagine yourself as an intelligent designer of your favorite species. You can choose to give them an upgrade: fast feet, thick fur, improved senses, or a human-like brain. Of course you should choose a hu...

The answer to your initial question is that Eliezer and Luke believe that if we create AI, the default result is that it kills us all or does something else equally unpleasant. And also that creating Friendly AI will be an extraordinarily good thing, in part (and only in part) because it would be excellent protection against other risks.

That said, I think there is a limit to how confident anyone ought to be in that view, and it is worth trying to prepare for other scenarios.

What does "doomsday cult" mean? I had been under the impression that it referred to groups like Heaven's Gate or Family Radio which prophesied a specific end-times scenario, down to the date and time of doomsday.

However, Wikipedia suggests the term originated with John Lofland's research on the Unification Church (the Moonies):

Doomsday cult is an expression used to describe groups who believe in Apocalypticism and Millenarianism, and can refer both to groups that prophesy catastrophe and destruction, and to those that attempt to bring it about.

...
timtyler
In one out of three quoted meanings? It seems to be a relatively unimportant factor to me.
Luke_A_Somers
The first, well, anyone raising a concern is going to have that. Numbers 2 and 3 (religious problem-solving, seekership) are right out. Number 4 (turning point), okay. Number 5 (formation of affective bonds)... I dunno, maaybe? I mean, you can't really blame a group for people liking it; I think this was meant way more strongly than what we have here. Number 6 (neutralization of external attachments)? Absolutely not. You didn't name the seventh, unless it's the deprivation, which again... no. So, arguably three out of seven, of which two are so common as to be kind of silly, and one of those was a major stretch. Whee.

SI/LW sometimes gives the impression of being a doomsday cult

To whom? In the post you linked, the main source of the concern (google hits) turned out not to mean the thing the author originally thought (edit: this is false. Sorry). Merely "raising the issue" is privileging the hypothesis.

Anywho, is the main idea of this post "this other bad stuff is similarly bad, and SI could be doing similar amounts to reduce the risk of these bad things"? I seem to recall their justification for focus on AI was that with self-improving ...

Wei Dai
The post was triggered by a private message from someone, so unfortunately I can't link to it. Not quite. I'm saying there are a bunch of Singularity-related risks that aren't AI risks, and a bunch of Singularity-related opportunities that aren't AI opportunities. The AI-related opportunities affect the non-AI risks, and the non-AI opportunities affect the AI risks. (For example successfully building FAI would prevent war as much as it prevents UFAI.) We shouldn't be thinking just about AI risks and opportunities at this point, or giving the impression that we are.
[anonymous]

bad memes/philosophies spreading among humans or posthumans and overriding our values

Well,

[This comment is no longer endorsed by its author]

I am going to assert that the fear of unfriendly AI over the threats you mention is a product of the same cognitive bias which makes us more fascinated by evil dictators and fictional dark lords than more mundane villains. The quality of "evil mind" is what really frightens us, not the impersonal swarm of "mindless" nanobots, viruses or locusts. However, since this quality of "mind," which encapsulates such qualities as "consciousness" and "volition," is so poorly understood by science and so totally undemo...

Zetetic
I'm going to assert that it has something to do with who started the blog.