Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)
Of course, the 'singularity' we're talking about at SI is intelligence explosion, not accelerating change, and intelligence explosion doesn't depend on accelerating change. The term "singularity" used to mean intelligence explosion (or "the arrival of machine superintelligence" or "an event horizon beyond which we can't predict the future because something smarter than humans is running the show"). But with the success of The Singularity is Near in 2005, most people know "the singularity" as "accelerating change."
How often do we miss out on connecting with smart people because they think we're arguing for Kurzweil's curves? One friend in the U.K. told me he never uses the word "singularity" to talk about AI risk because the people he knows think the "accelerating change" singularity is "a bit mental."
LWers are likely to have attachments to the word 'singularity,' and the term does often mean intelligence explosion in the technical literature, but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization. If the 'singularity' term is keeping us away from many of the people we care most about reaching, maybe we should change it.
Here are some possible alternatives, without trying too hard:
- The Center for AI Safety
- The I.J. Good Institute
- Beneficial Architectures Research
- A.I. Impacts Research
We almost certainly won't change our name within the next year, but it doesn't hurt to start gathering names now and do some market testing. You were all very helpful in naming "Rationality Group". (BTW, the winning name, "Center for Applied Rationality," came from LWer beoShaffer.)
And, before I am vilified by people who have as much positive affect toward the name "Singularity Institute" as I do, let me note that this was not originally my idea, but I do think it's an idea worth taking seriously enough to bother with some market testing.
I don't care about the opinion of the bunch that is here on LW. Also, that goal was within that particular thread. At the current point I am expressing my opinion on this whole anti-social activity of sitting around, looking at how a string was processed, and producing another string so as to maximize donations (and the general enterprise of looking at "why people think we're cranks" and changing just the appearance). Centre for AI Safety, huh. No one there has ever done anything that doesn't rely on an extreme singularity scenario (FOOM), and yet it's a "centre for AI safety," something that from the name ought to work on the safety of self-driving cars. (You may not care about my opinion, which is totally fine.)
I suppose it's too much to ask that a moderator get involved with someone who is clearly here to vent rather than to offer constructive criticism.