Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)
Of course, the 'singularity' we're talking about at SI is intelligence explosion, not accelerating change, and intelligence explosion doesn't depend on accelerating change. The term "singularity" used to mean intelligence explosion (or "the arrival of machine superintelligence" or "an event horizon beyond which we can't predict the future because something smarter than humans is running the show"). But with the success of The Singularity Is Near in 2005, most people now know "the singularity" as "accelerating change."
How often do we miss out on connecting with smart people because they think we're arguing for Kurzweil's curves? One friend in the U.K. told me he never uses the word "singularity" to talk about AI risk because the people he knows think the "accelerating change" singularity is "a bit mental."
LWers are likely to have attachments to the word 'singularity,' and the term does often mean intelligence explosion in the technical literature, but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization. If the 'singularity' term is keeping us away from many of the people we care most about reaching, maybe we should change it.
Here are some possible alternatives, without trying too hard:
- The Center for AI Safety
- The I.J. Good Institute
- Beneficial Architectures Research
- A.I. Impacts Research
We almost certainly won't change our name within the next year, but it doesn't hurt to start gathering names now and do some market testing. You were all very helpful in naming "Rationality Group". (BTW, the winning name, "Center for Applied Rationality," came from LWer beoShaffer.)
And, before I am vilified by people who have as much positive affect toward the name "Singularity Institute" as I do, let me note that this was not originally my idea, but I do think it's an idea worth taking seriously enough to bother with some market testing.
Paraphrasing, I believe an SIer once said that "if uFAI wasn't the most significant and manipulable existential risk, then SI would be working on something else." If that's true, then shouldn't its name be more generic? Something to do with reducing existential risk...?
I think there are some significant points in favor of a generic name.
- Outsiders will be more likely to see your current focus (FAI) as the result of pruning causes rather than leaping toward your passion -- imagine if GiveWell were called GiveToMalariaCauses.
- By attaching yourselves directly to reducing existential risk, you gain status by association with existing high-status causes such as climate change. Moreover, this creates debate with supporters of other causes connected to existential risk -- which gives you acknowledgement and visibility.
- The people you wish to convince won't be as easily mind-killed by research coming from "The Center for Reducing Existential Risk" or the like.
Is it worth switching to a generic name? I'm not sure, but I believe it's worth discussing.
I feel like you could get more general by using the "space of mind design" concept...
Like an Institute for Not Giving Immense Optimisation Power to an Arbitrarily Selected Point in Mindspace, but snappier.