So I read this, and my brain started brainstorming. None of the names I came up with were particularly good. But I did happen to produce a short mnemonic for explaining the agenda and the research focus of the Singularity Institute.
A one word acronym that unfolds into a one sentence elevator pitch:
Crisis: Catastrophic Risks in Self Improving Software
Lots of fun ways to play around with this term, to make it memorable in conversations.
It has some urgency to it, it's fairly concrete, it's memorable.
It compactly combines the goals of catastrophic risk reduction and self-improving systems research.
Bonus: You practically own this term already.
An incognito Google search gives me no hits for "Catastrophic Risks In Self Improving Software" when in quotes. Without quotes, top hits include the Singularity Institute, the Singularity Summit, and intelligencexplosion.com. Nick Bostrom and the Oxford group are also in there. I don't think he would mind too much.
This is clever but sounds too much like something out of Hollywood. I'd prefer bland but respectable.
It's worth noting that your current name has advantages too; people who are interested in the accelerating change singularity will naturally run into you guys. These are people, some pretty smart, who are at home with weird ideas and like thinking about the far future. Isn't this how Louie found out about SI?
Maybe instead of changing your name, you could spin out yet another organization (with most of your current crew) to focus on AI risk, and leave the Singularity Institute as it is to sponsor the Singularity Summit and so on. My impression is that SI has a fairly high brand value, so I would think twice before discarding part of that. Additionally, I know at least one person assumed the Singularity Summit was all you guys did. So having the summit organized independently of the main AI risk thrust could be good.
Center for AI Safety most accurately describes what you do.
To be honest, the I. J. Good Institute sounds the most prestigious.
Beneficial Architectures Research makes you sound like you're researching earthquake safety or something similar. I don't think you necessarily need to shy away from the word "AI."
AI Impacts Research sounds incomplete, though I think it would sound good with the word "society," "foundation," or "institute" tacked onto either end.
I really like Center for AI Safety.
The AI Risk Reduction Center
Center for AI Risk Reduction
Institute for Machine Ethics
Center for Ethics in Artificial Intelligence
And I favor this kind of name change pretty strongly.
You can donate it to my startup instead, our board of directors has just unanimously decided to adopt this name. Paypal is fine. Our mission is developing heuristics for personal income optimization.
The obvious change if Singularity has been co-opted is the Institute for Artificial Intelligence. (but IAI is not a great acronym).
Institute for Artificial Intelligence Safety lets you keep the S, but it's in the wrong spot. Safety Institution for Artificial Intelligence is off-puttingly incorrect.
The Institute for Friendly Artificial Intelligence (pron. eye-fay) is IFAI... maybe?
If you go with the Center for Friendly Artificial Intelligence you get CFAI, sort of parallel to CFAR (if that's what you want).
Oh! If associating with CFAR is okay, then what's really lovely is the Center for Friendly Artificial Intelligence Research, acronym as CFAIR. (You could even get to do cute elevator pitches asking people how they'd program their obviously well-defined "fairness" into an AI.)
Edit: I do agree that "Friendly" is not, on the whole, desirable. I prefer "Risk Reduction" to "Safety", because I think Safety might bring a little bit of the same unsophistication that Friendly would bring.
Center for Friendly Artificial Intelligence Research
Including "Friendly" is good for those who understand that it is being used as a jargon term with a specific meaning. Unfortunately it could give an undesirable impression of unsophistication to the naive audience (which is the target).
I also strongly object to 'Friendly' being used in the name -- it's a technical term that I think people are very likely to misunderstand.
Paraphrasing, I believe it was said by an SIer that "if uFAI wasn't the most significant and manipulable existential risk, then the SI would be working on something else." If that's true, then shouldn't its name be more generic? Something to do with reducing existential risk...?
I think there are some significant points in favor of a generic name.
Outsiders will more likely see your current focus (FAI) as the result of pruning causes rather than leaping toward your passion -- imagine if GiveWell were called GiveToMalariaCauses.
By associating yourself directly with reducing existential risk, you gain status by connecting with existing high-status causes such as climate change. Moreover, this creates debate with supporters of other causes connected to existential risk -- which brings you acknowledgement and visibility.
The people you wish to convince won't be as easily mind-killed by research coming from "The Center for Reducing Existential Risk" or such.
Is it worth switching to a generic name? I'm not sure, but I believe it's worth discussing.
I have direct experience of someone highly intelligent, a prestigious academic type, dismissing SI out of hand because of its name. I would support changing the name.
Almost all the suggestions so far attempt to reflect the idea of safety or friendliness in the name. I think this might be a mistake, because for people who haven't thought about it much, this invokes images of Hollywood. Instead, I propose having the name imply that SI does some kind of advanced, technical research involving AI and is prestigious, perhaps affiliated with a university (think IAS).
Center for Advanced AI Research (CAAIR)
The Center for AI Safety
Like it. What you actually do.
The I.J. Good Institute
Eww. Pretentious and barely relevant. Some guy who wrote a paper in 1965. Whatever. Do it if for some reason you think prestigious-sounding initials will give enough academic credibility to make up for having a lame, irrelevant name. Money and prestige are more important than self-respect.
Beneficial Architectures Research
Architectures? Word abuse! Why not go all the way and throw in "emergent"?
A.I. Impacts Research
Not too bad.
The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)
Worse, when you try to tell someone who already mainly associates the idea of the singularity with accelerating change curves about the distinctions between different types of singularity, they can, somewhat justifiably from their perspective, dismiss it as just a bunch of internal doctrinal squabbling among those loony people who obsess over technology curves, squabbling that it's really beneath them to investigate too deeply.
The Center for AI Safety-- best of the bunch. It might be clearer as The Center for Safe AI.
The I.J. Good Institute-- I have no idea what the IJ stands for.
Beneficial Architectures Research-- sounds like an effort to encourage better buildings.
A.I. Impacts Research-- reads like a sentence. It might be better as Research on AI Impacts.
I would guess that exactly zero of my non-Less Wronger friends have ever heard of I. J. Good.
Which is fine; to everyone else, it's some guy's name, with moderately positive affect. I'd be less in favour of this scheme if the idea of intelligence explosion had first been proposed by noted statistician I J Bad.
You are worried that the SIAI name signals a lack of credibility. You should be worried about what its people do. No, it's not the usual complaints about Eliezer. I'm talking about Will Newsome, Stephen Omohundro, and Ben Goertzel.
Will Newsome has apparently gone off the deep end: http://lesswrong.com/lw/ct8/this_post_is_for_sacrificing_my_credibility/6qjg The typical practice in these cases, as I understand it, is to sweep these people under the rug and forget that they had anything to do with the organization. This might not be the most intellectually honest thing to do, but it's more PR-minded than leaving them listed, and more polite than adding them to a hall of shame.
And, while the Singularity Institute is announcing that it is absolutely dangerous to build an AGI without proof of friendliness, two of its advisors, Omohundro and Goertzel, are, separately, attempting to build AGIs. Of course, this is only what I have learned from http://singularity.org/advisors/ -- maybe they have since changed their minds?
...but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization.
Why not just call it that, then ? "AI Risk Reduction Institute".
"Safe" is a wrong word for describing a process of rewriting the universe.
(An old tweet of mine; not directly relevant here.)
I think something about "Machine ethics" sounds best to me. "Machine learning" is essentially statistics with a computational flavor, but it has a much sexier name. You think statistics and you think boring tables, you think "machine learning" and you think Matrix or Terminator.
Joke suggestions: "Mom's friendly robot institute," "Institute for the development of typesafe wishes" (ht Hofstadter).
I think a name change is a great idea. I can certainly imagine someone being reluctant to associate their name with the "Singularity" idea even if they support what SIAI actually does. I think if I was a famous researcher/donor, I would be a bit reluctant to be strongly associated with the Singularity meme in its current degraded form. Yes, there are some high-status people who know better, but there are many more who don't.
Here is a suggestion: Center for Emerging Technology Safety. This name affiliates with the high-status term "emerging t...
Semi-serious suggestions:
Do we actually have rigorous evidence of a need for name change? It seems that we're seriously considering an expensive and risky move on the basis of mere anecdote.
It's quite likely you can solve the problem of people mis-associating SI with "accelerating change" without having to change names.
The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves.
What if the AI researcher read (or more likely, skimmed) the concise summary before responding to the potential supporter? At least this line in the first paragraph, “artificial intelligence beyond some threshold level would snowball, crea...
AI Impacts Research seems to me the best of the bunch, because it's pretty easy to understand. People who know nothing about Eliezer's work can see it and think, "Oh, duh AI will have an impact, it's worth thinking about that." On the other hand:
I actually suspect that the word "Singularity" serves as a way of differentiating you from the huge number of academic institutes to do with AI so I'm not endorsing change necessarily.
However, if you do change, I vote for something to do with the phrase "AI Risk" - your marketing spiel is about reducing risk, and I think your name will attract more donor attention if people can see a purpose rather than a generic name. As such, I vote against "I.J. Good Institute".
I also think "Beneficial Architectures Research" is...
A.I. Safety Foundation
Center for existential risk reduction
Friendly A.I. Group
A.I. Ethics Group
Institute for A.I. ethics
"The Mandate is a Gnostic School founded by Seswatha in 2156 to continue the war against the Consult and to protect the Three Seas from the return of the No-God.
... [it] also differs in the fanaticism of its members: apparently, all sorcerers of rank continuously dream Seswatha's experiences of the Apocalypse every night ...
...the power of the Gnosis makes the Mandate more than a match for schools as large as, say, the Scarlet Spires."
No-God/UFAI, Gnosis/x-rationality, the Consult/AGI community? ;-)
Does this mean it's too late to suggest "The Rationality Institute for Human Intelligence" for the recent spin-off, considering the original may no longer run parallel to that?
Seriously though, and more to the topic, I like "The Center for AI Safety", not only because it sounds good and is unusually clear as to the intention of the organization, but also because it would apparently, well, run parallel with "The Center for Modern Rationality" (!), which is (I think) the name that was ultimately (tentatively?) picked for the spin-off.
Come to think of it, SI have a bigger problem than the name: getting a cooler logo than these guys.
/abg frevbhf
More than that, many people in SU-affiliated circles use the word "Singularity" by itself to mean Singularity University ("I was at Singularity"), or else next-gen technology; and not any of the three definitions of the Singularity. These are smart, innovative people, but some may not even be familiar with Kurzweil's discussion of the Singularity as such.
I'd suggest using the name change as part of a major publicity campaign, which means you need some special reason for the campaign, such as a large donation (see James Miller's excellent idea).
A suggestion: it may be a bad idea to use the term 'artificial intelligence' in the name without qualifiers, because to serious people in the field 'artificial intelligence' has a much, much broader meaning than what SI is concerning itself with, and there is very significant disdain for the commonplace/'science fiction' use of 'artificial intelligence'.
I like "AI Risk Reduction Institute". It's direct, informative, and gives an accurate intuition about the organization's activities. I think "AI Risk Reduction" is the most intuitive phrase I've heard so far with respect to the organization.
I'll focus on "The Center for AI Safety", since that seems to be the most popular. I think "safety" comes across as a bit juvenile, but I don't know why I have that reaction. And if you say the actual words Artificial Intelligence, "The Center for Artificial Intelligence Safety" it gets to be a mouthful, in my opinion. I think a much better option is "The Center for Safety in Artificial Intelligence", making it CSAI, which is easily pronounced See-Sigh.
You could reuse the name of the coming December conference, and go for AI Impacts (no need to add "institute" or "research").
Center for AI Ethics Research
Center for Ethical AI
Singularity Institute for Ethical AI
The Good Future Research Center
A wink to the earlier I.J. Good Institute idea, it matches the tone of the current logo while being unconfining in scope.
It would be nice if the name reflected SI's concern that the dangers come not just from some cunning killer robots escaping a secret government lab, a Skynet gone amok, or a Frankenstein monster constructed by a mad scientist, but from recursive self-improvement ("intelligence explosion") of an initially innocuous and not-very-smart contraption.
I am also not sure whether the qualifier "artificial" conveys the right impression, as the dangers might come from an augmented human brain suddenly developing the capacity for recursive s...
I agree that something along the lines of "AI Safety" or "AI Risk Reduction" or "AI Impacts Research" would be good. It is what the organization seems to be primarily about.
As a side-effect, it might deter folks from asking why you're not building AIs, but it might make it harder to actually build an AI.
I'd worry about funding drying up from folks who want you to make AI faster, but I don't know the distribution of reasons for funding.
I'd prefer AI Safety Institute over Center for AI Safety, but I agree with the others that that general theme is the most appropriate given what you do.
The Centre for the Development of Benevolent Goal Architectures
Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)
For what it...
Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?
You currently have 290 posts on LessWrong and Zero (0) total Karma.
I don't care about opinion of a bunch that is here on LW.
Others: please do not feed the trolls.
Heh. It's a pretty rare organisation that does Research in Artificial Intelligence Risk Reduction.
(Artificial Intelligence Risk Reduction by itself might work.)
Here's a few:
While the concise summary clearly associates SI with Good's intelligence explosion, nowhere does it specifically say anything about Kurzweil or accelerating change. If people really are getting confused about what sort of singularity you're thinking about, would it be helpful as a temporary measure to put some kind of one-sentence disclaimer in the first couple paragraphs of the summary? I can understand that maybe this would only further the association between "singularity" and Kurzweil's technology curves, but if you don't want to lose the wor...
Ok.
The Center for AI Safety and the Centre for Friendly Artificial Intelligence Research sound the most correct as of now.
If you wanted to aim for a more creative name, then here are some:
Centre for Coding Goodness
Man's Best Friend Group (If the slightly implied sexism of "Man's" is Ok..)
The Artificial Angels Institute / Centre for Machine Angels - The word "angels" directly conveys goodness and superiority over humans, but due to its Christian origins and other associated imagery, it might be walking a tightrope.
Man's Best Friend Group (If the slightly implied sexism of "Man's" is Ok..)
You're naming your research institute after a pet-dog reference, and it's the non-gender-neutral word that seems like the problem?
Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)
Of course, the 'singularity' we're talking about at SI is intelligence explosion, not accelerating change, and intelligence explosion doesn't depend on accelerating change. The term "singularity" used to mean intelligence explosion (or "the arrival of machine superintelligence" or "an event horizon beyond which we can't predict the future because something smarter than humans is running the show"). But with the success of The Singularity is Near in 2005, most people know "the singularity" as "accelerating change."
How often do we miss out on connecting to smart people because they think we're arguing for Kurzweil's curves? One friend in the U.K. told me he never uses the word "singularity" to talk about AI risk, because the people he knows think the "accelerating change" singularity is "a bit mental."
LWers are likely to have attachments to the word 'singularity,' and the term does often mean intelligence explosion in the technical literature, but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization. If the 'singularity' term is keeping us away from many of the people we care most about reaching, maybe we should change it.
Here are some possible alternatives, without trying too hard:
We almost certainly won't change our name within the next year, but it doesn't hurt to start gathering names now and do some market testing. You were all very helpful in naming "Rationality Group". (BTW, the winning name, "Center for Applied Rationality," came from LWer beoShaffer.)
And, before I am vilified by people who have as much positive affect toward the name "Singularity Institute" as I do, let me note that this was not originally my idea, but I do think it's an idea worth taking seriously enough to bother with some market testing.