
Suggest alternate names for the "Singularity Institute"

24 Post author: lukeprog 19 June 2012 04:42AM

Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

Of course, the 'singularity' we're talking about at SI is intelligence explosion, not accelerating change, and intelligence explosion doesn't depend on accelerating change. The term "singularity" used to mean intelligence explosion (or "the arrival of machine superintelligence" or "an event horizon beyond which we can't predict the future because something smarter than humans is running the show"). But with the success of The Singularity is Near in 2005, most people know "the singularity" as "accelerating change."

How often do we miss out on connecting to smart people because they think we're arguing for Kurzweil's curves? One friend in the U.K. told me he never uses the word "singularity" to talk about AI risk because the people he knows think the "accelerating change" singularity is "a bit mental."

LWers are likely to have attachments to the word 'singularity,' and the term does often mean intelligence explosion in the technical literature, but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization. If the 'singularity' term is keeping us away from many of the people we care most about reaching, maybe we should change it.

Here are some possible alternatives, without trying too hard:

 

  • The Center for AI Safety
  • The I.J. Good Institute
  • Beneficial Architectures Research
  • A.I. Impacts Research

 

We almost certainly won't change our name within the next year, but it doesn't hurt to start gathering names now and do some market testing. You were all very helpful in naming "Rationality Group". (BTW, the winning name, "Center for Applied Rationality," came from LWer beoShaffer.)

And, before I am vilified by people who have as much positive affect toward the name "Singularity Institute" as I do, let me note that this was not originally my idea, but I do think it's an idea worth taking seriously enough to bother with some market testing.

Comments (152)

Comment author: Jack 19 June 2012 02:19:12PM *  16 points [-]

I really like Center for AI Safety.

The AI Risk Reduction Center

Center for AI Risk Reduction

Institute for Machine Ethics

Center for Ethics in Artificial Intelligence

And I favor this kind of name change pretty strongly.

Comment author: NancyLebovitz 19 June 2012 02:44:18PM 9 points [-]

"Risk Reduction" is very much in the spirit of "Less Wrong".

Comment author: Bugmaster 19 June 2012 05:11:52PM 4 points [-]

I like "Institute for Machine Ethics", though some people could find the name a bit pretentious.

Comment author: Kaj_Sotala 20 June 2012 09:34:39AM 2 points [-]

Machine Ethics is more associated with narrow AI, though.

Comment author: Alex_Altair 19 June 2012 11:03:40PM 0 points [-]

I think the word "machine" is too reminiscent of robots.

Comment author: yli 19 June 2012 12:30:16PM *  11 points [-]

The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

Worse, when you try to tell someone who already mainly associates the idea of the singularity with accelerating change curves about the distinctions between different types of singularity, they can, somewhat justifiably from their perspective, dismiss it as just a bunch of internal doctrinal squabbling among those loony people who obsess over technology curves, squabbling that it's really beneath them to investigate too deeply.

Comment author: IlyaShpitser 19 June 2012 04:13:39PM *  7 points [-]

I think something about "Machine ethics" sounds best to me. "Machine learning" is essentially statistics with a computational flavor, but it has a much sexier name. You think statistics and you think boring tables, you think "machine learning" and you think Matrix or Terminator.

Joke suggestions: "Mom's friendly robot institute," "Institute for the development of typesafe wishes" (ht Hofstadter).

Comment author: i77 19 June 2012 05:49:40PM 3 points [-]

Singularity Institute for Machine Ethics.

Keep the old brand, add clarification about flavor of singularity.

Comment author: ChrisHallquist 20 June 2012 02:28:02AM 1 point [-]

I like this one a lot. It's a term with a clear meaning in the existing literature.

Comment author: thomblake 20 June 2012 01:51:07PM 0 points [-]

But Machine Ethics generally refers to narrow AI - I think it's too vague (but then, "AI" might have the same problem).

Comment author: Bugmaster 19 June 2012 07:23:15AM 7 points [-]

...but neither of these is a strong reason to keep the word 'singularity' in the name of our AI Risk Reduction organization.

Why not just call it that, then? "AI Risk Reduction Institute".

Comment author: RichardKennaway 19 June 2012 09:10:48PM 27 points [-]

LessDoomed.

Comment author: ChrisHallquist 20 June 2012 02:25:53AM *  13 points [-]

Upvoted for funny, but probably not a great name for a non-profit.

Comment author: John_Maxwell_IV 19 June 2012 05:55:45AM *  23 points [-]

It's worth noting that your current name has advantages too; people who are interested in the accelerating change singularity will naturally run into you guys. These are people, some pretty smart, who are at home with weird ideas and like thinking about the far future. Isn't this how Louie found out about SI?

Maybe instead of changing your name, you could spin out yet another organization (with most of your current crew) to focus on AI risk, and leave the Singularity Institute as it is to sponsor the Singularity Summit and so on. My impression is that SI has a fairly high brand value, so I would think twice before discarding part of that. Additionally, I know at least one person assumed the Singularity Summit was all you guys did. So having the summit organized independently of the main AI risk thrust could be good.

Comment author: Alex_Altair 19 June 2012 06:25:28AM 2 points [-]

The spin-off sounds a little appealing to me too, but the problem is that the Summit provides a lot of their revenue.

Comment author: John_Maxwell_IV 19 June 2012 11:12:07PM 1 point [-]

Good point. Maybe this could continue to happen though with sufficiently clever lawyering.

Comment author: negamuhia 28 August 2012 04:33:03PM *  0 points [-]

I agree. You should change the name iff your current name-brand is irreparably damaged. Isn't that an important decision procedure for org rebrands? I forget.

EDIT: Unless, of course, the brand is already irreparably damaged...in which case this "advice" would be redundant!

Comment author: wedrifid 19 June 2012 08:56:57AM *  14 points [-]

The Center for AI Safety

Like it. What you actually do.

The I.J. Good Institute

Eww. Pretentious and barely relevant. Some guy who wrote a paper in 1965. Whatever. Do it if for some reason you think prestigious sounding initials will give enough academic credibility to make up for having a lame irrelevant name. Money and prestige are more important than self respect.

Beneficial Architectures Research

Architectures? Word abuse! Why not go all the way and throw in "emergent"?

A.I. Impacts Research

Not too bad.

Comment author: [deleted] 26 June 2012 08:22:24AM *  0 points [-]

How is it word abuse? "Architecture" is much more informative than "magic" or "thingy"; it conveys that they investigate how putting together algorithms results in optimization. That differentiates them from Givewell, The United Nations First Committee, the International Risk Governance Council, The Cato Institute, ICOS, Club of Rome, the Svalbard Global Seed Vault, the Foresight Institute, and most other organizations I can think of that study global economic / political / ecological stability, x-risk reduction, or optimal philanthropy.

Comment author: betterthanwell 19 June 2012 05:10:52PM *  27 points [-]

So I read this, and my brain started brainstorming. None of the names I came up with were particularly good. But I did happen to produce a short mnemonic for explaining the agenda and the research focus of the Singularity Institute.

A one word acronym that unfolds into a one sentence elevator pitch:

Crisis: Catastrophic Risks in Self Improving Software

  • "So, what do you do?"
  • "We do CRISIS research, that is, we work on figuring out and trying to manage the catastrophic risks that may be inherent to self improving software systems. Consider, for example..."

Lots of fun ways to play around with this term, to make it memorable in conversations.

It has some urgency to it, it's fairly concrete, it's memorable.
It compactly combines goals of catastrophic risk reduction and self improving systems research.

Bonus: You practically own this term already.

An incognito Google search gives me no hits for "Catastrophic Risks In Self Improving Software" in quotes. Without quotes, top hits include the Singularity Institute, the Singularity Summit, and intelligencexplosion.com. Nick Bostrom and the Oxford group are also in there. I don't think he would mind too much.

Comment author: Jack 19 June 2012 07:26:56PM *  13 points [-]

This is clever but sounds too much like something out of Hollywood. I'd prefer bland but respectable.

Comment author: betterthanwell 22 June 2012 12:26:18PM *  2 points [-]

This is clever but sounds too much like something out of Hollywood. I'd prefer bland but respectable.

I don't entirely disagree, but I do think Catastrophic Risks In Self-Improving Systems can be useful in pointing out the exact problem that the Singularity Institute exists to solve. I'm not at all sure that it would make a good name for the organisation itself. But I do think it would perhaps raise fewer questions, and be less confusing, than The Singularity Institute for Artificial Intelligence or The Singularity Institute.

In particular, there would be little chance of confusion stemming from familiarity with Kurzweil's singularity from accelerating change.

There are lessons to be learned from Scientists are from Mars, the Public is from Earth, and first impressions are certainly important. That said, this description is less exaggerated than it may seem at first glance. The usage can be qualified in that the technical meanings of these words are established, mutually supportive and applicable.

Looking at the technical meaning of the words, the description is (perhaps surprisingly) accurate:

Catastrophe theory: Small changes in certain parameters of a nonlinear system can cause equilibria to appear or disappear, or to change from attracting to repelling and vice versa, leading to large and sudden changes of the behaviour of the system.

Risk is the potential that a chosen action or activity (including the choice of inaction) will lead to a loss (an undesirable outcome). The notion implies that a choice having an influence on the outcome exists (or existed).

Is the CRISIS mnemonic / acronym overly dramatic?

Crisis: From Ancient Greek κρίσις (krisis, “a separating, power of distinguishing, decision, choice, election, judgment, dispute”), from κρίνω (krinō, “pick out, choose, decide, judge”)

A crisis is any event that is, or is expected to lead to, an unstable and dangerous situation affecting an individual, group, community or whole society. Crises are deemed to be negative changes in security, economic, political, societal or environmental affairs, especially when they occur abruptly, with little or no warning. More loosely, it is a term meaning 'a testing time' or an 'emergency event'.

Usage: crisis (plural crises)

  • A crucial or decisive point or situation; a turning point.
  • An unstable situation, in political, social, economic or military affairs, especially one involving an impending abrupt change.
  • A sudden change in the course of a disease, usually at which the patient is expected to recover or die.
  • (psychology) A traumatic or stressful change in a person's life.
  • (drama) A point in a drama at which a conflict reaches a peak before being resolved.

Perhaps CRISIS is overly dramatic in common usage. But one could quite easily explain how the use of the term is qualified, and this in itself gives an attractive angle to journalists. In the process they would, inadvertently in a sense, explain what the Singularity Institute does and why its work is important.

Comment author: Michelle_Z 19 June 2012 06:14:57PM 2 points [-]

I agree. That doesn't sound bad at all.

Comment author: betterthanwell 19 June 2012 07:23:20PM *  5 points [-]

After thinking this over while taking a shower:

The CRISIS Research Institute — Catastrophic Risks In Self-Improving Systems
Or, more akin to the old name: Catastrophic Risk Institute for Self-Improving Systems

Hmm, maybe better suited as a book title than the name of an organization.

Comment author: faul_sname 20 June 2012 06:01:04AM 4 points [-]

It would make an excellent book title, wouldn't it.

Comment author: thomblake 20 June 2012 01:54:21PM 1 point [-]

That's brilliant.

Comment author: Epiphany 21 August 2012 03:52:51AM 0 points [-]

Center for Preventing a C.R.I.S.I.S. A.I.

C.R.I.S.I.S. A.I. could be a new term also.

Comment author: JonathanLivengood 19 June 2012 08:50:24PM 5 points [-]

Semi-serious suggestions:

  • Intelligence Explosion Risk Research Group
  • Foundation for Obviating Catastrophes of Intelligence (FOCI)
  • Foundation for Evaluating and Inhibiting Risks from Intelligence Explosion (FEIRIE)
  • Center for Reducing Intelligence Explosion Risks (CRIER)
  • Society for Eliminating Existential Risks (SEERs) of Intelligence Explosion
  • Center for Understanding and Reducing Existential Risks (CURER)
  • Averting Existential Risks from Intelligence Explosion (AERIE) Research Group (or Society or ...)
Comment author: gwern 19 June 2012 01:53:28PM *  5 points [-]

'A.I. Impact Institute', although that leads to the unfortunate acronym AIII...

Comment author: faul_sname 19 June 2012 03:09:05PM 8 points [-]

Though it is a remarkably accurate imitation of the reactions of those first hearing about it.

Comment author: Risto_Saarelma 19 June 2012 04:58:21PM 5 points [-]

You might get away with using AI3.

Comment author: crazy88 19 June 2012 08:32:51AM *  5 points [-]

I actually suspect that the word "Singularity" serves as a way of differentiating you from the huge number of academic institutes to do with AI, so I'm not necessarily endorsing a change.

However, if you do change, I vote for something to do with the phrase "AI Risk" - your marketing spiel is about reducing risk and I think your name will attract more donor attention if people can see a purpose rather than a generic name. As such, I vote against "I.J. Good Institute".

I also think "Beneficial Architectures Research" is too opaque a name and suspect (though with less certainty) that suggestions to do with "Friendly AI" are also too opaque (the name might seem cuddly but I don't think it will have deeper meaning to those who don't already know what you do).

I think something like "The Center for AI Safety" or "The AI Risk Institute" (TARI) would be your best bet (if you did decide a change was a good move).

Clearly, though, that's simply a list of one person's opinions on the matter.

Comment author: wedrifid 19 June 2012 06:26:37PM *  13 points [-]
  • Center for Helpful Artificial Optimizer Safety (CHAOS)
  • Center for Slightly Less Probable Extinction
  • Friendly Optimisation Of the Multiverse (FOOM)
  • Yudkowsky's Army
  • The Center for World Domination
  • Pinky and The Brain Institute
  • Cyberdyne Systems
Comment author: JGWeissman 19 June 2012 06:33:13PM 21 points [-]

The Center for World Domination

We prefer to think of it as World Optimization.

Comment author: Zetetic 19 June 2012 08:21:07PM *  7 points [-]

Winners Evoking Dangerous Recursively Improving Future Intelligences and Demigods

Comment author: wedrifid 19 June 2012 08:22:09PM *  3 points [-]

I commit to donating $20k to the organisation if they adopt this name! Or $20k worth of labor, whatever they prefer. Actually, make that $70k.

Comment author: Zetetic 19 June 2012 09:50:34PM 18 points [-]

You can donate it to my startup instead, our board of directors has just unanimously decided to adopt this name. Paypal is fine. Our mission is developing heuristics for personal income optimization.

Comment author: thomblake 20 June 2012 01:56:14PM 2 points [-]

Cyberdyne Systems

There's already a Cyberdyne making robotic exoskeletons and stuff in Japan.

Comment author: nshepperd 24 June 2012 04:30:50PM 0 points [-]

The Sirius Cybernetics Corporation?

Comment author: roll 21 June 2012 04:47:59AM *  0 points [-]

Center for Helpful Artificial Optimizer Safety

What concerns me is the lack of research into artificial optimizers in general... Artificial optimizers are commonplace already; they are algorithms that find optimal solutions to mathematical models, not ones that optimize the real world in the manner that SI is concerned with (correct me if I am wrong). Furthermore, the premise is that such optimizers would 'foom', and I fail to see how foom is not a type of singularity.

Comment author: [deleted] 26 June 2012 09:29:12AM 0 points [-]

Recent published SI work concerns AI safety. They have not recently published results on AGI, to whatever extent that is separable from safety research, for which I am very grateful. Common optimization algorithms do apply to mathematical models, but that doesn't limit their real world use; an implemented optimization algorithm designed to work with a given model can do nifty things if that model roughly captures the structure of a problem domain. Or to put it simply, models model things. SI is openly concerned with exactly that type of optimization, and how it becomes unsafe if enough zealous undergrads with good intentions throw this, that, and their grandmother's hippocampus into a pot until it supposedly does fantastic venture capital attracting things. The fact that SI is not writing papers on efficient adaptive particle swarms is good and normal for an organization with their mission statement. Foom was a metaphorical onomatopoeia for an intelligence explosion, which is indeed a commonly used sense of the term "technological singularity".

Comment author: roll 05 July 2012 03:25:19PM *  0 points [-]

SI is openly concerned with exactly that type of optimization, and how it becomes unsafe

Any references? I haven't seen anything that is in any way relevant to the type of optimization that we currently know how to implement. SI is concerned with the notion of a 'utility function', which appears very fuzzy and incoherent -- what is it, a mathematical function? What does it take as input, and what does it produce as output? The number of paperclips in the universe is given as an example of a 'utility function', but you can't have 'the universe' as the input domain of a mathematical function. In an AI, the 'utility function' is defined on the model rather than the world, and lacking a 'utility function' defined on the world, the work of ensuring correspondence between the model and the world is not an instrumental sub-goal arising from maximization of the 'utility function' defined on the model. This is a rather complicated, technical issue, and to be honest the SI stance looks indistinguishable from the confusion that would result from an inability to distinguish a function of the model from a property of the world, and the subsequent assumption that correspondence of model and world is an instrumental goal of any utility maximizer. (Furthermore, that sort of confusion would normally be expected as a null hypothesis when evaluating an organization so outside the ordinary criteria of competence.)

edit: also, by the way, it would improve my opinion of this community if, when you think that I am incorrect, you would explain your thought rather than click the downvote button. While you may want to signal to me that "I am wrong" by pressing the vote button, that, without other information, is unlikely to change my view on the technical side of the issue. Keep in mind that one cannot be totally certain of anything, and while this may be a normal discussion forum that happens to be owned by an AI researcher who is being misunderstood due to poor ability to communicate the key concepts he uses, it might also be a support group for pseudoscientific research, and the norm of substance-less disagreement would seem to be more probable in the latter than in the former.

Comment author: Michelle_Z 19 June 2012 06:51:35PM 0 points [-]

Creative and amusing, at least. :]

Comment author: ScottMessick 19 June 2012 06:04:14PM *  12 points [-]

I have direct experience of someone highly intelligent, a prestigious academic type, dismissing SI out of hand because of its name. I would support changing the name.

Almost all the suggestions so far attempt to reflect the idea of safety or friendliness into the name. I think this might be a mistake, because for people who haven't thought about it much, this invokes images of Hollywood. Instead, I propose having the name imply that SI does some kind of advanced, technical research involving AI and is prestigious, perhaps affiliated with a university (think IAS).

Center for Advanced AI Research (CAAIR)

Comment author: [deleted] 21 June 2012 03:50:01AM 1 point [-]

This name might actually sound scary to people worried about AI risks.

Comment author: roll 21 June 2012 04:40:38AM 0 points [-]

Hmm, what do you think would have happened with that someone if the name had been more attractive and that person had spent more time looking into SI? Do you think that person wouldn't ultimately have dismissed it? Many of the premises here seem more far-fetched than a singularity. I know that from our perspective it'd be great to have feedback from such people, but it wastes their time and it is unclear if that is globally beneficial.

Comment author: faul_sname 19 June 2012 05:12:52AM 18 points [-]

Center for AI Safety most accurately describes what you do.

To be honest, the I. J. Good Institute sounds the most prestigious.

Beneficial Architectures Research makes you sound like you're researching earthquake safety or something similar. I don't think you necessarily need to shy away from the word "AI."

AI Impacts Research sounds incomplete, though I think it would sound good with the word "society," "foundation," or "institute" tacked onto either end.

Comment author: radical_negative_one 19 June 2012 05:32:23AM 19 points [-]

IJ Good Institute would make me think that it was founded by IJ Good.

Comment author: Viliam_Bur 19 June 2012 11:18:54AM *  2 points [-]

I would suspect that it means "The Good Institute", something related to either philanthropy or religion, with a waving hand and smiling face the webmaster failed to mark properly as a Wingdings font. :D

Comment author: siodine 19 June 2012 06:05:07PM *  11 points [-]

Paraphrasing, I believe it was said by an SIer that "if uFAI wasn't the most significant and manipulable existential risk, then the SI would be working on something else." If that's true, then shouldn't its name be more generic? Something to do with reducing existential risk...?

I think there are some significant points in favor of a generic name.

  • Outsiders will more likely see your current focus (FAI) as the result of pruning causes rather than leaping toward your passion -- imagine if GiveWell were called GiveToMalariaCauses.

  • By attaching yourself directly with reducing existential risk, you bring yourself status by connecting with existing high status causes such as climate change. Moreover, this creates debate with supporters of other causes connected to existential risk -- this gives you acknowledgement and visibility.

  • The people you wish to convince won't be as easily mind-killed by research coming from "The Center for Reducing Existential Risk" or such.

Is it worth switching to a generic name? I'm not sure, but I believe it's worth discussing.

Comment author: shokwave 19 June 2012 07:45:22PM 2 points [-]

Is it worth switching to a generic name?

I feel like you could get more general by using the "space of mind design" concept....

Like an Institute for Not Giving Immense Optimisation Power to an Arbitrarily Selected Point in Mindspace, but snappier.

Comment author: beoShaffer 19 June 2012 07:58:19PM *  4 points [-]

A.I. Safety Foundation

Center for existential risk reduction

Friendly A.I. Group

A.I. Ethics Group

Institute for A.I. ethics

Comment author: NancyLebovitz 19 June 2012 08:35:09AM *  9 points [-]

The Center for AI Safety-- best of the bunch. It might be clearer as The Center for Safe AI.

The I.J. Good Institute-- I have no idea what the IJ stands for.

Beneficial Architectures Research-- sounds like an effort to encourage better buildings.

A.I. Impacts Research-- reads like a sentence. It might be better as Research on AI Impacts.

Comment author: pjeby 19 June 2012 05:10:04PM 5 points [-]

It might be clearer as The Center for Safe AI

Indeed - it better implies that you're actually working towards safe AI, as opposed to just worrying about whether it's going to be safe, or lobbying for OSHA-like safety regulations.

Comment author: Jayson_Virissimo 19 June 2012 08:46:02AM *  3 points [-]

The I.J. Good Institute-- I have no idea what the IJ stands for.

Irving John ("Jack").

I would guess that exactly zero of my non-Less Wronger friends have ever heard of I. J. Good.

Comment author: ciphergoth 19 June 2012 09:50:09AM 7 points [-]

I would guess that exactly zero of my non-Less Wronger friends have ever heard of I. J. Good.

Which is fine; to everyone else, it's some guy's name, with moderately positive affect. I'd be less in favour of this scheme if the idea of intelligence explosion had first been proposed by noted statistician I J Bad.

Comment author: Kaj_Sotala 19 June 2012 12:38:03PM *  1 point [-]

Now I have Johnny C Bad playing in my head.

(Well, not really, but it made for a fun comment.)

Comment author: ciphergoth 19 June 2012 04:11:06PM 1 point [-]

Better than Johnny D Ugly.

Comment author: Douglas_Knight 19 June 2012 03:31:25PM 1 point [-]

The I.J. Good Institute-- I have no idea what the IJ stands for.

Did you not understand that "I.J. Good" is a person's name? (Note that in this thread ciphergoth asserts that everyone recognizes the form as a name, despite your comment which I read as a counterexample.)

Comment author: pjeby 19 June 2012 05:08:25PM 3 points [-]

Did you not understand that "I.J. Good" is a person's name?

Until I read the comment thread, I thought maybe it was facetious and stood for "It's Just Good".

Comment author: NancyLebovitz 19 June 2012 04:11:54PM 3 points [-]

At this point, I'm not sure what I was thinking. It's plausible that knowing what the initials meant would be enough to identify the person.

I'm pretty sure I was thinking "ok, I. J. Good founded a foundation, but who cares?".

Comment author: TheOtherDave 19 June 2012 04:23:25PM 1 point [-]

I can imagine, upon discovering that the "I.J.Good Institute" is interested in developing stably ethical algorithms, deciding that the name was some sort of pun... that it stood for "Invariant Joint Good" or some such thing.

Comment author: Vladimir_Nesov 19 June 2012 12:53:45PM *  7 points [-]

"Safe" is a wrong word for describing a process of rewriting the universe.

(An old tweet of mine; not directly relevant here.)

Comment author: Pavitra 22 June 2012 06:22:23AM 3 points [-]

Do we actually have rigorous evidence of a need for name change? It seems that we're seriously considering an expensive and risky move on the basis of mere anecdote.

Comment author: shokwave 19 June 2012 05:26:54AM *  14 points [-]

The obvious change, if "Singularity" has been co-opted, is the Institute for Artificial Intelligence (but IAI is not a great acronym).

Institute for Artificial Intelligence Safety lets you keep the S, but it's in the wrong spot. Safety Institution for Artificial Intelligence is off-puttingly incorrect.

The Institute for Friendly Artificial Intelligence (pron. eye-fay) is IFAI... maybe?

If you go with the Center for Friendly Artificial Intelligence you get CFAI, sort of parallel to CFAR (if that's what you want).

Oh! If associating with CFAR is okay, then what's really lovely is the Center for Friendly Artificial Intelligence Research, acronym as CFAIR. (You could even get to do cute elevator pitches asking people how they'd program their obviously well-defined "fairness" into an AI.)

Edit: I do agree that "Friendly" is not, on the whole, desirable. I prefer "Risk Reduction" to "Safety", because I think Safety might bring a little bit of the same unsophistication that Friendly would bring.

Comment author: wedrifid 19 June 2012 11:09:16AM 14 points [-]

Center for Friendly Artificial Intelligence Research

Including "Friendly" is good for those that understand that it is being used as a jargon term with a specific meaning. Unfortunately it could give an undesirable impression of unsophisticated to the naive audience (which is the target).

Comment author: Dorikka 19 June 2012 11:50:14AM 12 points [-]

I also strongly object to 'Friendly' being used in the name -- it's a technical term that I think people are very likely to misunderstand.

Comment author: RichardHughes 19 June 2012 08:45:54PM 0 points [-]

Agreed that people are very likely to misunderstand it - however, even the obvious, naive reading still creates a useful approximation of what it is you guys actually do. I would consider that misreading to be a feature, not a flaw, because the layman's reading produces a useful layman's understanding.

Comment author: Dorikka 19 June 2012 10:08:00PM 4 points [-]

The approximation might end up being 'making androids to be friends with people', or some kind of therapy-related research. Seriously. Given that even many people involved with AGI research do not seem to understand that Friendliness is a problem, I don't think that the first impression generated by that word will be favorable.

It would be convenient to find some laymen to test on, since our simulations of a layman's understanding may be in error.

Comment author: RichardHughes 25 June 2012 07:04:49PM 0 points [-]

I have no ability to do any actual random selection, but you raise a good point - some focus group testing on laymen would be a good precaution to take before settling on a name.

Comment author: [deleted] 19 June 2012 05:42:45AM 1 point [-]

upvoted for CFAIR

Comment author: MarkusRamikin 19 June 2012 07:43:45AM 7 points [-]

I hate CFAIR.

Comment author: tgb 21 June 2012 12:35:07AM 3 points [-]

But then Eliezer and co. could be called CFAIRers!

Comment author: gwern 21 June 2012 01:02:21AM 1 point [-]

As long as they don't pledge themselves or emulated instances of themselves for 10 billion man-years of labor.

Comment author: Multiheaded 19 June 2012 08:25:17AM *  0 points [-]

So far I like IFAI best; it's concise and sounds like a logical update of SIAI.

"At first they were just excited about all kinds of singularities, now they've decided how to best get to one" is what someone who only ever heard the name "IFAI (formerly SIAI)" would think.

Comment author: James_Miller 19 June 2012 05:28:41AM 11 points [-]

Sell the naming rights.

Comment author: Jack 19 June 2012 02:23:10PM 16 points [-]

If you could sell it to a prestigious tech firm... "The IBM Institute for AI Safety" actually sounds pretty fantastic.

Comment author: LucasSloan 19 June 2012 06:42:21AM 7 points [-]

I think this comment is the first that I couldn't decide whether to upvote or downvote, but definitely didn't want to leave a zero.

Comment author: Manfred 19 June 2012 08:22:38AM 0 points [-]

Don't worry, I'll fix it.

Comment author: knb 19 June 2012 10:15:03AM 6 points [-]

I think a name change is a great idea. I can certainly imagine someone being reluctant to associate their name with the "Singularity" idea even if they support what SIAI actually does. I think if I was a famous researcher/donor, I would be a bit reluctant to be strongly associated with the Singularity meme in its current degraded form. Yes, there are some high-status people who know better, but there are many more who don't.

Here is a suggestion: Center for Emerging Technology Safety. This name affiliates with the high-status term "emerging technology", while terms with "Singularity" and even "AI" often (unfairly, in my opinion) strike people as being crackpot/kooky. Admittedly, this is less descriptive than some other possible names (but more descriptive than "The I.J. Good Institute"), but descriptiveness isn't the most important factor. Rather, you should consider what kind of organization potential donors or (high-status) employees would like to brag about to their non-LW-reading friends/family at dinner parties.

Comment author: Plasmon 19 June 2012 06:17:28PM *  1 point [-]

I understand that the original name can be taken as overly techno-optimistic/Kurzweilian. IMHO this name errs on the other side: it sets off Luddite-detecting heuristics.

Comment author: novalis 23 June 2012 03:04:45AM *  8 points [-]

You are worried that the SIAI name signals a lack of credibility. You should be worried about what its people do. No, it's not the usual complaints about Eliezer. I'm talking about Will Newsome, Stephen Omohundro, and Ben Goertzel.

Will Newsome has apparently gone off the deep end: http://lesswrong.com/lw/ct8/this_post_is_for_sacrificing_my_credibility/6qjg The typical practice in these cases, as I understand it, is to sweep these people under the rug and forget that they had anything to do with the organization. This might not be the most intellectually honest thing to do, but it's more PR-minded than leaving them listed, and more polite than adding them to a hall of shame.

And, while the Singularity Institute is announcing that it is absolutely dangerous to build an AGI without proof of Friendliness, two of its advisors, Omohundro and Goertzel, are, separately, attempting to build AGIs. Of course, this is only what I have learned from http://singularity.org/advisors/ -- maybe they have since changed their minds?

Comment author: wedrifid 23 June 2012 03:11:03AM 4 points [-]

And, while the Singularity Institute is announcing that it is absolutely dangerous to build an AGI without proof of Friendliness, two of its advisors, Omohundro and Goertzel, are, separately, attempting to build AGIs. Of course, this is only what I have learned from http://singularity.org/advisors/ -- maybe they have since changed their minds?

Goertzel is still there? I'm surprised.

Comment author: Halfwit 11 January 2013 01:53:10AM *  0 points [-]
Comment author: novalis 14 January 2013 05:55:37PM 0 points [-]

Does Kurzweil have anything to do with the Singularity Institute? Because I don't see him listed as a director or advisor on their site.

Comment author: Halfwit 15 January 2013 01:44:15AM 0 points [-]

He was an adviser. But I see he no longer is. Retracted.

Comment author: David_Gerard 19 June 2012 07:35:30AM 5 points [-]

"Singularity Institute? Oh, Kurzweil!" It's as if he has a virtual trademark on the word. Yeah.

Comment author: private_messaging 21 June 2012 04:51:06PM *  0 points [-]

To think about it, the SIAI name worked in favour of my evaluation of SI. I sort of mixed up EY with Kurzweil, thought that EY had created some character-recognition software and whatnot. Kurzweil is pretty low status, but it's not zero. What I see instead is a person who by the looks of it likely wouldn't even be able to implement belief propagation with loops in the graph, or at least never considered what's involved (as evident from the rationality/bayesianism stuff here, the Bayes vs. science stuff, and so on). You know, if I were preaching rationality, I'd make a Bayes belief-propagation applet with nodes and lines connecting them, for demonstration of possible failure modes also (and investigation of how badly incompleteness of the graph breaks it, as well as demonstration of NP-completeness in certain cases). I can do that in a week or two. edit: actually, perhaps I'll do that sometime. Or actually, I think there are such applications for medical purposes.

Comment author: David_Gerard 21 June 2012 09:56:46PM 0 points [-]

A simple open-source one would be an actually useful thing to show people failure modes and how not to be stupid.

Comment author: private_messaging 22 June 2012 12:13:26AM *  2 points [-]

Well, it won't be useful for making a glass-eyed 'we found truth' cult, because it'd actually kill the confidence, in the Dunning-Kruger way where the more competent are less confident.

The guys here haven't even wondered how exactly you 'propagate' when A is evidence for B and B is evidence for C and C is evidence for A (or when you only see a piece of a cycle, or several cycles intersecting). Or when there are unknown nodes. Or what happens outside the nodes that were added based on reachability or importance, or selected to be good for the wallet of the dear leader. Or how badly it breaks if some updates land on the wrong nodes. Or how badly it breaks when you ought to update on something outside the (known) graph but pick the closest-looking something inside it. Or how low the likelihood of correctness gets when there's some likelihood of such errors. Or how difficult it is to ensure sane behaviour on partial graphs. Or how all kinds of sloppiness break the system entirely, making it arrive at spurious very high and very low probabilities.

People go into such stuff for immediate rewards - the now-I-feel-smarter-than-others kind of stuff.
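[The cyclic-evidence failure mode gestured at above (A is evidence for B, B for C, C for A) is easy to demonstrate. The sketch below is a hypothetical illustration, not anything the commenter or SI built: it runs sum-product ("loopy") belief propagation naively on a three-variable cycle and compares the resulting beliefs to exact marginals computed by brute force. On a graph with a loop, BP's answers are only approximate. All the numbers are made up for the example.]

```python
import itertools

# Three binary variables on a cycle A-B-C-A. Each pairwise potential
# mildly favors agreement; variable A carries a unary bias toward state 0.
PHI = [[2.0, 1.0], [1.0, 2.0]]                 # phi[x_i][x_j]
UNARY = [[3.0, 1.0], [1.0, 1.0], [1.0, 1.0]]   # unary[v][x_v]
EDGES = [(0, 1), (1, 2), (2, 0)]
NEIGHBORS = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def exact_marginals():
    """Ground truth: sum over all 2^3 joint assignments."""
    p = [[0.0, 0.0] for _ in range(3)]
    for a in itertools.product((0, 1), repeat=3):
        w = 1.0
        for v in range(3):
            w *= UNARY[v][a[v]]
        for i, j in EDGES:
            w *= PHI[a[i]][a[j]]
        for v in range(3):
            p[v][a[v]] += w
    return [normalize(row) for row in p]

def loopy_bp(iters=100):
    """Sum-product message passing, run naively despite the cycle."""
    msgs = {(i, j): [1.0, 1.0] for i in range(3) for j in NEIGHBORS[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            out = [0.0, 0.0]
            for xj in (0, 1):
                for xi in (0, 1):
                    # incoming evidence at i, excluding the message from j
                    prod = UNARY[i][xi] * PHI[xi][xj]
                    for k in NEIGHBORS[i]:
                        if k != j:
                            prod *= msgs[(k, i)][xi]
                    out[xj] += prod
            new[(i, j)] = normalize(out)
        msgs = new
    beliefs = []
    for v in range(3):
        b = [UNARY[v][0], UNARY[v][1]]
        for k in NEIGHBORS[v]:
            b = [b[x] * msgs[(k, v)][x] for x in (0, 1)]
        beliefs.append(normalize(b))
    return beliefs
```

[On this weakly coupled cycle the BP beliefs land close to the exact marginals, but slightly overconfident, and the gap widens as the potentials sharpen -- exactly the kind of behaviour worth probing before trusting a hand-built belief network.]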

Comment author: MarkusRamikin 19 June 2012 05:01:42AM 5 points [-]

Why did the "AI" part get dropped from "SIAI" again?

Comment author: VincentYu 19 June 2012 08:19:38AM *  5 points [-]

Zack_M_Davis on this:

(Disclaimer: I don't speak for SingInst, nor am I presently affiliated with them.)

But recall that the old name was "Singularity Institute for Artificial Intelligence," chosen before the inherent dangers of AI were understood. The unambiguous "for" is no longer appropriate, and "Singularity Institute about Artificial Intelligence" might seem awkward.

I seem to remember someone saying back in 2008 that the organization should rebrand as the "Singularity Institute For or Against Artificial Intelligence Depending on Which Seems to Be a Better Idea Upon Due Consideration," but obviously that was only a joke.

Comment author: wedrifid 19 June 2012 08:58:12AM 6 points [-]

So essentially the problem with "SIAI" is the letter "f" in the middle.

Comment author: Normal_Anomaly 20 June 2012 01:23:41AM 2 points [-]

The Singularity Institute was for AI before it was against it! :P

Comment author: CommanderShepard 19 June 2012 01:19:26PM 6 points [-]

Cerberus

Comment author: [deleted] 21 June 2012 06:02:59PM 1 point [-]

Ah yes, "Paperclip Maximizers..."

Comment author: [deleted] 20 June 2012 03:57:38AM 4 points [-]

Center for AI Safety sounds excellent actually.

Comment author: Arran_Stirton 19 June 2012 05:01:28PM 4 points [-]

It's quite likely you can solve the problem of people mis-associating SI with "accelerating change" without having to change names.

The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves.

What if the AI researcher read (or more likely, skimmed) the concise summary before responding to the potential supporter? At least this line in the first paragraph, “artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements,” doesn’t necessarily make it obvious enough that SI isn’t about “accelerating change”. (In fact, it sounds a lot like an accelerating-change-type idea.)

In my opinion at least, you need to get any potential supporter/critic to make the association between the name "Singularity Institute" and what SI actually does (and its goals) as soon as possible. While changing the name could do that, "Singularity Institute" has many useful aesthetic qualities that a replacement name probably won't have.

On the other hand, doing something like adding a clear tag-line about what SI does (e.g. "Pioneering safe-AI research") to the header would be a relatively cheap and effective solution. Perhaps rewriting the concise summary to discuss the dangers of a smarter-than-human AI before postulating the possibility of an intelligence explosion would also be effective, seeing as a smarter-than-human AI would need to be friendly, intelligence explosion or no.

Comment author: ChrisHallquist 19 June 2012 09:17:42AM *  4 points [-]

AI Impacts Research seems to me the best of the bunch, because it's pretty easy to understand. People who know nothing about Eliezer's work can see it and think, "Oh, duh AI will have an impact, it's worth thinking about that." On the other hand:

  • Center for AI Safety: not bad, but people who don't know Eliezer's work might wonder why we need it (same thing with a name involving "risk")
  • The I.J. Good Institute: Sounds prestigious, but gives no information to someone who doesn't know who I.J. Good is.
  • Beneficial Architectures Research: meaningless to 99% of the population, agree with whoever said people will think it's about bridge design.

Somehow, "AI Impact Research" sounds better than "Impact," perhaps to avoid the "reads as a sentence" thing, or just because my brain thinks of "impact" as one (possibly very complex) thing. Also agree that "society" or "institute" could go in there somewhere.

Comment author: JonathanLivengood 19 June 2012 03:45:52PM 3 points [-]

The I.J. Good Institute: Sounds prestigious, but gives no information to someone who doesn't know who I.J. Good is.

And gives potentially wrong information to someone who does know who I.J. Good is but doesn't know about his intelligence explosion work.

Comment author: incariol 27 June 2012 11:07:15AM 2 points [-]

Mandate

"The Mandate is a Gnostic School founded by Seswatha in 2156 to continue the war against the Consult and to protect the Three Seas from the return of the No-God.

... [it] also differs in the fanaticism of its members: apparently, all sorcerers of rank continuously dream Seswartha's experiences of the Apocalypse every night ...

...the power of the Gnosis makes the Mandate more than a match for schools as large as, say, the Scarlet Spires."

No-God/UFAI, Gnosis/x-rationality, the Consult/AGI community? ;-)

Comment author: Multiheaded 27 June 2012 11:33:47AM 0 points [-]

Haha, we're gonna see a lot more of such comparisons as the community extends.

Comment author: metaweta 19 June 2012 08:34:06PM 2 points [-]

AI Ballistics Lab? You're trying to direct the explosion that's already underway.

Comment author: GuySrinivasan 19 June 2012 06:27:44PM 2 points [-]

Center for General Artificial Intelligence Readiness Research

Comment author: JoshuaFox 19 June 2012 06:19:52AM *  2 points [-]

More than that, many people in SU-affiliated circles use the word "Singularity" by itself to mean Singularity University ("I was at Singularity"), or else next-gen technology; and not any of the three definitions of the Singularity. These are smart, innovative people, but some may not even be familiar with Kurzweil's discussion of the Singularity as such.

I'd suggest using the name change as part of a major publicity campaign, which means you need some special reason for the campaign, such as a large donation (see James Miller's excellent idea).

Comment author: Crux 20 June 2012 04:25:13AM *  3 points [-]

Does this mean it's too late to suggest "The Rationality Institute for Human Intelligence" for the recent spin-off, considering the original may no longer run parallel to that?

Seriously though, and more to the topic, I like "The Center for AI Safety", not only because it sounds good and is unusually clear as to the intention of the organization, but also because it would apparently, well, run parallel with "The Center for Modern Rationality" (!), which is (I think) the name that was ultimately (tentatively?) picked for the spin-off.

Comment author: [deleted] 19 June 2012 06:15:37PM 3 points [-]

The Last Organization.

Comment author: MarkusRamikin 19 June 2012 09:59:44AM *  2 points [-]

Come to think of it, SI have a bigger problem than the name: getting a cooler logo than these guys.

/abg frevbhf

Comment author: RobertLumley 19 June 2012 03:02:00PM 2 points [-]

I'll focus on "The Center for AI Safety", since that seems to be the most popular. I think "safety" comes across as a bit juvenile, but I don't know why I have that reaction. And if you say the actual words Artificial Intelligence, "The Center for Artificial Intelligence Safety" it gets to be a mouthful, in my opinion. I think a much better option is "The Center for Safety in Artificial Intelligence", making it CSAI, which is easily pronounced See-Sigh.

Comment author: mwengler 19 June 2012 03:24:35PM *  3 points [-]

On the one hand, "The Center for AI Safety" really puts me off. Who would want to associate with a bunch of people who are worried about the safety of something that doesn't even exist yet? Certainly you want to be concerned with Safety, but it should be subsidiary to the more appealing goal of actually getting something interesting to work.

On the other hand, if I weren't trying to have positive karma, I would have zero or negative karma, suggesting I am NOT the target demographic for this institute. And if I am not the target demographic, changing the name is a good idea because I like SIAI.

Comment author: shminux 19 June 2012 06:38:45PM *  1 point [-]

It would be nice if the name reflected SI's concern that the dangers come not just from some cunning killer robots escaping a secret government lab, or a Skynet gone amok, or a Frankenstein monster constructed by a mad scientist, but from recursive self-improvement ("intelligence explosion") of an initially innocuous and not-very-smart contraption.

I am also not sure whether the qualifier "artificial" conveys the right impression, as the dangers might come from an augmented human brain suddenly developing the capacity for recursive self-improvement, or from some other creation that does not look like a collection of silicon gates.

If I understand it correctly, SI wants to ensure "safe recursive self-improvement" of an intelligence of any kind, "safe" for the rest of the (human?) intelligences existing at that time, though not necessarily for the self-improver itself.

Of course, a name like "Society For Safe Recursive Self-Improvement" is both unwieldy and unclear to an outsider. (And the acronym sounds like parseltongue.) Maybe there is a way to phrase it better.

Comment author: wedrifid 19 June 2012 07:04:47PM 0 points [-]

I am also not sure whether the qualifier "artificial" conveys the right impression, as the dangers might come from an augmented human brain suddenly developing the capacity for recursive self-improvement, or some other creation that does not look like a collection of silicon gates.

The Singularity Institute (folks) does consider the dangers to be from the "artificial" things. They don't (unless I am very much mistaken) consider a human brain to have the possibility to recursively self-improve. Whole Brain Emulation FOOMing would fall under their scope of concern but that certainly qualifies as "artificial".

Comment author: RomeoStevens 19 June 2012 05:52:49AM -1 points [-]

Wasn't this discussed before?

Center for Applied Rationality Education?

Comment author: ciphergoth 19 June 2012 07:33:09AM 8 points [-]

You're thinking of the CfAR naming. CfAR has been spun out as a separate organisation from SI.

Comment author: RomeoStevens 19 June 2012 07:59:55AM 2 points [-]

ah yes.

Comment author: Zaine 20 June 2012 12:44:22AM 0 points [-]
  • Remedial Investigation [or Instruction] of Safety Kernel for AI [or: 'for AGI'; 'for Friendly AI'; 'for Friendly AGI'; 'for AGI Research'; etc.] (RISK for AI; RISK for Friendly AI)
  • Friendly Architectures Research (FAR)
  • Sapiens Friendly Research (SFR - pronounced 'Safer')
  • Sapiens' Research Foundation (SRF)
  • Sapiens' Extinction [or Existential] Risk Reduction Cooperative [or Conglomerate] (SERRC)
  • Researchers for Sapiens Friendly AI (RSFAI)
Comment author: VincentYu 19 June 2012 07:14:24AM *  0 points [-]

Retaining the meaning of 'intelligence explosion' without the word 'singularity':

Comment author: roll 21 June 2012 05:05:10AM 1 point [-]

A suggestion: it may be a bad idea to use the words 'artificial intelligence' in the name without qualifiers, because to serious people in the field

  • 'artificial intelligence' has a much, much broader meaning than what SI is concerning itself with

  • there is very significant disdain for the commonplace/'science fiction' use of 'artificial intelligence'

Comment author: [deleted] 21 June 2012 03:51:51AM *  1 point [-]

Center for AI Ethics Research

Center for Ethical AI

Singularity Institute for Ethical AI

Comment author: Nic_Smith 20 June 2012 05:13:37PM 1 point [-]

The Good Future Research Center

A wink to the earlier I.J. Good Institute idea, it matches the tone of the current logo while being unconfining in scope.

Comment author: [deleted] 20 June 2012 02:18:36PM 1 point [-]

Institute for Friendly Artificial Intelligence (IFAI).

Comment author: patrickscottshields 19 June 2012 04:38:16PM 1 point [-]

I like "AI Risk Reduction Institute". It's direct, informative, and gives an accurate intuition about the organization's activities. I think "AI Risk Reduction" is the most intuitive phrase I've heard so far with respect to the organization.

  • "AI Safety" is too vague. If I heard it mentioned, I don't think I'd have a good intuition about what it meant. Also, it gives me a bad impression because I visualize things like parents ordering their children to fasten their seatbelts.
  • "Beneficial Architectures" is too vague. It's not clear it's AI-related.
  • "AI Impacts Research" is too vague and non-prescriptive. Unlike "AI Risk Reduction", it's ambiguous in its intentions.
Comment author: thomblake 19 June 2012 03:13:13PM 1 point [-]

I agree that something along the lines of "AI Safety" or "AI RIsk Reduction" or "AI Impacts Research" would be good. It is what the organization seems to be primarily about.

As a side-effect, it might deter folks from asking why you're not building AIs, but it might make it harder to actually build an AI.

I'd worry about funding drying up from folks who want you to make AI faster, but I don't know the distribution of reasons for funding.

Comment author: Stuart_Armstrong 19 June 2012 02:29:45PM *  1 point [-]

You could reuse the name of the coming December conference, and go for AI Impacts (no need to add "institute" or "research").

Comment author: Gastogh 19 June 2012 12:33:52PM 1 point [-]

I'd prefer AI Safety Institute over Center for AI Safety, but I agree with the others that that general theme is the most appropriate given what you do.

Comment author: Clarity 18 September 2015 03:21:49PM 0 points [-]

Going by the Google-suggest principle, how about the AI Safety Syndicate (ASS)?

Comment author: gjm 18 September 2015 03:50:33PM 0 points [-]

Leaving aside the facts that (1) they already changed their name and (2) they probably don't want to be called "ASS" and (3) that webpage looks as sketchy as all hell ... what principle exactly are you referring to?

The "obvious" principle is this: if you start typing something that possible customers might start typing into the Google search box, and one of the first autocomplete suggestions is your name, you win. But if I type "ai safety" into a Google search box, "syndicate" is not one of the suggestions that come up. (Not even if I start typing "syndicate".)

(Perhaps you mean that having a name that begins with "ai safety" is a good idea if people are going to be searching for "ai safety", which is probably true but has nothing to do with Google's search suggestions. And are a lot of people actually going to be searching for "ai safety"?)

Comment author: [deleted] 21 June 2012 10:18:27PM 0 points [-]

Once, a smart potential supporter stumbled upon the Singularity Institute's (old) website and wanted to know if our mission was something to care about. So he sent our concise summary to an AI researcher and asked if we were serious. The AI researcher saw the word 'Singularity' and, apparently without reading our concise summary, sent back a critique of Ray Kurzweil's "accelerating change" technology curves. (Even though SI researchers tend to be Moore's Law agnostics, and our concise summary says nothing about accelerating change.)

For what it's worth, my instinct would be to send back a message (if I had the opportunity) saying, "Yes, I agree completely; I don't believe that Kurzweil's accelerating change argument has merit. In fact, I believe that most Singularity Institute researchers feel the same way. If you'd like to hear an argument in favor of FAI that does have merit, I'd suggest reading such-and-such."

Comment author: JGWeissman 21 June 2012 10:36:19PM 2 points [-]

That misses the point that SIAI only gets the chance to respond in such a way if the potential supporter actually contacts them and tells them the story. It makes you wonder how many potential supporters they never heard from because the supporter themself or someone the supporter asked for advice rejected a misunderstanding of what SIAI is about.

Comment author: shokwave 20 June 2012 06:45:32AM 0 points [-]

Heh. It's a pretty rare organisation that does Research in Artificial Intelligence Risk Reduction.

(Artificial Intelligence Risk Reduction by itself might work.)

Comment author: thomblake 20 June 2012 02:20:10PM 0 points [-]

That name reminds me eerily of RAINN.

Comment author: [deleted] 20 June 2012 03:48:47AM *  0 points [-]

Comment author: wedrifid 20 June 2012 05:00:21AM *  8 points [-]

"They Who Must Not Be Named"? Like it.

Comment author: Jay_Schweikert 19 June 2012 04:39:27PM 0 points [-]

While the concise summary clearly associates SI with Good's intelligence explosion, nowhere does it specifically say anything about Kurzweil or accelerating change. If people really are getting confused about what sort of singularity you're thinking about, would it be helpful as a temporary measure to put some kind of one-sentence disclaimer in the first couple paragraphs of the summary? I can understand that maybe this would only further the association between "singularity" and Kurzweil's technology curves, but if you don't want to lose the word entirely, it might help to at least make clear that the issue is in dispute.

Also, on a separate subject, I notice that the summary presently has a number of "??" marks, presumably as a kind of formatting error. Just a heads-up. :)

Comment author: blogospheroid 19 June 2012 10:36:46AM 0 points [-]

Ok.

The Center for AI Safety and Centre for Friendly Artificial Intelligence Research sound the most correct as of now.

If you wanted to aim for a more creative name, then here are some:

Centre for Coding Goodness

Man's Best Friend Group (If the slightly implied sexism of "Man's" is Ok..)

The Artificial Angels Institute / Centre for Machine Angels - The word "angels" directly conveys goodness and superiority over humans, but due to its Christian origins and other associated imagery, it might be walking a tightrope.

Comment author: wedrifid 19 June 2012 01:19:18PM 7 points [-]

Man's Best Friend Group (If the slightly implied sexism of "Man's" is Ok..)

Naming your research institute after a pet-dog reference, and it is the non-gender-neutral word that seems like the problem?

Comment author: blogospheroid 20 June 2012 09:21:48AM 2 points [-]

They'll come for the dogs, they'll stay for the AI. :)