
The Singularity Wars

Post author: JoshuaFox, 14 February 2013 09:44AM (50 points)

(This is an introduction, for those not immersed in the Singularity world, to the history of and relationships between SU, SIAI [SI, MIRI], SS, LW, CSER, FHI, and CFAR. It also includes some opinions, which are strictly my own.)

The good news is that there were no Singularity Wars. 

The Bay Area had a Singularity University and a Singularity Institute, each going in a very different direction. You'd expect to see something like the People's Front of Judea and the Judean People's Front, burning each other's grain supplies as the Romans moved in.

The Singularity Institute for Artificial Intelligence was founded first, in 2000, by Eliezer Yudkowsky.

Singularity University was founded in 2008. Ray Kurzweil, the driving force behind SU, was also active in SIAI, serving on its board in varying capacities in the years up to 2010.

SIAI's multi-part name was clunky, and their domain, singinst.org, unmemorable. I kept accidentally visiting siai.org for months, but it belonged to the Self Insurance Association of Illinois. (The cool new domain name singularity.org, recently acquired after a rather uninspired site appeared there for several years, arrived shortly before it was no longer relevant.) All the better to confuse you with, SIAI has been going for the last few years by the shortened name Singularity Institute, abbreviated SI.

The annual Singularity Summit was launched by SI, together with Kurzweil, in 2006. SS was SI's premier PR mechanism, mustering geek heroes to give their tacit endorsement of SI's seriousness, if not its views, by agreeing to appear on-stage.

The Singularity Summit was always off-topic for SI: more SU-like than SI-like. Speakers spoke about whatever technologically-advanced ideas interested them. Occasional SI representatives spoke about the Intelligence Explosion, but they too would often stray into other areas like rationality and the scientific process. Yet SS remained firmly in SI's hands.

It became clear over the years that SU and SI had almost nothing to do with each other except for the word "Singularity." The word has three major meanings, and of these, Yudkowsky favored the Intelligence Explosion while Kurzweil pushed Accelerating Change.

But actually, SU's activities have little to do with the Singularity, even under Kurzweil's definition. Kurzweil writes of a future, around the 2040s, in which the human condition is altered beyond recognition. But SU mostly deals with whizzy next-gen technology. They are doing something important, encouraging technological advancement with a focus on helping humanity, but they spend little time working on optimizing the end of our human existence as we know it. Yudkowsky calls what they do "technoyay." And maybe that's what the Singularity means, nowadays. Time to stop using the word.

(I've also heard SU graduates saying "I was at Singularity last week," on the pattern of "I was at Harvard last week," eliding "University." I think that that counts as the end of Singularity as we know it.)

You might expect SU and SI to get into a stupid squabble about the name. People love fighting over words. But to everyone's credit, I didn't hear squabbling, just confusion from those who were not in the know. Or you might expect SI to give up, change its name and close down the Singularity Summit. But lo and behold, SU and SI settled the matter sensibly, amicably, in fact ... rationally. SU bought the Summit and the entire "Singularity" brand from SI -- for money! Yes! Coase rules!

SI chose the new name Machine Intelligence Research Institute. I like it.

The term "Artificial Intelligence" got burned out in the AI Winter in the early 1990's. The term has been firmly taboo since then, even in the software industry, even in the  leading edge of the software industry. I did technical evangelism for Unicorn, a leading industrial ontology software startup, and the phrase "Artificial Intelligence" was most definitely out of bounds. The term was not used even inside the company. This was despite a founder with a CoSci PhD, and a co-founder with a masters in AI.

The rarely-used term "Machine Intelligence" throws off that baggage, and so, SI managed to ditch two taboo words at once.

The MIRI name is perhaps too broad. It could serve for any AI research group. The Machine Intelligence Research Institute focuses on decreasing the chances of a negative Intelligence Explosion and increasing the chances of a positive one, not on rushing to develop machine intelligence ASAP. But the name is accurate.

In 2005, the Future of Humanity Institute at Oxford University was founded, followed by the Centre for the Study of Existential Risk at Cambridge University in early 2013. FHI is doing good work, rivaling MIRI's and in some ways surpassing it. CSER's announced research area, and the reputations of its founders, suggest that we can expect good things. Competition for the sake of humanity! The more the merrier!

In late 2012, SI spun off the Center for Applied Rationality. Since 2008, much of SI's energy, and particularly Yudkowsky's, had gone to LessWrong.com and the field of rationality. As a tactic to bring in smart, committed new researchers and organizers, this was highly successful, and who can argue with the importance of being more rational? But as a strategy for saving humanity from existential AI risk, this second focus was a distraction. SI got the point, and split off CFAR.

Way to go, MIRI! So many of the criticisms I had about SI's strategic direction and its administration in the years since I first encountered it in 2005 have recently been resolved.

Next step: A much much better human future.

The TL;DR, conveniently at the bottom of the article to encourage you to actually read it, is:

  • MIRI (formerly SIAI, SI): Working to avoid existential risk from future machine intelligence, while increasing the chances of a positive outcome
  • CFAR: Training in applied rationality
  • CSER: Research towards avoiding existential risk, with future machine intelligence as a strong focus
  • FHI: Researching various transhumanist topics, but with a strong research program in existential risk and future machine intelligence in particular
  • SU: Teaching and encouraging the development of next-generation technologies
  • SS: An annual forum for top geek heroes to speak on whatever interests them. Favored topics include societal trends, next-gen science and technology, and transhumanism.

Comments (21)

Comment author: matt 14 February 2013 09:41:19PM 18 points

singularity.org, recently acquired after a rather confused and tacky domain-squatter abandoned it

I would not have described the previous owner of singularity.org as a domain squatter. He provided a small amount of relevant info and linked to other relevant organizations. SI/MIRI made more of the domain than he had, but that hardly earns him membership in that pestilential club.

He sold the domain rather than abandoning it, and behaved honestly and reasonably throughout the transaction.

Comment author: lukeprog 14 February 2013 09:45:38PM 10 points

Agree.

Comment author: JoshuaFox 14 February 2013 09:44:37PM 8 points

Thanks, and apologies if I wronged the previous owner. I have edited the post.

Comment author: lukeprog 14 February 2013 06:53:59PM 18 points

I wonder what an SU spokesperson would say about this post. I'm not sure Kurzweil was on SI's Board of Directors very long, and I think he at most attended one board meeting — they'd probably mention that. I'm pretty damn sure they'd contest your claim that "SU's activities have little to do with the Singularity, even under Kurzweil's definition." Remember that in Kurzweil's meaning, the singularity is about accelerating technological change up to the point where the world is transformed. SU aims to enable people to contribute to (or at least ride the wave of) accelerating technological change.

CSER was created in mid 2012, saw lots of press in late 2012, but still has not been funded. Together with FHI they submitted a large grant application (in early 2013) that would supply their initial funding if they win it.

Finally, I'll just note that SI's focus on rationality was quite purposeful. That's what Eliezer was talking about when he wrote that

there are many causes which benefit from the spread of rationality — because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization — where you could wish that people were a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts. The [Singularity Institute] was merely an unusually extreme case of this, wherein it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.

When writing about high technology, SI wasn't locating/creating enough people who were capable of thinking about the future in non-crazy ways. So Eliezer switched to the creation of a community focused on good thinking, and once he had gathered a mass of people who could see that value is fragile and therefore complex value systems are required to realize valuable futures, then SI started pushing on AI again.

Comment author: JoshuaFox 14 February 2013 08:16:53PM 3 points

I'm not sure Kurzweil was on SI's Board of Directors very long,

I got the 2007-10 dates from Wikipedia on SI. It has a citation for the 2010 date, and I vaguely recall seeing Kurzweil on the list of board members as far back as 2007.

[SU people would] contest your claim that "SU's activities have little to do with the Singularity,..."

Yes.

By the way, although the particular formulation is my own, I developed that opinion under the influence of certain SI people.

in Kurzweil's meaning, the singularity is about accelerating technological change up to the point where the world is transformed

To some extent. But Kurzweil talks a lot about a point in the future where accelerating change is blindingly fast. SU does not focus on that, but rather on the current relatively slow stage of acceleration.

Anyway, as I said, there's no sense arguing about words. The brand and the term Singularity can now be used to mean accelerating technological change in general, and it is now correct to say that SU is doing Singularity work.

CSER ... still has not been funded.

Thanks, I didn't know that. From all the noise, I thought that CSER was going strong.

SI wasn't locating/creating enough people who are capable of thinking about the future in non-crazy ways

Part of my skepticism about the need for another recruiting path was that I got into this in 2005 after simply stumbling upon the SIAI site -- and, maybe, perhaps, at least I hope, I think I'm not crazy :-)

I had assumed that others would do like me. One of those biases. But I see now that the rationality work really did a much better job of bringing in good people.

Comment author: lukeprog 14 February 2013 09:35:20PM 3 points

I got the 2007-10 dates from Wikipedia on SI. It has a citation for the 2010 date, and I vaguely recall seeing Kurzweil on the list of board members as far back as 2007.

Yeah. As far as I can tell from board meeting minutes during those years, there was a miscommunication between Tyler Emerson and Ray Kurzweil, and Ray didn't know he was on the Board during most of the time he was "on" the Board. :)

But this was all before I was around, so I don't have first-hand information to offer.

Part of my skepticism about the need for another recruiting path was that I got into this in 2005 after simply stumbling upon the SIAI site -- and, maybe, perhaps, at least I hope, I think I'm not crazy :-)

Many useful and non-crazy people did come through the writings about AI — including Edwin Evans, Louie Helm, yourself, and many others — but not enough of them. And those who did are probably better thinkers after reading The Sequences, anyway.

Comment author: JoshuaFox 14 February 2013 10:00:51PM 2 points

Thanks, I edited the text about Kurzweil and the board.

Comment author: JoshuaFox 14 February 2013 08:17:31PM 0 points

Typo: Please add the link for "Eliezer was talking about when he wrote..."

Comment author: lukeprog 14 February 2013 09:26:59PM 1 point

fixed, thanks

Comment author: ciphergoth 14 February 2013 06:24:53PM 13 points

I feel like the closest kin to MIRI/CFAR outside the existential risk world is the efficient charity world: GiveWell, GWWC, 80,000 Hours, Effective Animal Activism etc. There's also this: http://gcrinstitute.org/

Comment author: matt 14 February 2013 09:29:34PM 8 points

Comment author: Larks 14 February 2013 10:16:52AM 9 points

Good summary!

I was delighted when then-known-as-SI spun off the Summit. It was bizarre giving people who didn't really agree with us stage time, and in general non-core activities should be spun off. I think this is a very significant improvement and am likely to donate more to MIRI now (as I've donated in the past) than I would have had they kept the Summit in house.

Also, it's nice to see that MIRI and SU successfully applied the Coase theorem for optimal results.

Comment author: FiftyTwo 14 February 2013 11:21:50AM 6 points

Great post, I've been following for years and the relevant distinctions still confused me.

Comment author: Mitchell_Porter 21 February 2013 01:59:35AM 2 points

Is there a plan for "Machine Intelligence Summits" that address the subproblems of FAI? Since MIRI is still far from having a coding team.

Comment author: JoshuaFox 19 February 2013 12:24:32PM 2 points

For full accuracy, note that Yudkowsky and Kurzweil each had partners in founding SI and SU respectively. In the article, I focused on them as the headline personalities of each organization.

Comment author: moba 14 February 2013 01:00:17PM 2 points

I'd really like to know what MIRI's strategic position is on the whole unfriendliness issue.

Do they intend to solve FAI before someone else develops UAI? To convince the world to slow down?

From what I see, if they are right about the problem, then the situation is quite hopeless.

Comment author: JoshuaFox 14 February 2013 02:13:07PM 7 points

Luke's comment here gives some of the answers.

Comment author: asparisi 14 February 2013 05:25:59PM 4 points

I just hope that the newly-dubbed Machine Intelligence Research Institute doesn't put too much focus on advertising for donations.

That would create a MIRI-ad of issues.

Sorry, if I don't let the pun out it has to live inside my head.

Comment author: Vladimir_Nesov 14 February 2013 07:44:41PM 13 points

Yes, instead of putting too much focus on advertising, they should put the correct amount of focus on it. The argument also applies to the number of pens.

Comment author: RomeoStevens 15 February 2013 12:48:35AM 5 points

And paperclips, hopefully.

Comment author: Rukifellth 14 February 2013 07:20:57PM -3 points

Wait, did SIRI also advertise for donations?