As of May 2014, there is a new existential risk research and outreach organization based in the Boston area: the Future of Life Institute (FLI). Spearheaded by Max Tegmark, it was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.

Our idea was to create a hub on the US East Coast to bring together people who care about x-risk and the future of life. FLI is currently run entirely by volunteers and is organized around brainstorming meetings where members come together to discuss active and potential projects. The attendees are a mix of local scientists, researchers and rationalists, which results in a diversity of skills and ideas. We also hold more narrowly focused meetings where smaller groups work on specific projects. Projects in the pipeline range from improving Wikipedia resources related to x-risk to bringing together AI researchers to develop safety guidelines and make the topic of AI safety more mainstream.

Max has assembled an impressive advisory board that includes Stuart Russell, George Church and Stephen Hawking. The advisory board is not just for prestige - the local members attend our meetings, and some others participate in our projects remotely. We consider ourselves a sister organization to FHI, CSER and MIRI, and touch base with them often.

We recently held our launch event, a panel discussion "The Future of Technology: Benefits and Risks" at MIT. The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn. The discussion covered a broad range of topics from the future of bioengineering and personal genetics, to autonomous weapons, AI ethics and the Singularity. A video and transcript are available.

FLI is a grassroots organization that thrives on contributions from awesome people like the LW community - here are some ways you can help:

  • If you have ideas for research or outreach we could be doing, or improvements to what we're already doing, please let us know (in the comments to this post, or by contacting me directly).
  • If you are in the vicinity of the Boston area and are interested in getting involved, you are especially encouraged to get in touch with us!
  • Support in the form of donations is much appreciated. (We are grateful for seed funding provided by Jaan Tallinn and Matt Wage.)
More details on the ideas behind FLI can be found in this article.
Robin:

"We consider ourselves a sister organization to FHI, CSER and MIRI, and touch base with them often."

How would you differentiate yourself from those organizations?

Vika:

MIRI is focusing on technical research into Friendly AI, and their recent mid-2014 strategic plan explicitly announced that they are leaving the public outreach and strategic research to FHI, CSER and FLI. Compared to FHI and CSER, we are less focused on research and more on outreach, which we are well-placed to do given our strong volunteer base and academic connections. Our location allows us to directly engage Harvard and MIT researchers in our brainstorming and decision-making.

Yeah, just in case this isn't obvious to everyone: I'm excited about FLI and very grateful to Max, Meia, Jaan, Vika, Anthony, and everyone else for all the hard work they're doing over there on the East Coast.

I remember when Max & Meia visited MIRI during Max's Our Mathematical Universe book tour and Max said "I'm thinking of focusing more of my time on x-risk stuff. How can I help?"

I can't remember what I asked for, but it was somewhat more modest than "Please assemble a stellar advisory board and launch a new x-risk organization at MIT." I didn't know I could ask for that! :)

OK, so it seems like FLI promotes the conclusions of other x-risk organizations, but doesn't do any actual research itself.

Do you think it's not worth questioning the conclusions that other organizations have come to? Seems to me that if there are four x-risk organizations (each with reasonably strong connections to the others), there should be some debate between them.

Vika:

What kind of questions would you expect the organizations to disagree about?

I don't know, but if you ask intelligent people what they think about x-risk related to AI, it's unlikely they'll come to the exact same conclusions that MIRI et al. have.

If you present the ideas of MIRI to intelligent people, some of them will be excited and want to help with donations or volunteering. Others will dismiss you and think you are wrong/crazy.

So to expand on my question... if you find intelligent people who disagree with MIRI on significant things, will you work with them?

V_V:

Why is Morgan Freeman listed as a member of the scientific advisory board?

Probably because there's only the one advisory board, and they decided to call it the 'scientific' advisory board because all but two are scientists.

Vika:

Morgan Freeman is an experienced science communicator, and he can advise us on science outreach.

I think what you're doing is marvellous. Your first link is broken.

Vika:

Thanks - fixed!

Thanks for creating FLI! Just one question: how on Earth did you get Morgan Freeman and Alan Alda on board?

Vika:

Both of them generally care about science and the future. Also, Max Tegmark had pre-existing connections with them :).

[anonymous]:

At the moment, the "Get Involved" page only mentions donations. I certainly understand the need for donations, but I'm curious: are you considering other ways to involve the interested or passionate? As this is an outreach group, I suspect participation and communication both play a large part in your long term plans. Do you have any ideas for getting people more involved or connected with what you are doing, either through volunteering, discussion, or collaboration?

Thanks for posting this and putting the word out. The website and people involved (as well as those who have commented here) both make me think there is good potential here as an outreach organization.

Vika:

Agreed! The "Get Involved" page has been fixed, and now also mentions volunteering. We have a number of locals from the Boston area who are attending our meetings and contributing to our projects, and a few remote volunteers as well.

Another way to get involved is to contribute to our "idea bank" by sending us suggestions for projects, talks, collaborations or research questions. Naturally, we will only be able to work on a fraction of the proposed ideas, but it's great to have a large pool to choose from. Thanks everyone for your contributions so far!

What would you say is the most effective organization to donate to in order to reduce artificial biology x-risks?

Vika:

No single organization comes to mind, though we have a long list of candidates - if any of them seem particularly effective, please let us know!

I cringe at the term x-risk.

It looks childish to me. It looks the same as x-treme.

http://tvtropes.org/pmwiki/pmwiki.php/Main/XMakesAnythingCool

I guess it's just me, and it's of no real consequence. But it seems to trivialize such a serious subject as existential risk.

Since you invoked TV Tropes, there's a TV Tropes fork at https://allthetropes.orain.org/wiki/. It gets rid of the censorship at TV Tropes and also uses MediaWiki, which makes things work better: you have real categories, it's possible to edit sections, etc.

So one of these finally got some traction, huh? That's mildly encouraging, although a straight fork without the censorship might have long-term problems distinguishing itself -- even with the better wiki software.

Regardless, probably better suited to the open thread.

"I cringe at the term x-risk."

Can you think of another five-letter description? The shorter the term, the easier a time people will have remembering it, and thus the meme will spread faster than a longer one.

Can one use the backwards-E existence symbol as one of the letters?

If we want ease-of-use, the fact that you typed out "backwards-E existence symbol" instead of "∃" isn't encouraging...

It seems intuitively obvious to me that since the risk event is an absence of existence, we should call them ∀-risks.

Yeah, "universal risks" instead of "existential risks" would've been a better name, though it's probably too late to change now.

Alien teenagers sending robots to bitch-slap every human on earth is a universal risk of bitch-slapping, but isn't an existential risk to humanity.

What matters is not how many people will remember it; it's how many people will remember it and take it seriously.

"The shorter the term, the easier a time people will have remembering it, and thus the meme will spread faster than a longer one."

Well...

Is x-risk what happens when x-men do x-rated x-treme stuff?

Vika, although FLI is focused on outreach rather than research, I think there is potential to pursue research paths parallel to MIRI's. They chose the best path they could find, but it is a narrow one, and other research directions could be pursued simultaneously by other organizations. Have you considered that?

Vika:

As a young organization, we are trying to avoid narrowing down our scope prematurely. We are more focused on outreach at the moment, but we are also interested in strategic research questions that might complement MIRI's technical research.

Have some ideas - PM me for email?

I am just confused by one word: "life". The confusion stems from the several uses of the word "life" in English. There are at least three usages, as exemplified by the following questions: 1) Is there life on Mars? 2) Is there life in this organism? 3) Is life worth living? My personal thinking is that "life" predates any physical manifestations thereof. In short, energy has been around forever, albeit in various forms. What would your definition of "life" be?

What are you guys' thoughts about the utility of engaging with Atlantic article commenters?