
New organization - Future of Life Institute (FLI)

44 points | Post author: Vika | 14 June 2014 11:00PM

As of May 2014, there is a new existential risk research and outreach organization based in the Boston area. The Future of Life Institute (FLI), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Aguirre and myself.

Our idea was to create a hub on the US East Coast to bring together people who care about x-risk and the future of life. FLI is currently run entirely by volunteers, and is based on brainstorming meetings where the members come together and discuss active and potential projects. The attendees are a mix of local scientists, researchers and rationalists, which results in a diversity of skills and ideas. We also hold more narrowly focused meetings where smaller groups work on specific projects. We have projects in the pipeline ranging from improving Wikipedia resources related to x-risk, to bringing together AI researchers in order to develop safety guidelines and make the topic of AI safety more mainstream.

Max has assembled an impressive advisory board that includes Stuart Russell, George Church and Stephen Hawking. The advisory board is not just for prestige - the local members attend our meetings, and some others participate in our projects remotely. We consider ourselves a sister organization to FHI, CSER and MIRI, and touch base with them often.

We recently held our launch event, a panel discussion "The Future of Technology: Benefits and Risks" at MIT. The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn. The discussion covered a broad range of topics from the future of bioengineering and personal genetics, to autonomous weapons, AI ethics and the Singularity. A video and transcript are available.

FLI is a grassroots organization that thrives on contributions from awesome people like the LW community - here are some ways you can help:

  • If you have ideas for research or outreach we could be doing, or improvements to what we're already doing, please let us know (in the comments to this post, or by contacting me directly).
  • If you are in the vicinity of the Boston area and are interested in getting involved, you are especially encouraged to get in touch with us!
  • Support in the form of donations is much appreciated. (We are grateful for seed funding provided by Jaan Tallinn and Matt Wage.)
More details on the ideas behind FLI can be found in this article.

Comments (35)

Comment author: Robin 14 June 2014 01:01:51AM 17 points [-]

We consider ourselves a sister organization to FHI, CSER and MIRI, and touch base with them often

How would you differentiate yourself from those organizations?

Comment author: Vika 16 June 2014 04:11:24AM 13 points [-]

MIRI is focusing on technical research into Friendly AI, and their recent mid-2014 strategic plan explicitly announced that they are leaving the public outreach and strategic research to FHI, CSER and FLI. Compared to FHI and CSER, we are less focused on research and more on outreach, which we are well-placed to do given our strong volunteer base and academic connections. Our location allows us to directly engage Harvard and MIT researchers in our brainstorming and decision-making.

Comment author: lukeprog 18 June 2014 04:45:38AM 13 points [-]

Yeah, just in case this isn't obvious to everyone: I'm excited about FLI and very grateful to Max, Meia, Jaan, Vika, Anthony, and everyone else for all the hard work they're doing over there on the East Coast.

I remember when Max & Meia visited MIRI during Max's Our Mathematical Universe book tour and Max said "I'm thinking of focusing more of my time on x-risk stuff. How can I help?"

I can't remember what I asked for, but it was somewhat more modest than "Please assemble a stellar advisory board and launch a new x-risk organization at MIT." I didn't know I could ask for that! :)

Comment author: Robin 20 June 2014 03:24:43AM 4 points [-]

OK, so it seems like FLI promotes the conclusions of other x-risk organizations, but doesn't do any actual research itself.

Do you think it's not worth questioning the conclusions that other organizations have come to? It seems to me that if there are four x-risk organizations (each with reasonably strong connections to the others), there should be some debate between them.

Comment author: Vika 20 June 2014 06:41:10PM 0 points [-]

What kind of questions would you expect the organizations to disagree about?

Comment author: Robin 21 June 2014 12:29:13AM 2 points [-]

I don't know, but if you ask intelligent people what they think about x-risk related to AI, it's unlikely they'll come to the exact same conclusions that MIRI etc. have.

If you present the ideas of MIRI to intelligent people, some of them will be excited and want to help with donations or volunteering. Others will dismiss you and think you are wrong/crazy.

So to expand on my question... if you find intelligent people who disagree with MIRI on significant things, will you work with them?

Comment author: V_V 14 June 2014 10:08:58AM 8 points [-]

Why is Morgan Freeman listed as a member of the scientific advisory board?

Comment author: Vaniver 14 June 2014 03:24:22PM 13 points [-]

Probably because there's only the one advisory board, and they decided to call it the 'scientific' advisory board because all but two are scientists.

Comment author: Vika 17 June 2014 06:21:48PM 6 points [-]

Morgan Freeman is an experienced science communicator, and he can advise us on science outreach.

Comment author: Benito 13 June 2014 11:28:46AM 7 points [-]

I think what you're doing is marvellous. Your first link is broken.

Comment author: Vika 13 June 2014 01:03:10PM 1 point [-]

Thanks - fixed!

Comment author: ciphergoth 13 June 2014 06:49:19AM 7 points [-]

Thanks for creating FLI! Just one question: how on Earth did you get Morgan Freeman and Alan Alda on board?

Comment author: Vika 13 June 2014 06:28:20PM 12 points [-]

Both of them generally care about science and the future. Also, Max Tegmark had pre-existing connections with them :).

Comment author: Skeptityke 13 June 2014 05:27:35PM 3 points [-]

What would you say is the most effective organization to donate to in order to reduce artificial biology x-risks?

Comment author: Vika 16 June 2014 04:17:33AM 1 point [-]

No single organization comes to mind, though we have a long list of candidates - if any of them seem particularly effective, please let us know!

Comment author: E_Ransom 07 July 2014 08:27:04PM 2 points [-]

At the moment, the "Get Involved" page only mentions donations. I certainly understand the need for donations, but I'm curious: are you considering other ways to involve the interested or passionate? As this is an outreach group, I suspect participation and communication both play a large part in your long term plans. Do you have any ideas for getting people more involved or connected with what you are doing, either through volunteering, discussion, or collaboration?

Thanks for posting this and putting the word out. The website and people involved (as well as those who have commented here) both make me think there is good potential here as an outreach organization.

Comment author: Vika 28 July 2014 09:46:54PM 0 points [-]

Agreed! The "Get Involved" page has been fixed, and now also mentions volunteering. We have a number of locals from the Boston area who are attending our meetings and contributing to our projects, and a few remote volunteers as well.

Another way to get involved is to contribute to our "idea bank" by sending us suggestions for projects, talks, collaborations or research questions. Naturally, we will only be able to work on a fraction of the proposed ideas, but it's great to have a large pool to choose from. Thanks everyone for your contributions so far!

Comment author: Lethalmud 13 June 2014 12:38:57PM 2 points [-]

I cringe at the term x-risk.

Comment author: Kaj_Sotala 13 June 2014 01:38:50PM 8 points [-]

Why?

Comment author: Lethalmud 13 June 2014 07:33:57PM 5 points [-]

It looks childish to me. It looks the same as "x-treme".

http://tvtropes.org/pmwiki/pmwiki.php/Main/XMakesAnythingCool

I guess it's just me, and it's of no real consequence. But it seems to trivialize such a serious subject as existential risk.

Comment author: arromdee 13 June 2014 09:34:09PM 3 points [-]

Since you invoked TV Tropes, there's a TV Tropes fork at https://allthetropes.orain.org/wiki/ . It gets rid of the censorship at TV Tropes and also uses mediawiki, which makes things work better--you have real categories, it is possible to edit sections, etc.

Comment author: Nornagest 13 June 2014 09:42:50PM *  0 points [-]

So one of these finally got some traction, huh? That's mildly encouraging, although a straight fork without the censorship might have long-term problems distinguishing itself -- even with the better wiki software.

Regardless, probably better suited to the open thread.

Comment author: Robin 14 June 2014 08:16:49PM 4 points [-]

I cringe at the term x-risk.

Can you think of another five-letter description? The shorter the term, the easier it is for people to remember, and thus the faster the meme will spread.

Comment author: soreff 14 June 2014 11:15:13PM 5 points [-]

Can one use the backwards-E existence symbol as one of the letters?

Comment author: B_For_Bandana 15 June 2014 12:09:32PM 15 points [-]

If we want ease-of-use, the fact that you typed out "backwards-E existence symbol" instead of "∃" isn't encouraging...

Comment author: Eliezer_Yudkowsky 15 June 2014 10:36:34PM 6 points [-]

It seems intuitively obvious to me that since the risk event is an absence of existence, we should call them ∀-risks.

Comment author: cousin_it 15 June 2014 11:55:20PM 2 points [-]

Yeah, universal risks instead of existential risks would've been a better name, probably too late to change now though.

Comment author: solipsist 16 June 2014 01:27:49AM *  5 points [-]

Alien teenagers sending robots to bitch-slap every human on earth is a universal risk of bitch-slapping, but isn't an existential risk to humanity.

Comment author: army1987 15 June 2014 07:10:16AM 3 points [-]

What matters is not how many people will remember it, it's how many people will remember it and take it seriously.

Comment author: Lumifer 14 June 2014 08:25:19PM 2 points [-]

The shorter the term, the easier it is for people to remember, and thus the faster the meme will spread.

Well...

Is x-risk what happens when x-men do x-rated x-treme stuff?

Comment author: JoshuaFox 24 June 2014 01:26:41PM *  1 point [-]

Vika, although FLI is focused on outreach rather than research, I think there is potential to pursue parallel paths of research to MIRI. They chose the best path they could find, but it is a narrow one, and other research directions could be pursued simultaneously by other organizations. Have you considered that?

Comment author: Vika 10 July 2014 07:13:05PM 1 point [-]

As a young organization, we are trying to avoid narrowing down our scope prematurely. We are more focused on outreach at the moment, but we are also interested in strategic research questions that might complement MIRI's technical research.

Comment author: Dr_Manhattan 13 June 2014 04:19:11PM 1 point [-]

Have some ideas - PM me for email?

Comment author: John_Maxwell_IV 19 June 2014 09:14:22PM 0 points [-]

What are your thoughts on the utility of engaging with the Atlantic article's commenters?