Future of Life Institute is hiring

16 Vika 17 November 2015 12:34AM

I am a co-founder of the Future of Life Institute based in Boston, and we are looking to fill two job openings that some LessWrongers might be interested in. We are a mostly volunteer-run organization working to reduce catastrophic and existential risks, and increase the chances of a positive future for humanity. Please consider applying and pass this posting along to anyone you think would be a good fit!

PROJECT COORDINATOR

Technology has given life the opportunity to flourish like never before - or to self-destruct. The Future of Life Institute is a rapidly growing non-profit organization striving for the former outcome. We are fortunate to be supported by an inspiring group of people, including Elon Musk, Jaan Tallinn and Stephen Hawking, and you may have heard of our recent efforts to keep artificial intelligence beneficial.

You are idealistic, hard-working and well-organized, and want to help our core team carry out a broad range of projects, from organizing events to coordinating media outreach. Living in the greater Boston area is a major advantage, but not an absolute requirement.

If you are excited about this opportunity, then please send an email to jobs@futureoflife.org with your CV and a brief statement of why you want to work with us. The title of your email must be 'Project coordinator'.

NEWS WEBSITE EDITOR

There is currently huge public interest in the question of how upcoming technology (especially artificial intelligence) may transform our world, and what should be done to seize opportunities and reduce risks.

You are idealistic and ambitious, and want to lead our effort to transform our fledgling news site into the number one destination for anyone seeking up-to-date and in-depth information on this topic, and anybody eager to join what is emerging as one of the most important conversations of our time.

You love writing and have the know-how and drive needed to grow and promote a website. You are self-motivated and enjoy working independently rather than being closely mentored. You are passionate about this topic, and look forward to the opportunity to engage with our second-to-none global network of experts and use it to generate ideas and add value to the site. You look forward to developing and executing your vision for the website using the resources at your disposal, which include both access to experts and funds for commissioning articles, improving the website user interface, etc. You look forward to making use of these resources and making things happen rather than waiting for others to take the initiative.

If you are excited about this opportunity, then please send an email to jobs@futureoflife.org with your CV and answers to these questions:

  • Briefly, what is your vision for our site? How would you improve it?
  • What other site(s) (please provide URLs) have attributes that you'd like to emulate?
  • How would you generate the required content?
  • How would you increase traffic to the site, and what do you view as realistic traffic goals for January 2016 and January 2017?
  • What budget do you need to succeed, not including your own salary?
  • What past experience do you have with writing and/or website management? Please include a selection of URLs that showcase your work.

The title of your application email must be 'Editor'. You can live anywhere in the world. A science background is a major advantage, but not a strict requirement.

Comment author: jsteinhardt 10 November 2015 04:04:05AM *  2 points [-]

I know there are many papers showing that neural nets learn features that can, in some regimes, be given nice interpretations. However, in all cases of which I am aware where these representations have been thoroughly analyzed, they seem to fail obvious tests of naturality, which would include things like:

(1) Good performance on different data sets in the same domain.

(2) Good transference to novel domains.

(3) Robustness to visually imperceptible perturbations to the input image.

Moreover, ANNs almost fundamentally cannot learn natural representations because they fail what I would call the "canonicality" test:

(4) Replacing the learned features with a random invertible linear transformation of the learned features should degrade performance.

Note that the reason for (4) is that if you want to interpret an individual hidden unit in an ANN as being meaningful, then it can't be the case that a random linear combination of lots of units is equally meaningful (since a random linear combination of e.g. cats and dogs and 100 other things is not going to have much meaning).
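The canonicality test (4) can be illustrated with a toy sketch (my own illustration, not from the comment; the "features" and concepts are made up): a random invertible linear map loses no information, so a downstream linear classifier is unaffected, yet individual units stop corresponding to single concepts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "learned features": each hidden unit fires for exactly one concept,
# so individual units are easy to interpret.
# Rows = examples (cat, dog, car); columns = hidden units.
features = np.array([
    [1.0, 0.0, 0.0],   # cat -> unit 0
    [0.0, 1.0, 0.0],   # dog -> unit 1
    [0.0, 0.0, 1.0],   # car -> unit 2
])

# A random linear transformation of the feature space
# (invertible with probability 1).
A = rng.normal(size=(3, 3))
assert abs(np.linalg.det(A)) > 1e-9
mixed = features @ A.T

# No information is lost: a downstream linear layer can undo the mix,
# so a classifier's accuracy would be unchanged...
recovered = mixed @ np.linalg.inv(A).T
assert np.allclose(recovered, features)

# ...but each individual unit now responds to (generically) every concept,
# so unit-level interpretations like "unit 0 = cat" no longer hold.
responding = np.abs(mixed) > 1e-4
print("concepts each mixed unit fires for:", responding.sum(axis=0))
```

If performance is unchanged under such a transformation, then any meaning attached to single units was an artifact of the basis, which is the point of criterion (4).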

That was a bit long-winded, but my question is whether the linked paper or any other papers provide representations that you think don't fail any of (1)-(4).

Comment author: Vika 14 November 2015 12:30:01AM 0 points [-]

Thanks for the handy list of criteria. I'm not sure how (3) would apply to a recurrent neural net for language modeling, since it's difficult to make an imperceptible perturbation of text (as opposed to an image).

Regarding (2): given the impressive performance of RNNs in different text domains (English, Wikipedia markup, LaTeX code, etc.), it would be interesting to see how an RNN trained on English text would perform on LaTeX code, for example. I would expect it to carry over some representations that are common to the training and test data, like the aforementioned brackets and quotes.
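A crude version of that transfer experiment can be sketched with a character-level bigram model standing in for the RNN (my own stand-in; the training strings and the smoothing scheme are made up for illustration): train on one domain, then compare per-character cross-entropy in-domain versus out-of-domain.

```python
import math
from collections import Counter

def train_bigram(text, alphabet):
    # Add-one (Laplace) smoothed character bigram model.
    counts = Counter(zip(text, text[1:]))
    context = Counter(text[:-1])
    V = len(alphabet)
    def logprob(a, b):
        return math.log((counts[(a, b)] + 1) / (context[a] + V))
    return logprob

def cross_entropy(logprob, text):
    # Average negative log-likelihood per character, in nats.
    lps = [logprob(a, b) for a, b in zip(text, text[1:])]
    return -sum(lps) / len(lps)

english = "the cat sat on the mat and the dog sat on the log " * 20
latex = r"\begin{equation} x = \frac{a}{b} \end{equation} " * 20

alphabet = set(english) | set(latex)
model = train_bigram(english, alphabet)

print("in-domain cross-entropy: ", cross_entropy(model, english))
print("out-of-domain cross-entropy:", cross_entropy(model, latex))
```

A real RNN would presumably show a smaller gap than this bigram model on shared structure like brackets and quotes, which is what criterion (2) would measure.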

Comment author: jsteinhardt 03 November 2015 05:07:33PM 3 points [-]

Thanks for writing this; a couple quick thoughts:

"For example, it turns out that a learning algorithm tasked with some relatively simple tasks, such as determining whether or not English sentences are valid, will automatically build up an internal representation of the world which captures many of the regularities of the world – as a pure side effect of carrying out its task."

I think I've yet to see a paper that convincingly supports the claim that neural nets are learning natural representations of the world. For some papers that refute this claim, see e.g.

http://arxiv.org/abs/1312.6199
http://arxiv.org/abs/1412.6572

I think the Degrees of Freedom thesis is a good statement of one of the potential problems. Since it's essentially making a claim about whether a certain very complex statistical problem is identifiable, I think it's very hard to know whether it's true or not without either some serious technical analysis or some serious empirical research --- which is a reason to do that research, because if the thesis is true then that has some worrisome implications about AI safety.

Comment author: Vika 09 November 2015 01:48:57AM 0 points [-]

Here's an example of recurrent neural nets learning intuitive / interpretable representations of some basic aspects of text, like keeping track of quotes and brackets: http://arxiv.org/abs/1506.02078
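The quote-tracking cell reported in that paper behaves roughly like a one-bit state machine; a hand-coded stand-in (my own illustration, not the paper's code) makes the interpretable behavior concrete:

```python
def quote_tracker(text):
    """Emit 1 while inside double quotes and 0 outside, mimicking the
    interpretable 'inside-quote' cell reported in the linked paper."""
    inside = False
    states = []
    for ch in text:
        if ch == '"':
            inside = not inside  # toggle on each quote character
        states.append(1 if inside else 0)
    return states

print(quote_tracker('he said "hi" to her'))
```

The point of the paper is that an RNN trained only to predict the next character discovers a hidden unit whose activation tracks this state on its own, without being told to.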

Comment author: fowlertm 04 October 2015 04:07:43PM 4 points [-]

I think there'd be value in just listing graduate programs in philosophy, economics, etc., by how relevant the research already being done there is to x-risk, AI safety, or rationality. Or by whether or not they contain faculty interested in those topics.

For example, if I were looking to enter a philosophy graduate program it might take me quite some time to realize that Carnegie Mellon probably has the best program for people interested in LW-style reasoning about something like epistemology.

Comment author: Vika 07 October 2015 10:34:49PM *  3 points [-]

I think it depends more on specific advisors than on the university. If you're interested in doing AI safety research in grad school, getting in touch with professors who got FLI grants might be a good idea.

Comment author: Curiouskid 07 October 2015 05:23:54AM *  5 points [-]

I have some questions about step 1 (find a flexible program):

My understanding is that there are two sources of inflexibility for PhD programs: A. Requirements for your funding source (e.g. TA-ing) and B. Vague requirements of the program (e.g. publish X papers). I'm excluding Quals, since you just have to pass a test and then you're done.

Elsewhere in the comments, someone wrote:

"Grad school is free. At most good PhD programs in the US, if you get in then they will offer you funding which covers tuition and pays you a stipend on the order of $25K per year. In return, you may have to do some work as a TA or in a professor's lab."

So, there are two types of jobs you can have to fund your PhD: TA-ing and being an RA (research assistant) to a professor. How time-consuming is TA-ing generally? I imagine it varies based on the school/class. How do you find a TA-ing gig that isn't time-consuming? Can you generally TA during your entire PhD? I vaguely recall a university that only lets you TA for a limited number of semesters.

You could also fund your PhD by getting a fellowship. Philip Guo has written about applying for the NSF, NDSEG, and Hertz fellowships. I'm poorly calibrated about how hard it is to get one of these fellowships. I've also heard that certain schools will offer fellowships to some of their students. How hard are these to get relative to the fellowships mentioned above? There are ~33K science PhDs awarded each year. I wonder what distinguishes the ~4% who get fellowships from the median science PhD student.

Let's say that you were really frugal and/or financially independent already. My impression is that many schools would still require you to TA in order to have your tuition waived.

Let’s assume you have the financial aspect of your PhD taken care of (e.g. you have an easy/enjoyable TA job). What requirements are there other than passing quals? Could I read interesting books indefinitely until I find something interesting to publish?

I'd like to believe that achieving step 1 is possible, but as eli_sennesh pointed out, this is hard.

Comment author: Vika 07 October 2015 10:30:30PM 4 points [-]

How much TAing is allowed or required depends on your field and department. I'm in a statistics department that expects PhD students to TA every semester (except their first and final year). It has taken me some effort to weasel out of around half of the teaching appointments, since I find teaching (especially grading) quite time-consuming, while industry internships both pay better and generate research experience. On the other hand, people I know from the CS department only have to teach 1-2 semesters during their entire PhD.

Comment author: Manfred 03 April 2015 03:10:28PM 4 points [-]

Best way to find out is to ask the LWer Vika, who I'm pretty sure was the driving force (Max Tegmark probably had something to do with it too). I think their niche is to be a more celebrity-centered face of existential risk reduction (compared to FHI), but they've also made some moves to try to be a host of discussions, and this grant really means that now they have to play funding agency.

Comment author: Vika 06 April 2015 10:52:44PM 5 points [-]

I'm flattered, but I have to say that Max was the driving force here. The real reason FLI got started was that Max finished his book in the beginning of 2014, and didn't want to give that extra time back to his grad students ;).

MIRI / FHI / CSER are research organizations that have full-time research and admin staff. FLI is more of an outreach and meta-research organization, and is largely volunteer-run. We think of ourselves as sister organizations, and coordinate a fair bit. Most of the FLI founders are CFAR alumni, and many of the volunteers are LWers.

Comment author: Gunnar_Zarncke 27 March 2015 11:13:16PM 6 points [-]

Two data points: I basically did this in two very difficult life situations and in both cases it worked very well.

1) During a relationship crisis I imagined the worst thing that could happen and what would follow from that. That allowed me to act instead of staying passive and depressed from a lack of perceived options. The options that sprang to life, together with an altered view of the relationship, led to a sudden emotional high that quite surprised me.

2) During a period of unemployment I also imagined the worst thing that could happen and realized that I could live with it. That gave me some calm back (though acting on it turned out to be unnecessary, as a previously promised job actually materialized).

Comment author: Vika 28 March 2015 11:49:44PM 2 points [-]

Did you imagine a realistic or unrealistic worst case in these situations?

Negative visualization, radical acceptance and stoicism

17 Vika 27 March 2015 03:51AM

In anxious, frustrating or aversive situations, I find it helpful to visualize the worst case that I fear might happen, and try to accept it. I call this “radical acceptance”, since the imagined worst case is usually an unrealistic scenario that would be extremely unlikely to happen, e.g. “suppose I get absolutely nothing done in the next month”. This is essentially the negative visualization component of stoicism. There are many benefits to visualizing the worst case:

  • Feeling better about the present situation by contrast.
  • Turning attention to the good things that would still be in my life even if everything went wrong in one particular domain.
  • Weakening anxiety using humor (by imagining an exaggerated “doomsday” scenario).
  • Being more prepared for failure, and making contingency plans (pre-hindsight).
  • Helping make more accurate predictions about the future by reducing the “X isn’t allowed to happen” effect (or, as Anna Salamon once put it, “putting X into the realm of the thinkable”).
  • Reducing the effect of ugh fields / aversions, which thrive on the “X isn’t allowed to happen” flinch.
  • Weakening unhelpful identities like “person who is always productive” or “person who doesn’t make stupid mistakes”.

Let’s say I have an aversion around meetings with my advisor, because I expect him to be disappointed with my research progress. When I notice myself worrying about the next meeting or finding excuses to postpone it so that I have more time to make progress, I can imagine the worst imaginable outcome a meeting with my advisor could have - perhaps he might yell at me or even decide to expel me from grad school (neither of these have actually happened so far). If the scenario is starting to sound silly, that’s a good sign. I can then imagine how this plays out in great detail, from the disappointed faces and words of the rest of the department to the official letter of dismissal in my hands, and consider what I might do in that case, like applying for industry jobs. While building up these layers of detail in my mind, I breathe deeply, which I associate with meditative acceptance of reality. (I use the word “acceptance” to mean “acknowledgement” rather than “resignation”.)

I am trying to use this technique more often, both in the regular and situational sense. A good default time is my daily meditation practice. I might also set up a trigger-action habit of the form “if I notice myself repeatedly worrying about something, visualize that thing (or an exaggerated version of it) happening, and try to accept it”. Some issues have more natural triggers than others - while worrying tends to call attention to itself, aversions often manifest as a quick flinch away from a thought, so it’s better to find a trigger among the actions that are often caused by an aversion, e.g. procrastination. A trigger for a potentially unhelpful identity could be a thought like “I’m not good at X, but I should be”. A particular issue can simultaneously have associated worries (e.g. “will I be productive enough?”), aversions (e.g. towards working on the project) and identities (“productive person”), so there is likely to be something there that makes a good trigger. Visualizing myself getting nothing done for a month can help with all of these to some degree.

System 1 is good at imagining scary things - why not use this as a tool?

Cross-posted

Comment author: nbouscal 19 March 2015 04:39:34PM 6 points [-]

Is there, or will there be, an RSS feed for this? I didn't see one anywhere.

Comment author: Vika 20 March 2015 03:34:08AM 5 points [-]

Apologies - the RSS button is missing from the site for some reason, I'll ask our webmaster to put it back. Here is the RSS link: http://futureoflife.org/rss.php

Future of Life Institute existential risk news site

21 Vika 19 March 2015 02:33PM

I'm excited to announce that the Future of Life Institute has just launched an existential risk news site!

The site will have regular articles on topics related to existential risk, written by journalists, and a community blog written by existential risk researchers from around the world as well as FLI volunteers. Enjoy!
