Comment author: Manfred 03 April 2015 03:10:28PM 4 points [-]

Best way to find out is to ask the LWer Vika, who I'm pretty sure was the driving force (Max Tegmark probably had something to do with it too). I think their niche is to be a more celebrity-centered face of existential risk reduction (compared to FHI), but they've also made some moves to try to be a host of discussions, and this grant really means that now they have to play funding agency.

Comment author: Vika 06 April 2015 10:52:44PM 5 points [-]

I'm flattered, but I have to say that Max was the driving force here. The real reason FLI got started was that Max finished his book in the beginning of 2014, and didn't want to give that extra time back to his grad students ;).

MIRI / FHI / CSER are research organizations that have full-time research and admin staff. FLI is more of an outreach and meta-research organization, and is largely volunteer-run. We think of ourselves as sister organizations, and coordinate a fair bit. Most of the FLI founders are CFAR alumni, and many of the volunteers are LWers.

Comment author: Gunnar_Zarncke 27 March 2015 11:13:16PM 6 points [-]

Two data points: I basically did this in two very difficult life situations and in both cases it worked very well.

1) During a relationship crisis I imagined the worst thing that could happen and what would follow from that. That allowed me to act instead of staying passive and depressed from a perceived lack of options. The options that sprang to life, together with an altered view of the relationship, led to a sudden surge of hope that I was quite surprised by.

2) During a period of unemployment I also imagined the worst thing that could happen and realized that I could live with it. That gave me some calm back (though acting on it turned out to be unnecessary, as a previously promised job actually materialized).

Comment author: Vika 28 March 2015 11:49:44PM 2 points [-]

Did you imagine a realistic or unrealistic worst case in these situations?

Comment author: nbouscal 19 March 2015 04:39:34PM 6 points [-]

Is there, or will there be, an RSS feed for this? I didn't see one anywhere.

Comment author: Vika 20 March 2015 03:34:08AM 5 points [-]

Apologies - the RSS button is missing from the site for some reason, I'll ask our webmaster to put it back. Here is the RSS link: http://futureoflife.org/rss.php

Comment author: Vika 14 March 2015 05:49:41PM 5 points [-]

A very fitting ending. It would have been nice to see Hermione cast the true Patronus, though!

Comment author: jimrandomh 20 February 2015 11:54:11PM 17 points [-]

"There's something that would make you happier than that," Harry said, his voice breaking again. "There has to be."

Muggle research in the 2010s has revealed much about what actually makes people happy, and how often people are deceived. The best way to find out is with one of those mood-tracking cell phone apps, which eliminate the biases of memory. Quirrell doesn't have that, but as an approximation, I searched the PDF for the word "smile", which appears 310 times in chapters 1-106, and the word "enjoy", which appears 32 times. What did I find?
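As a rough sketch, the kind of count described above (a Ctrl-F-style search over the text, which also matches "smiled" and "smiles" as substrings) can be reproduced on any plain-text export of the book. The filename here is hypothetical:

```python
def ctrl_f_count(text: str, term: str) -> int:
    """Case-insensitive substring count, like a PDF viewer's search box."""
    return text.lower().count(term.lower())

# Tiny illustrative sample; for the real count, read a text dump of the book,
# e.g. text = open("hpmor_ch1-106.txt").read()  (hypothetical file)
sample = "She smiled, and he smiled back. Enjoy the smiles."
print(ctrl_f_count(sample, "smile"))  # 3
print(ctrl_f_count(sample, "enjoy"))  # 1
```

Note that a substring search overcounts relative to whole-word matches; a `re.findall(r"\bsmile\b", ...)` search would count only the bare word.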

“Do you know,” the Defense Professor said in soft reflective tones, “there are those who have tried to soften my darker moods, and those who have indeed participated in brightening my day, but you are the first person ever to succeed in doing it deliberately?”

Interacting with Harry makes Quirrell happy. More so than killing idiots. More so than teaching Battle Magic. Killing him would be a grave mistake.

Comment author: Vika 21 February 2015 12:40:58AM 9 points [-]

The book is mostly written from Harry's perspective, so I would expect some selection bias in a search for interactions that make Quirrell happy: most of the interactions described involve Harry as the protagonist. I agree with your conclusion, though.

Comment author: Lumifer 21 January 2015 05:39:04PM 2 points [-]

Researchers are cheap to support

Humanities researchers. Supporting, say, a high-energy experimental physicist can get quite expensive X-)

Comment author: Vika 21 January 2015 08:49:47PM *  3 points [-]

Researchers outside the physical sciences tend to be inexpensive in general - e.g. data scientists / statisticians mostly need access to computing power, which is fairly cheap these days. (Though social science experiments can also be costly.)

Comment author: Raemon 17 January 2015 06:24:01PM 0 points [-]

Huh. Do we know why he is not in the list of attendees?

Comment author: Vika 21 January 2015 05:05:35AM 0 points [-]

He attended as a guest, so he is not on the official list.

Comment author: ciphergoth 17 January 2015 08:40:35AM 11 points [-]

Vika, thank you and all at FLI so much for all you've done recently. Three amazing announcements from FLI on each other's heels, each a gigantic contribution to increasing the chances that we'll all see a better future. Really extraordinary work.

Comment author: Vika 17 January 2015 08:33:51PM 7 points [-]

Thanks Paul! We are super excited about how everything is working out (except the alarmist media coverage full of Terminators, but that was likely unavoidable).

Comment author: John_Maxwell_IV 16 January 2015 10:38:10AM *  4 points [-]

My guesses: he chose to donate to FLI because their star-studded advisory board makes them a good public face of the AI safety movement. Yes, they are a relatively young organization, but it looks like they did a good job putting the research priorities letter together (I'm counting 3685 signatures, which is quite impressive... does anyone know how they promoted it?) Also, since they will only be distributing grants, not spending the money themselves, organizational track record is a bit less important. (And they may rely heavily on folks from MIRI/FHI/etc. to figure out how to award the money anyway.) The money will be distributed as grants because grant money is the main thing that motivates researchers, and Musk wants to change the priorities of the AI research community in general, not just add a few new AI safety researchers on the margin. And holding a competition for grants means you can gather more proposals from a wider variety of people. (In particular, people who currently hold prestigious academic jobs and don't want to leave them for a fledgling new institute.)

Comment author: Vika 16 January 2015 06:22:06PM 3 points [-]

Most of the signatures came in after Elon Musk tweeted about the open letter.

Comment author: Sean_o_h 16 January 2015 10:44:10AM 16 points [-]

An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.

(i) All of the above organisations are now in a position to develop specific relevant research plans and apply to get them funded, rather than the money going to one organisation over another. (ii) Given the number of "non-risk" AI researchers at the conference, and many more signing the letter, this is a wonderful opportunity to follow up by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.

There will be a lot more excellent projects submitted for this than the funding will cover, and this will be a great way to demonstrate that there are many tractable problems and immediately undertakable work to be done in this area. This should hopefully both attract more AI researchers to the field and attract additional funders who see how timely and worthy of funding this work is.

Consider it seed funding for the whole field of AI safety!

Sean (CSER)

Comment author: Vika 16 January 2015 06:20:52PM 6 points [-]

Seconded (as an FLI person).
