Comment author: Vika 14 March 2015 05:49:41PM 5 points [-]

A very fitting ending. It would have been nice to see Hermione cast the true Patronus, though!

Comment author: jimrandomh 20 February 2015 11:54:11PM 17 points [-]

"There's something that would make you happier than that," Harry said, his voice breaking again. "There has to be."

Muggle research in the 2010s has revealed much about what actually makes people happy, and how often people are mistaken about it. The best way to find out is with one of those mood-tracking cell phone apps, which eliminate the biases of memory. Quirrell doesn't have that, but as an approximation, I searched the PDF for the word "smile", which appears 310 times in chapters 1-106, and the word "enjoy", which appears 32 times. What did I find?

“Do you know,” the Defense Professor said in soft reflective tones, “there are those who have tried to soften my darker moods, and those who have indeed participated in brightening my day, but you are the first person ever to succeed in doing it deliberately?”

Interacting with Harry makes Quirrell happy. More so than killing idiots. More so than teaching Battle Magic. Killing him would be a grave mistake.
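For anyone who wants to replicate the tally, a search like the one described above is easy to reproduce over the book's plain text. A minimal sketch (the sample string below is a placeholder; you would substitute text extracted from the actual PDF):

```python
import re

def word_counts(text, stems):
    """Count case-insensitive occurrences of each word stem,
    matching inflected forms like 'smiled' and 'enjoys' too."""
    counts = {}
    for stem in stems:
        # \b anchors the match at a word start; \w* absorbs suffixes
        pattern = r'\b' + re.escape(stem) + r'\w*'
        counts[stem] = len(re.findall(pattern, text, re.IGNORECASE))
    return counts

sample = "He smiled. She did not smile, but she enjoyed the joke."
print(word_counts(sample, ["smile", "enjoy"]))  # {'smile': 2, 'enjoy': 1}
```

Note that a stem search like this slightly overcounts relative to searching for the exact word, so the totals won't match a PDF viewer's "find" count exactly.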

Comment author: Vika 21 February 2015 12:40:58AM 9 points [-]

The book is mostly from Harry's perspective, so I would expect some selection bias in a search like this: most of Quirrell's described interactions are with Harry, since Harry is the protagonist. I agree with your conclusion though.

Comment author: Lumifer 21 January 2015 05:39:04PM 2 points [-]

Researchers are cheap to support

Humanities researchers. Supporting, say, a high-energy experimental physicist can get quite expensive X-)

Comment author: Vika 21 January 2015 08:49:47PM *  3 points [-]

Researchers outside the physical sciences tend to be inexpensive in general - e.g. data scientists / statisticians mostly need access to computing power, which is fairly cheap these days. (Though social science experiments can also be costly.)

Comment author: Raemon 17 January 2015 06:24:01PM 0 points [-]

Huh. Do we know why he is not in the list of attendees?

Comment author: Vika 21 January 2015 05:05:35AM 0 points [-]

He attended as a guest, so he is not on the official list.

Comment author: ciphergoth 17 January 2015 08:40:35AM 11 points [-]

Vika, thank you and all at FLI so much for all you've done recently. Three amazing announcements from FLI on each other's heels, each a gigantic contribution to increasing the chances that we'll all see a better future. Really extraordinary work.

Comment author: Vika 17 January 2015 08:33:51PM 7 points [-]

Thanks Paul! We are super excited about how everything is working out (except the alarmist media coverage full of Terminators, but that was likely unavoidable).

Comment author: John_Maxwell_IV 16 January 2015 10:38:10AM *  4 points [-]

My guesses: he chose to donate to FLI because their star-studded advisory board makes them a good public face of the AI safety movement. Yes, they are a relatively young organization, but it looks like they did a good job putting the research priorities letter together (I'm counting 3685 signatures, which is quite impressive... does anyone know how they promoted it?) Also, since they will only be distributing grants, not spending the money themselves, organizational track record is a bit less important. (And they may rely heavily on folks from MIRI/FHI/etc. to figure out how to award the money anyway.) The money will be distributed as grants because grant money is the main thing that motivates researchers, and Musk wants to change the priorities of the AI research community in general, not just add a few new AI safety researchers on the margin. And holding a competition for grants means you can gather more proposals from a wider variety of people. (In particular, people who currently hold prestigious academic jobs and don't want to leave them for a fledgling new institute.)

Comment author: Vika 16 January 2015 06:22:06PM 3 points [-]

Most of the signatures came in after Elon Musk tweeted about the open letter.

Comment author: Sean_o_h 16 January 2015 10:44:10AM 16 points [-]

An FLI person would be best placed to answer. However, I believe the proposal came from Max Tegmark and/or his team, and I fully support it as an excellent way of making progress on AI safety.

(i) All of the above organisations are now in a position to develop specific relevant research plans and apply to get them funded, rather than the money going to one organisation over another. (ii) Given the number of "non-risk" AI researchers at the conference, and the many more who signed the letter, this is a wonderful opportunity to follow up by encouraging them to get involved with safety research and apply. This seems like something that really needs to happen at this stage.

There will be a lot more excellent projects submitted for this than the funding will cover, and this will be a great way to demonstrate that there are a lot of tractable problems and work that can be undertaken immediately in this area. This should hopefully both attract more AI researchers to the field and additional funders who see how timely and worthy of funding this work is.

Consider it seed funding for the whole field of AI safety!

Sean (CSER)

Comment author: Vika 16 January 2015 06:20:52PM 6 points [-]

Seconded (as an FLI person)

Comment author: Gondolinian 15 January 2015 11:53:56PM 2 points [-]

Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday February 15, 2015

Did you mean to write January instead of February?

Comment author: Vika 16 January 2015 06:13:10PM 1 point [-]

It was a typo on the FLI website, which has now been corrected to January.

Comment author: Baughn 16 January 2015 12:33:33PM 0 points [-]

What I'd like to see are videos. Does anyone know if the presentations were recorded?

Comment author: Vika 16 January 2015 06:07:26PM 5 points [-]

The presentations were not recorded, due to the Chatham House Rule.

Comment author: JoshuaFox 08 January 2015 05:09:35PM *  5 points [-]

I'd rather be a hero than a sidekick. But my small contribution to mitigating AI risk has generally been in helping MIRI in whatever way seemed most valuable, rather than inventing my independent way to global utility maximization.

So, what does that make me? A cooperative small-time hero, like one of those obscure minor superhero characters in the comics who occasionally steps up to help the famous ones?

Comment author: Vika 09 January 2015 03:26:38AM 6 points [-]

I think there is such a thing as a hero-in-training. My work with FLI has mostly been in a supporting role so far, but I view myself as an apprentice rather than a sidekick, and I would generally like to be a hero.
