[link] FLI's recommended project grants for AI safety research announced
http://futureoflife.org/misc/2015awardees
You may recognize several familiar names there, such as Paul Christiano, Benja Fallenstein, Katja Grace, Nick Bostrom, Anna Salamon, Jacob Steinhardt, Stuart Russell... and me. (The $20,000 for my project was the smallest grant they gave out, but hey, I'm definitely not complaining. ^^)
Comments (20)
Anyone know more about this proposal from IDSIA?
I did some searching but Google doesn't seem to know anything about this "EXPAI".
I didn't find anything on EXPAI either, but there's the PI's list of previous publications. At least his Bounded Seed-AGI paper sounds somewhat related.
I saw this news and came back just to say congrats Kaj! I'm looking forward to reading about your thesis work.
Thanks! :)
I'm disappointed that my group's proposal to work on AI containment wasn't funded, and no other AI containment work was funded, either. Still, some of the things that were funded do look promising. I wrote a bit about what we proposed and the experience of the process here.
I am not an expert (not even an amateur) in the area, but I wonder whether AI containment work would be futile without corrigibility figured out, and superfluous once it is. What is the window of AI intelligence where the AI is already too smart to be contained by standard means, but not yet super-human (at which point it would be too late to contain)?
Oh man, that sucks. :(
I feel for you. I agree with salvatier's point in the linked page. Why don't you try to talk to FHI directly? They should be able to get some funding your way.
I'm surprised and pleased by the diversity of the research space they are exploring. Specifically it's great to see proposals investigating robustness for machine learning and the applications of mechanism design to AI dynamics.