
jimrandomh comments on [link] FLI's recommended project grants for AI safety research announced

17 points | Post author: Kaj_Sotala, 01 July 2015 03:27PM


Comments (20)


Comment author: jimrandomh 01 July 2015 05:27:57PM 3 points

I'm disappointed that my group's proposal to work on AI containment wasn't funded, and no other AI containment work was funded, either. Still, some of the things that were funded do look promising. I wrote a bit about what we proposed and the experience of the process here.

Comment author: Kaj_Sotala 01 July 2015 06:09:30PM 2 points

When considering possible failure modes for this proposal, one possibility I didn’t consider was that original research portions would look too much like summaries of existing work.

Oh man, that sucks. :(

Comment author: shminux 01 July 2015 07:54:28PM 1 point

I am not an expert (not even an amateur) in this area, but I wonder whether AI containment work would be futile before corrigibility is figured out, and superfluous once it is. How large is the window of AI intelligence where a system is not yet super-human (at which point it would be too late to contain), but is already too smart to be contained by standard means?

Comment author: blogospheroid 02 July 2015 04:57:20AM 0 points

I feel for you. I agree with Salvatier's point on the linked page. Why don't you try talking to FHI directly? They should be able to direct some funding your way.