
JoshuaFox comments on Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial - Less Wrong Discussion

Post author: ciphergoth, 15 January 2015 04:33PM, 54 points



Comment author: JoshuaFox, 16 January 2015 08:17:47AM, 12 points

I think this is almost as much money as has gone into AI existential risk research across all organizations ever.

Comment author: John_Maxwell_IV, 16 January 2015 10:49:46AM, 11 points

Yep. Check out the MIRI top donors list to put the amount in perspective.

The survey indicates that LW has nontrivial experience with academia: 7% of LW users have a PhD and 9.9% work in academic computer science. I wonder if it'd be useful to create an "awarding effective grants" repository-type thread on LW, to pool thoughts on how grant money can be promoted and awarded so as to effectively achieve research goals. For example, my understanding is that there is a skill called "grantwriting," distinct from research ability, that makes it easier to be awarded grants; I assume one would want to control for grantwriting ability if one wanted to hand out grants with maximum effectiveness. I don't have much practical experience with academia, though. Maybe someone who does could frame the problem better and go ahead and create the thread? (Or alternatively tell me why such a thread would be a bad idea. For example, maybe grantwriting skill consists mostly of knowing what the institutions that typically hand out grants like to see, and FLI is an atypical institution.)

An example of the kind of question we could discuss in such a thread: would it be a good idea for grant proposals to be posted for public commentary on FLI's website, to help them better evaluate grants and spur idea sharing on AI risk reduction in general?

Edit: Here's the thread I created.