Wei_Dai comments on [link] FLI's recommended project grants for AI safety research announced - Less Wrong Discussion

17 Post author: Kaj_Sotala 01 July 2015 03:27PM

Comment author: diegocaleiro 01 July 2015 10:50:12PM *  5 points

That is false. Bostrom thought of FAI before Eliezer. Paul thought of the Crypto. Bostrom and Armstrong have done more work on orthogonality. Bostrom/Hanson came up with most of the relevant stuff in multipolar scenarios. Sandberg/EY were involved in the oracle/tool/sovereign distinction.

TDT, which is EY's work, does not show up prominently in Superintelligence. CEV, of course, does, and is EY's work. Lots of ideas in Superintelligence are causally connected to Yudkowsky, but no doubt there is more value from Bostrom there than from Yudkowsky.

Bostrom got $1,500,000 and MIRI, through Benja, got $250,000. This seems justified conditional on what has been produced by FHI and MIRI in the past.

Notice also that CFAR, through Anna, has received resources that will also be very useful to MIRI, since they will turn potential MIRI researchers into CFAR alumni.

Comment author: Wei_Dai 02 July 2015 09:34:57AM *  15 points

Bostrom thought of FAI before Eliezer.

To be completely fair, although Nick Bostrom realized the importance of the problem before Eliezer, Eliezer actually did more work on it and published his work earlier. The earliest publication I can find from Nick on the topic is a short 2003 paper that basically just describes the problem, by which time Eliezer had already published Creating Friendly AI 1.0 (which Nick cites).