Eliezer_Yudkowsky comments on A question of rationality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Michael Vassar is much, much better at the H.R. thing. We still have H.R. problems but could now actually expand at a decent clip given more funding.
Unless you're talking about directly working on the core FAI problem, in which case, yes, we have a huge H.R. problem. The phrasing above might be misleading: it's not that I hired people for A.I. research and they immediately failed, or that I couldn't find anyone above the level of the basic stupid failures. Rather, it takes a lot more than "beyond the basic stupid failures" to avoid clever failures and actually get things done, and the basic stupid failures give you some idea of the baseline level of competence beyond which we need some number of standard deviations.
Yeah, sorry for phrasing it wrong. I guess I should have said
And yes, I did mean that you had trouble finding people to work directly on the core FAI problem.