steven0461 comments on How can we ensure that a Friendly AI team will be sane enough? - Less Wrong

Post author: Wei_Dai 16 May 2012 09:24PM




Comment author: steven0461 18 May 2012 08:48:15PM

I mentioned some specific biases that seem especially likely to cause risk for an FAI team. Is that the kind of "understanding" you're talking about, or something else?

I think that falls under my parenthetical comment in the first paragraph. Understanding which rationality-type skills would make this specific project go well is obviously useful, but it would also be valuable to have a general understanding of which rationality-type skills naturally vary together, so that when we use phrases like "more rational" we have a better idea of what they refer to across different contexts.

It seems like there would probably be better ways to spend the extra resources if we had them, though.

Maybe? Note that if people like Holden have concerns about whether FAI is too dangerous, that might make them more likely to provide resources toward a separate FAI feasibility team than toward, say, a better FAI team, so it's not necessarily a fixed heap of resources that we're distributing.