NancyLebovitz comments on How can we ensure that a Friendly AI team will be sane enough? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Would anyone care to compare the risks from lack of rationality to the risks from making as good an effort as possible, but just plain being wrong?
Are they relevantly different? Actually, now that I think about it, 'lack of rationality' seems like it should be a subset of 'trying hard and failing'.
I think there's a difference between falling prey to one of the usual biases and just not having enough information.
Of course, but one can lack information and conclude "okay, I don't have enough information," or one may fail to reach that conclusion due to overconfidence, for example.