Vaniver comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks, and take your time!
I feel like this could be an endless source of confusion and disagreement: if we're trying to discuss what makes airplanes fly or crash, should we assume that engineers have done their best and made every no-brainer change? I'd rather we look for the underlying principles, codify best practices, and come up with lists and tests.
If you are in the business of pointing out potential problems they are not aware of, then yes, because they can be assumed to be aware of the no-brainer issues.
MIRI seeks to point out dangers in AI that aren't the result of gross incompetence or deliberate attempts to weaponise AI: it's banal to point out that those could lead to danger.