RobbBB comments on Engaging First Introductions to AI Risk - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I agree that should be on the list. It's a hard question to answer without a lot of time and technical detail, though, which is part of why I went with making the problem seem more vivid and immediate by indirect means. Short of internalizing Cognitive Biases Affecting Judgment of Global Risks or a lot of hard sci-fi, I'm not sure there's any good way to short-circuit people's intuition that FAI doesn't feel like an imminent risk.
'We really don't know, but it wouldn't be a huge surprise if it happened this century, and it would be surprising if it didn't happen in the next 300 years' is, I think, a solid mainstream position. For the purposes of the Core List, it might be better handled as a quick 2-3 paragraph overview in a longer (say, 3-page) article answering 'Why is FAI fiercely urgent?'; Luke's When Will AI Be Created? is, I think, a good choice for the Further Reading section.