Stuart_Armstrong comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thank you for your answer. I don't think the methods you describe are much good for predictions. On the other hand, few methods are much good for predictions anyway.
I've already taken a few online AI courses to get some background; emotionally, this has made me feel that AI is likely to be somewhat less powerful than anticipated, but that its motivations are more certain to be more alien than I'd thought. Not sure how much weight to put on these intuitions.