Vaniver comments on Welcome to LessWrong (January 2016) - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (15)
I think that's an answer to "why aren't people supporting MIRI's specific research agenda?", but I see SoerenE's question as being about "is there a good reason not to be worried about AI danger?"
(In the steelman universe, I think people understand that different research priorities stem from different intuitions and skills, and that there's space for everyone to work in the direction that suits them best.)