IlyaShpitser comments on Welcome to LessWrong (January 2016) - Less Wrong Discussion

7 Post author: Clarity 13 January 2016 09:34PM

Comment author: IlyaShpitser 15 January 2016 10:42:53PM 0 points

There is also the question of what this type of research should actually look like.

Comment author: Vaniver 15 January 2016 11:53:16PM 2 points

> There is also the question of what this type of research should actually look like.

I think that's an answer to "why aren't people supporting MIRI's specific research agenda?", but I read SoerenE's question as "is there a good reason not to be worried about AI danger?"

(In the steelman universe, I think people understand that different research priorities will stem from different intuitions and skills, and think that there's space for everyone to work in the direction that suits them best.)