the-citizen comments on [Link] If we knew about all the ways an Intelligence Explosion could go wrong, would we be able to avoid them? - Less Wrong Discussion

1 Post author: the-citizen 23 November 2014 10:30AM

Comment author: the-citizen 02 December 2014 08:05:17AM 0 points [-]

Thanks for your comment. I'll reply as follows:

  • Knowing how things could go wrong gives us useful knowledge about scenarios/pathways to avoid
  • Our knowledge of how to make things go right is not zero

My intention with the article is to draw attention to some broader non-technical difficulties in implementing FAI. One worrying theme in the responses I've received is a conflation of knowledge of AGI risk with the ability to build an FAI. I think they are separate projects, and that the success of the second relies on comprehensive prior knowledge of the first. Apparently MIRI's approach doesn't really acknowledge the two as separate.