
eli_sennesh comments on [Link] If we knew about all the ways an Intelligence Explosion could go wrong, would we be able to avoid them?

Post author: the-citizen, 23 November 2014 10:30AM




Comment author: [deleted] 30 November 2014 12:03:33PM -1 points

Answer: clearly, no. If you know all the ways things can go wrong, but don't know how to make them go right, then your knowledge is useless for anything except worrying.

Comment author: Lumifer 30 November 2014 10:15:38PM 1 point

If you know all the ways things can go wrong, but don't know how to make them go right, then your knowledge is useless for anything except worrying.

May I recommend the concept of risk management to you? It's very useful.

Comment author: Gondolinian 30 November 2014 02:46:26PM 1 point

It's generally easier to gain the knowledge of how to make things go right when your research is anchored by potential problems.

Comment author: the-citizen 02 December 2014 08:05:17AM 0 points

Thanks for the comment. I will reply as follows:

  • Knowing how things could go wrong gives useful knowledge about scenarios/pathways to avoid
  • Our knowledge of how to make things go right is not zero

My intention with the article is to draw attention to some broader non-technical difficulties in implementing FAI. One worrying theme in the responses I've gotten is a conflation between knowledge of AGI risk and building an FAI. I think they are separate projects, and that the success of the second relies on comprehensive prior knowledge of the first. Apparently MIRI's approach doesn't really acknowledge the two as separate.