eli_sennesh comments on [Link] If we knew about all the ways an Intelligence Explosion could go wrong, would we be able to avoid them? - Less Wrong Discussion
Answer: clearly, no. If you know all the ways things can go wrong, but don't know how to make them go right, then your knowledge is useless for anything except worrying.
May I recommend the concept of risk management to you? It's very useful.
It's generally easier to gain the knowledge of how to make things go right when your research is anchored by potential problems.
Thanks for the comment. I will reply as follows:
My intention with the article is to draw attention to some broader, non-technical difficulties in implementing FAI. One worrying theme in the responses I've received is a conflation of knowledge of AGI risk with building an FAI. I think they are separate projects, and that the success of the second relies on comprehensive prior knowledge of the first. Apparently MIRI's approach doesn't really acknowledge the two as separate.