https://www.reddit.com/r/LessWrong/comments/2icm8m/if_we_knew_about_all_the_ways_an_intelligence/
I submitted this a while back to the LessWrong subreddit, but it occurs to me now that most LWers probably don't actually check the sub. So here it is again, in case anyone who's interested didn't see it.
Answer: clearly, no. If you know all the ways things can go wrong, but don't know how to make them go right, then your knowledge is useless for anything except worrying.
Thanks for the comment. I'll reply as follows:
My intention with the article is to draw attention to some broader, non-technical difficulties in implementing FAI. One worrying theme in the responses I've gotten is a conflation between knowledge of AGI risk and building an FAI. I think they are separate projects, and that the success of the second relies on comprehensive prior knowledge of the first. Apparently MIRI's approach doesn't really acknowledge the two as separate.