Where's the arrow from "Creation of Strong benevolent AI" to "Agent becomes malicious"? No matter how smart, your AI can make mistakes.
Also, I'm sure someone will point out that your chart makes a large number of assumptions and assumes that ours is the only answer.
Would you find this chart convincing if it were written by someone who believed we should switch to a low-tech agrarian society like the Amish, and instead of everything in yellow it had a chain down the side reading "reject technology", "turn to god", etc.?
Or if, instead of most of the boxes after the current day, it had a chain reading something like "culture stagnates" -> "technological progress slows" -> "technological progress regresses, but not all the way"?
Also, your "roadmap" sets off some mental alarm bells, since such massive nests of arrows rarely accompany good positions.
You've really got the LHC in there under "unintended consequences" leading to "agent becomes malicious". Really?
"Where's the arrow from "Creation of Strong benevolent AI" to "Agent becomes malicious". No matter how smart, your AI can make mistakes." Yes it is true, and I have another map about risks of AI where this possibility is shown. I can't add too many arrows as the map is already too complex ))
"Also, I'm sure someone will point out that your chart makes a large number of assumptions and assumes that ours is the only answer.
Would you find this chart convincing if it was written by someone who believed that we should switch to...
Pdf: http://immortality-roadmap.com/levelglobcat.pdf