PDF: http://immortality-roadmap.com/levelglobcat.pdf

Where's the arrow from "Creation of Strong benevolent AI" to "Agent becomes malicious"? No matter how smart, your AI can make mistakes.

Also, I'm sure someone will point out that your chart makes a large number of assumptions and assumes that ours is the only answer.

Would you find this chart convincing if it were written by someone who believed that we should switch to a low-tech agrarian society like the Amish, and, instead of everything in yellow, it had a chain down the side reading "reject technology", "turn to God", etc.?

Or, instead of most of the boxes after the current day, a chain which reads something like "culture stagnates" -> "technological progress slows" -> "technological progress regresses, but not all the way"?

Also, your "roadmap" sets some mental alarm bells ringing, since such massive nests of arrows rarely accompany good positions.

You've really got the LHC in there under "unintended consequences" leading to "agent becomes malicious". Really?

"Where's the arrow from "Creation of Strong benevolent AI" to "Agent becomes malicious". No matter how smart, your AI can make mistakes." Yes it is true, and I have another map about risks of AI where this possibility is shown. I can't add too many arrows as the map is already too complex ))

"Also, I'm sure someone will point out that your chart makes a large number of assumptions and assumes that ours is the only answer.

Would you find this chart convincing if it was written by someone who believed that we should switch to a low-tech agrarian society like the amish and instead of everything in yellow had a chain down the side reading "reject technology", "turn to god" etc"

The part of this map dedicated to x-risk prevention was deliberately made small, as I have another map about ways of preventing x-risks, where all known ideas on the subject are presented in logical order. But it seems to me that your argument is more general: basically, you are asking "Are you sure of what you are sure of?" I think that any map (or even article) rests on a lot of assumptions, and it is impossible to list all of them in advance. In any case, it is not clear how to present the assumptions on the map and keep it readable.

"You've really got the LHC in there under "unintended consequences" leading to "agent becomes malicious". Really?"

Yes. An unintended consequence of the Large Hadron Collider could be the creation of a small black hole. A small black hole could become "malicious" if it started to eat surrounding matter and grow exponentially. While the term "malicious" may sound strange applied to a black hole, in my opinion it captures the difference between a "simple" black hole and one that is able to eat matter. I think explaining each arrow would take a whole page of text (which I partly did in the book "Risks of human extinction", but its translation into English is still in draft).