XiXiDu comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM




Comment author: XiXiDu 30 October 2010 11:58:04AM 0 points [-]

Algorithmic trading is indeed an example of the kind of risks posed by complicated (unmanageable) systems, but it also shows that we evolve our security measures with each small-scale catastrophe. There is no example yet of an existential risk arising from truly runaway technological development, although many people believe such risks exist, e.g. nuclear weapons. Unstoppable recursive self-improvement is just a hypothesis that you shouldn't take as a foundation for a whole lot of further inductions.

Dispelling Stupid Myths About Nuclear War

An all-out nuclear war between Russia and the United States would be the worst catastrophe in history, a tragedy so huge it is difficult to comprehend. Even so, it would be far from the end of human life on earth. The dangers from nuclear weapons have been distorted and exaggerated, for varied reasons. These exaggerations have become demoralizing myths, believed by millions of Americans.