
Satoshi_Nakamoto comments on Roadmap: Plan of Action to Prevent Human Extinction Risks - Less Wrong

Post author: turchin 01 June 2015 09:58AM




Comment author: Satoshi_Nakamoto 14 June 2015 07:35:33AM 0 points

Don’t worry about the money. Just like the comments if they are useful. Regarding Technological precognition: does this cover time travel in both directions, that is, both looking into the future and taking actions to change it, and sending messages into the past? Also, what about making people more compliant and less aggressive, either by dulling or eliminating human emotions or by making people more like a hive mind?

Comment author: turchin 14 June 2015 09:15:30PM 0 points

I uploaded a new version of the map with the changes marked in blue: http://immortality-roadmap.com/globriskeng.pdf

Technological precognition does not cover time travel, because it is too fantastical. We may include the scientific study of claims about precognitive dreams, as such study will soon become possible with live brain scans of sleeping people and dream recording. Time travel could also have its own x-risks, such as the well-known grandfather paradox.

Lowering human intelligence is listed among the bad plans.

I have been thinking about a hive mind... It may be a way to create safe AI, one based on humans and using their brains as free and cheap supercomputers via some kind of neuro-interface. In fact, contemporary science as a whole is already an example of such a distributed AI.

If a hive mind is enforced, it would be like the worst totalitarian state... If it does not include all humans, the rest will fight against it, and may use very powerful weapons to preserve their identity. Something like this is already happening in the fight between globalists and anti-globalists.