
turchin comments on Roadmap: Plan of Action to Prevent Human Extinction Risks

Post author: turchin 01 June 2015 09:58AM 13 points




Comment author: turchin 13 June 2015 10:03:08AM 0 points

Sent 150 USD to the Against Malaria Foundation.

The idea of dumbing people down is also present in the Bad Plans section, under "limitation of human or collective intelligence"... But the main idea of preventing human extinction is, by definition, to ensure that at least several members of Homo sapiens are alive at any given point in time. It is not the best possible definition. It should also include posthumans, if they are based on humans and share many of their properties (and, as Bostrom said, could realise the full human potential). In fact, we can't say what is really good before we solve the Friendly AI problem. And if we knew what is good, we could also say what the worst outcome is, and thus what constitutes an existential catastrophe. But the real catastrophes that could happen in the 21st century are far removed from such sophisticated problems as determining ultimate good, human nature, and full human potential. They are clearly visible physical processes of destruction.

There are some ideas for solving the problem of control from the bottom up, like David Brin's idea of a transparent society, where vigilantes would scan the web and video sensors in search of terrorists. So control would be not hierarchical but net-based, or peer-to-peer.

I like the two extra boxes, but I have already spent my prize budget twice over, which unexpectedly puts me in a conflicted situation: as the author of the map I want to make it the best and most inclusive map possible, but as the owner of the prize fund (which I pay out of personal money earned by selling art) I feel more stingy :)

Comment author: Satoshi_Nakamoto 14 June 2015 07:35:33AM 0 points

Don’t worry about the money. Just upvote the comments if they are useful. Regarding Technological Precognition: does this cover time travel in both directions, i.e. looking into the future and taking actions to change it, and also sending messages into the past? Also, what about making people more compliant and less aggressive, either by dulling or eliminating human emotions, or by making people more like a hive mind?

Comment author: turchin 14 June 2015 09:15:30PM 0 points

I have uploaded a new version of the map with the changes marked in blue: http://immortality-roadmap.com/globriskeng.pdf

Technological Precognition does not cover time travel, because it is too fantastic. We may include the scientific study of claims about precognitive dreams, as such study will soon become possible with live brain scans of sleeping people and dream recording. Time travel could have its own x-risks, like the well-known grandfather paradox.

Lowering human intelligence is covered in the Bad Plans section.

I have been thinking about the hive mind... It may be a way to create safe AI, one based on humans and using their brains as free and cheap supercomputers via some kind of neuro-interface. In fact, contemporary science as a whole is already an example of such distributed AI.

If a hive mind is enforced, it is like the worst totalitarian state... If it does not include all humans, the rest will fight against it, and may use very powerful weapons to save their identity. This is already happening in the fight between globalists and anti-globalists.