ChristianKl comments on Open thread, Jan. 18 - Jan. 24, 2016 - Less Wrong Discussion
Rant mode on:
Whenever Hawking says something, the mass media spread it immediately. He is probably right about black holes, but when it comes to global risks his statements are not only false but, one could say, harmful.
Today he said that within the coming millennia we will face the threat of artificial viruses and nuclear war. This statement pushes all of these problems out to roughly the same distance as the nearest black hole.
In fact, both a nuclear war and artificial viruses are realistic right now and could be used within our lifetimes, with probability on the order of tens of percent.
Feel the difference between the chance that an artificial flu virus exterminates 90% of the population within 5 years (with the rest finished off by other viruses) and speculation about dangers spread over thousands of years.
The first framing is mobilizing; the second invites comfortable relaxation.
He said: ‘The chance of a catastrophe on Earth this year is rather low. However, it grows with time, so a catastrophe will almost certainly happen within the next one thousand or ten thousand years.’
The scientist believes the catastrophe will result from human activity: people could be destroyed by a nuclear disaster or the spread of an artificial virus. However, according to the physicist, mankind can still save itself; to this end, the colonization of other planets is needed. Earlier, Stephen Hawking reportedly stated that artificial intelligence could surpass human intelligence within as little as 100 years.
Also, the claim that migration to other planets automatically means salvation is false. What catastrophe could we escape if we had a colony on Mars? It would die off without supplies. If a world war started, nuclear missiles would reach it as well. In the case of a slow global pandemic, people would carry it there, just as they now carry the AIDS virus or once carried plague on ships. If hostile AI appeared, it would instantly reach Mars via communication channels. Even grey goo could fly from one planet to another. And even if the Earth were hit by a 20-km asteroid, the amount of debris thrown into space would be so great that it would reach Mars and fall there as a meteor shower.
I understand that simple solutions are alluring, and a Mars colony is a romantic idea, but its usefulness would be negative. Even if we learned to build starships travelling at speeds close to that of light, they would primarily become a perfect kinetic weapon: the collision of such a starship with a planet would mean the death of that planet’s biosphere.
Finally, a few words about AI. Why exactly 100 years? When talking about risks, we should consider a lower time bound rather than a median, and the lower bound of estimated time to create dangerous AI is 5 to 15 years, not 100. http://www.sciencealert.com/stephen-hawking-says-a-planetary-disaster-on-earth-is-a-near-certainty
Rant mode off
Do you really think that it's Hawking's position that at the moment there's no threat of nuclear war?
I don't think he thinks so; I am commenting on the impression he conveys to the public, namely that the risks are remote and space colonies will save us. He may privately hold other views on the topic, but that doesn't matter.
I don't think it makes sense to assign full responsibility for a message to a person who is distinct from the author of the article.
That said, I don't think that saying:
‘Although the chance of a disaster on planet Earth in a given year may be quite low, it adds up over time, becoming a near certainty in the next thousand or ten thousand years.’
makes any reader update toward believing that the chances of nuclear war or genetically engineered viruses are lower than they previously expected. Talking with mainstream media inherently requires simplifying your message. Focusing the message on the compounding of risk over time doesn't seem wrong to me.
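The compounding claim itself is easy to sanity-check. A minimal sketch, assuming an independent, constant per-year probability (the 0.1% figure below is an illustrative assumption, not a number from Hawking or the article):

```python
def cumulative_risk(p_per_year: float, years: int) -> float:
    """Probability of at least one catastrophe over `years` years,
    assuming independent, identical per-year probability p_per_year."""
    return 1 - (1 - p_per_year) ** years

# Illustrative assumption: 0.1% chance per year.
p = 0.001
for n in (1, 100, 1000, 10000):
    print(f"{n:>6} years: {cumulative_risk(p, n):.4f}")
```

Even a small annual probability compounds to roughly 63% over 1,000 years and near-certainty over 10,000, which is exactly the structure of the quoted statement.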
If he wrote an article about his understanding of x-risk timeframes and prevention measures with all the fidelity he uses to describe black holes, we could engage with that.
But for now it may be wise to say that the media have misinterpreted his words and that he (probably) meant exactly the opposite: that we must invest in x-risk prevention now. The media publications are the only thing we can argue with. I also think he should take more responsibility when talking to the media, because he is treated as a guru and everything he says may be accepted uncritically.
Even the article says we have to be extra careful with x-risk prevention in the next 100 years because we don't have a self-sustaining Mars base. I think you are misreading the article when you say it argues against investing in x-risk prevention now.