Whilst I really, really like the last picture - it seems a little odd to include it in the article.
Isn't this meant to seem like a hard-nosed introduction to non-transhumanist/sci-fi people? And doesn't the picture sort of act against that - by being slightly sci-fi and weird?
Actually, both that and the Earth image at the beginning of the article seem a little out of place. At least the latter would fit well into a print article (where you can devote half a page or a page to thematic images and still have plenty of text for your eyes to seek to), but online it forces scrolling on mid-sized windows before you can read comfortably. I think it'd read more smoothly if it was smaller, along the lines of the header images in "Philosophy by Humans" or (as an extreme on the high end) "The Cognitive Science of Rationality".
The image is self-explanatory.
I didn't understand it. It didn't self-explain to me.
Non-transhumanists ought to open up their eyes to the potential of the light cone, and introducing them to that potential, whether directly or indirectly, is one of the big tasks we have.
Woah! That's quite a leap! But hold on a second! This isn't meant to be literature, is it? It doesn't seem to me that an explanation of this kind benefits from having hidden meanings and whatnot, especially ideological ones like that.
Nitpicking about ringworld vs. Stanford torus is neither relevant nor interesting.
Agreed.
Suggesting that the Earth picture itself doesn't belong in the post shows some kind of general bias against visuals, or something.
This is a Fully General Counterargument that you could use on objections to any image, no matter what the image is, and no matter what the objection is.
As for me, I'm not really Blue or Green on whether to keep the image. It's really pretty, but the relevance is dubious at best and nonexistent at worst.
On September 26, 1983, Soviet officer Stanislav Petrov saved the world.
Allegedly saved the world. It actually seems pretty unlikely that the world was saved by Petrov. For one thing, Wikipedia says:
There are varying reports whether Petrov actually reported the alert to his superiors and questions over the part his decision played in preventing nuclear war, because, according to the Permanent Mission of the Russian Federation, nuclear retaliation is based on multiple sources that confirm an actual attack.
because, according to the Permanent Mission of the Russian Federation, nuclear retaliation is based on multiple sources that confirm an actual attack.
Given that this is coming from the sort of people who thought that setting up the Dead Hand was a good idea; given that ass-covering and telling the public less than the truth was standard operating procedure in Russia; and given everything we know about the American government's incompetence, paranoia, greed, and destructive experiments & actions (like setting PAL locks to zero, to pick a nuclear example), plus the fact that nuclear authority really was delegated to individual officers (this and other scandalous aspects came up recently in the New Yorker: http://www.newyorker.com/online/blogs/newsdesk/2014/01/strangelove-for-real.html )...
I see zero reason to place any credence in their claims. This is fruit of the poisonous tree. They have reason to lie. I have no more reason to disbelieve Petrov's account than I have to disbelieve other similar incidents (like the submarine incident during the Cuban Missile Crisis).
There aren't enough nuclear weapons to destroy the world, not by a long shot. There aren't even enough nuclear weapons to constitute an existential risk in and of themselves, though they might still contribute strongly to the end of humanity.
EDIT: I reconsidered, and yes, there is a chance that a nuclear war and its aftereffects could permanently cripple the potential of humanity (maybe by extinction), which makes it an existential risk. The point I want to make, which was made more clearly by Pfft in a child comment, is that this is still something very different from what Luke's choice of words suggests.
How many people would die is of course somewhat speculative, but I think that if the war itself killed 10%, that would be a lot. More links on the subject: The Effects of a Global Thermonuclear War; Nuclear Warfare 101, 102, and 103.
Hi there, I'm the artist whose image you've used to illustrate this article. Good article, and I've learned a thing or two. Thank you for using my image and placing it as a link back to my page; all links are good, etc. I don't have a problem with my work being used, and indeed it's pleasant to come across it like this. In future, however, could you please ask me first and provide a written acknowledgement in the text?
Cheers
Richard
Oh, and for those who were discussing the ring's exactitudes... It's a 200 km-diameter torus with a width of 10 km. The atmosphere is "held in" by the strange alien structures looping about the outside of the ring. Probably some sort of induced electrostatics. I made this image with the idea of showing a culture that was simultaneously extremely technically advanced and also quite blasé about the existence of the technology. The inhabitants in the towns below may not even glance at the structures above that protect their world. There was no social comment intended; imply what you will :) It was originally intended to be animated. Maybe I'll have another attempt at that, but I think it could do with some finishing work first.
These two are among the largest donors to Singularity Institute, an organization focused on the reduction of existential risks from artificial intelligence.
Should this be the Singularity Institute?
Indeed.
It's as if people are being deliberately mischievous by writing both "the SIAI" (which should be "SIAI"), and on the other hand, "Singularity Institute" (which should be "the Singularity Institute").
Luke is probably confused by the fact that the organization is often called "Singinst" by its members. But that expression grammatically functions as a name, like "SIAI" (or, now, "SI"), and thus does not take the definite article.
The full name, however, ("the Singularity Institute") functions grammatically as a description, and thus does take the definite article. Compare: the United Nations, the Brookings Institution, the Institute for Advanced Study, the London School of Economics, the Center for Inquiry, the National Football League.
Abbreviations differ as to whether they function as names or descriptions: IAS, but the UN. SI(AI) is like the former, not the latter.
If the abbreviation is an acronym (i.e. pronounced as a word rather than a string of letter names), then it will function as a name: ACORN, not "the ACORN" (even though, in full, it's "the Association...").
That only changes the target of my criticism (now all of you, instead of just Luke), not the criticism itself, obviously.
The "the" isn't droppable, because it was never part of the name in the first place: it was never "The Singularity Institute"; but rather "the Singularity Institute". That is, the article is a part of the contextual grammar. Attempting to "drop" it would be like me declaring that "komponisto" must always be followed by plural verb forms.
(Some organizations do have "The" in the name itself, e.g. The Heritage Foundation. They could decide to drop the "The", and then their logo would say "Heritage Foundation". But one would still write "at the Heritage Foundation"; one just wouldn't write "at The Heritage Foundation".)
I don't know of any example of an "Institute" where people don't use an article in such a context -- which suggests that any such example that might exist isn't high-status enough for me to have heard of it. Even the one that I thought might be an example -- the Mathematical Sciences Research Institute -- also has a grammatical "the"!
You guys should want to be like IAS and MSRI (after all, you'd rather have the people at those places working for you instead!). I don't understand the rationale for this gratuitous eccentricity.
Did you miss this comment? Abbreviations are treated separately from the corresponding full names. One doesn't say "the ABC", but one does say "the American Broadcasting Company". Et cetera.
Likewise, "SIAI" (not "the SIAI"), but "the Singularity Institute for Artificial Intelligence".
One may be either "at CIA" (especially if you're an insider) or "at the CIA", but as far as I know one is always "at the Central Intelligence Agency".
It's worth noting that "Humanity" ≠ "Human-like (or better) intelligences that largely share our values" ≠ "Civilization." This gives us three different kinds of existential risk.
Robin Hanson, as I understand him, seems to expect that only the third will survive, and seems to be okay with that. Many Less Wrongers, on the other hand, seem not so concerned with humanity per se, but would care about the survival of human-like intelligences sharing our values. And someone could care an awful lot about humanity per se, and want t...
His screen would have flashed "ракетное нападение." What you wrote is correct but in a grammatical form which suggests it was taken from inside a larger sentence involving words like "about a rocket attack"... Russian words change depending on their use within the sentence.
This is a good intro to human extinction, but Bostrom coined "existential risk" specifically to include more subtle ways of losing the universe's potential value. If you're not going to mention those, might as well not use the term.
I'd like to point out some lukeprog fatigue here; if anyone else had written this article, it would have way more points by now.
If I had been one of the people facing that missile warning and red button, I wouldn't have pressed it even if I had known the warning was real. What use would it be to launch a barrage of nuclear weapons against ordinary citizens simply because their foolish leaders did so to you? It would only make things worse, and it certainly wouldn't save anyone. A primitive need for revenge can be extremely dangerous with today's technology.
Mutually assured destruction is essentially a precommitment strategy: if you use nuclear weapons on me I commit to destroying you and your allies, a larger downside than any gains achievable from first use of nuclear weapons.
With this in mind, it's not clear to me that it'd be wrong (in the decision-theoretic sense, not the moral) to launch on a known-good missile warning. TDT states that we shouldn't differentiate between actions in an actual and a simulated or abstracted world: if we don't make this distinction, following through with a launch on warning functions to screen off counterfactual one-sided nuclear attacks, and ought to ripple back through the causal graph to screen off all nuclear attacks (a world without a nuclear war in it is better along most dimensions than the alternative). It's not a decision I'd enjoy making, but every increment of uncertainty increases the weighting of the unilateral option, and that's something we really really don't want. Revenge needn't enter into it.
(This assumes a no-first-use strategy, which the USSR at Petrov's time claimed to follow; the US claimed a more ambiguous policy leaving open tactical nuclear options following conventi...
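To make the precommitment logic above concrete, here is a minimal expected-value sketch. It is only a toy illustration, not anyone's actual model: the payoff numbers are made up, and the single parameter (the attacker's estimate of the probability of retaliation) stands in for the whole deterrence question.

```python
# Toy expected-value sketch of the MAD precommitment argument.
# All payoffs are hypothetical illustrations, not estimates of anything real.

PEACE = 0.0                  # utility of the status quo to a would-be first striker
FIRST_STRIKE_WIN = 5.0       # utility of an unanswered first strike
MUTUAL_DESTRUCTION = -100.0  # utility if the victim follows through and retaliates

def ev_of_striking(p_retaliation: float) -> float:
    """Expected utility of striking first, given the attacker's estimate of the
    probability that the victim carries out its retaliation commitment."""
    return (1 - p_retaliation) * FIRST_STRIKE_WIN + p_retaliation * MUTUAL_DESTRUCTION

print(ev_of_striking(0.0))  # 5.0   > PEACE: with no credible commitment, striking looks attractive
print(ev_of_striking(0.9))  # -89.5 < PEACE: a credible commitment makes striking a losing move
```

On this reading, the whole point of the commitment is to push the attacker's estimated probability of retaliation close to 1, which is why following through on a warning is argued to matter decision-theoretically even though the retaliation itself saves no one.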
The "discussion" of existential risk does occur in the mainstream media, sort of, it's mainly block buster movie's like Independence Day, War of the Worlds, 2012, The Day After Tomorrow and so on. I am confident that people understand the concept, probably however not the phrase. I respectfully suggest that the author amend the original post to include revelation that discussion of existential risk does occur, perhaps mentioning that the discussion is often trivial or often for entertainment purposes.
Whilst there have been a wide abundance of ex...
A quote relevant to the final section of this post...
The Earth is the cradle of the mind, but one cannot live forever in a cradle.
Konstantin Tsiolkovsky
There are some points that I dislike about this introduction. The first one is the implicit speciesism resulting from the focus on extinction of Homo sapiens as a species. It would have made sense to use Bostrom's definition of existential risk, which focuses on Earth-originating intelligent life instead. Replacement of humans by posthumans is not existential risk. Transhumanism usually advocates the well-being of all sentience, not just humans. This can refer to both non-human animals (e.g. in natural ecosystems) and posthumans spreading into space.
Maybe...
If you are writing for a general audience, I think you lose most of that audience here:
But it's not just nuclear risks we have to worry about. As Sun Microsystems’ co-founder Bill Joy warned in his much-discussed article Why the Future Doesn’t Need Us, emerging technologies like synthetic biology, nanotechnology, and artificial intelligence may quickly become even more powerful than nuclear bombs, and even greater threats to the human species. Perhaps the International Union for Conservation of Nature will need to reclassify Homo sapiens as an endangered species.
This is a "basics" article, intended for introducing people to the concept of existential risk.
On September 26, 1983, Soviet officer Stanislav Petrov saved the world.
Three weeks earlier, Soviet interceptors had shot down a commercial jet, thinking it was on a spy mission. All 269 people aboard were killed, including sitting U.S. congressman Lawrence McDonald. President Reagan condemned the attack; he had already branded the Soviet Union an "evil empire" earlier that year. It was one of the most intense periods of the Cold War.
Just after midnight on September 26, Petrov sat in a secret bunker, monitoring early warning systems. He did this only twice a month, and it wasn’t his usual shift; he was filling in for the shift crew leader.
One after another, five missiles from the USA appeared on the screen. A siren wailed, and the words "ракетном нападении" ("Missile Attack") appeared in red letters. Petrov checked with his crew, who reported that all systems were operating properly. The missiles would reach their targets in Russia in mere minutes.
Protocol dictated that he press the flashing red button before him to inform his superiors of the attack so they could decide whether to launch a nuclear counterattack. More than 100 crew members stood in silence behind him, awaiting his decision.
"I thought for about a minute," Petrov recalled. "I thought I’d go crazy... It was as if I was sitting on a bed of hot coals."
Petrov broke protocol and went with his gut. He refused to believe what the early warning system was telling him.
His gut was right. Soviet satellites had misinterpreted sunlight reflecting off high-altitude clouds as missile launches. The Soviet Union was not under attack.
If Petrov had pressed the red button, and his superiors had launched a counterattack, the USA would have detected the incoming Soviet missiles and launched their own missiles before they could be destroyed on the ground. Soviet and American missiles would have passed in the night sky over the still, silent Arctic before detonating over hundreds of targets, each detonation more destructive than all the bombs dropped in World War II combined, including the atomic bombs that vaporized Hiroshima and Nagasaki. Most of the Northern Hemisphere would have been destroyed.
Petrov was reprimanded and offered early retirement. To pay his bills, he took jobs as a taxi driver and a security guard. The biggest award he ever received for saving the world was a "World Citizen Award" and $1000 from a small organization based in San Francisco. He spent half the award on a new vacuum cleaner.
During his talk at Singularity Summit 2011 in New York City, hacker Jaan Tallinn drew an important lesson from the story of Stanislav Petrov:
Tallinn knows a thing or two about powerful technologies making a global impact. Kazaa, the file-sharing program he co-developed, was once responsible for half of all Internet traffic. He went on to develop the Internet calling program Skype, which in 2010 accounted for 13% of all international calls.
Where could he go from there? After reading dozens of articles about the cognitive science of rationality, Tallinn realized:
Tallinn found the biggest pool of underappreciated concerns in the domain of "existential risks": things that might go horribly wrong and wipe out our entire species, like nuclear war.
The documentary Countdown to Zero shows how serious the nuclear threat is. At least 8 nations have their own nuclear weapons, and the USA has given nuclear weapons to 5 others. There are enough nuclear weapons around to destroy the world several times over, and the risk of a mistake remains even after the Cold War. In 1995, Russian president Boris Yeltsin had the "nuclear suitcase", capable of launching a barrage of nuclear missiles, open in front of him. Russian radar had mistaken a weather rocket for a US submarine-launched ballistic missile. Like Petrov before him, Yeltsin disbelieved his equipment and refused to press the red button. Next time we might not be so lucky.
But it's not just nuclear risks we have to worry about. As Sun Microsystems’ co-founder Bill Joy warned in his much-discussed article Why the Future Doesn’t Need Us, emerging technologies like synthetic biology, nanotechnology, and artificial intelligence may quickly become even more powerful than nuclear bombs, and even greater threats to the human species. Perhaps the International Union for Conservation of Nature will need to reclassify Homo sapiens as an endangered species.
Academics are beginning to accept that humanity lives on a knife's edge. The cosmologist Martin Rees and the philosopher John Leslie have written books about existential risk, titled Our Final Hour: A Scientist's Warning and The End of the World: The Science and Ethics of Human Extinction. In 2008, Oxford University Press published Global Catastrophic Risks, a volume in which experts summarize what we know about a variety of existential risks. New research institutes have been formed to investigate the subject, including the Singularity Institute in San Francisco and the Future of Humanity Institute at Oxford University.
Governments, too, are taking notice. In the USA, NASA was given a congressional mandate to catalogue all near-Earth objects one kilometer or more in diameter, because an impact with such a large object would be catastrophic. President Clinton launched the National Nanotechnology Initiative to ensure the safe development of molecule-sized materials and machines. (Precisely self-replicating molecular machines could multiply themselves out of control, consuming resources required for human survival.) Many nations are working to reduce nuclear armaments, which pose the risk of human extinction by global nuclear war.
The public, however, remains mostly unaware of the risks. Existential risk is an unpleasant and scary topic, and may sound too distant or complicated to discuss in the mainstream media. For now, discussion of existential risk remains largely constrained to academia and a few government agencies.
Concern about existential risk may appeal to one other group: analytically minded "social entrepreneurs" who want to have a positive impact on the world and are accustomed to making decisions based on calculation. Tallinn fits this description, as does PayPal co-founder Peter Thiel. These two are among the largest donors to the Singularity Institute, an organization focused on the reduction of existential risks from artificial intelligence.
What is it about the topic of existential risk that appeals to people who act by calculation? The analytic case for doing good by reducing existential risk was laid out decades ago by moral philosopher Derek Parfit:
Our technology gives us great power. If we can avoid using this power to destroy ourselves, then we can use it to spread throughout the galaxy and create structures and experiences of value on an unprecedented scale.
Reducing existential risk — that is, carefully and thoughtfully preparing to not kill ourselves — may be the greatest moral imperative we have.