Three known attempts to map the organizations working on x-risk prevention exist:
1. The first is a list from the Global Catastrophic Risks Institute, compiled in 2012-2013; many of its links no longer work.
2. The second was done by S. Armstrong in 2014.
3. The most elegant and useful map was created by Andrew Critch, but its ecosystem ignores organizations that have a different view of the nature of global risks (that is, they share the value of x-risk prevention but hold another worldview).
In my map I have tried to include all currently active organizations that share the value of global risk prevention.
It also treats some active independent people as organizations, if they run an important blog or field of research, but not all such people are mentioned in the map. If you think that you (or someone else) should be in it, please write to me at alexei.turchin@gmail.com
I used only open sources and public statements to learn about people and organizations, so I can't provide information on the underlying net of relations.
I have tried to give each organization a short description based on its public statements, along with my own opinion of its activity.
In general it seems that the small organizations all focus on collaboration with the larger ones, namely MIRI and FHI, while tending to ignore each other; this is easily explained by social signaling theory. Another explanation is that the larger organizations have a greater ability to make contacts.
It also appears that there are several organizations with similar goal statements.
It looks like cooperation is greatest in the field of AI safety, but most of the structure of this cooperation is not visible to an external viewer, in contrast to Wikipedia, where the contributions of all individuals are visible.
It seems that the community in general lacks three things: a unified internet forum for public discussion, an x-risk wiki, and an x-risk scientific journal.
Ideally, a forum would be used to brainstorm ideas; a scientific journal to publish the best ideas, peer-review them, and present them to the outside scientific community; and a wiki to collect results.
Currently it seems more like each organization is interested in producing its own research and hoping that someone will read it. Each small organization seems to want to be the only one presenting solutions to global problems and to gain the full attention of the UN and governments. This raises the problem of noise and rivalry, and also the problem of possibly incompatible solutions, especially in AI safety.
The pdf is here: http://immortality-roadmap.com/riskorg5.pdf

Google has an AI safety protocol.
Citation?
I meant these two news items: http://uk.businessinsider.com/google-deepmind-develops-a-big-red-button-to-stop-dangerous-ais-causing-harm-2016-6
http://uk.businessinsider.com/google-ai-ethics-board-remains-a-mystery-2016-3
Red button and ethics board
I would also contemplate the scenario in which the human species turns out to be less impressive than it currently appears, and is actually a fairly typical example of a successful Earth species. Most achievements that distinguish humans from e.g. plankton lie in the future (e.g. space industry), not in the past or present.
This might sound strange. Arguments in favor of this perspective:
• Homo sapiens is not the greatest species in terms of population or total biomass.
• Homo sapiens is not the only species to make tools, use agriculture, build buildings, or adapt to a variety of terrestrial habitats.
• Homo sapiens is not the first species to have a catastrophic impact on the atmosphere.
Arguments against this perspective:
• The human economy is currently doubling in scale every couple of decades.
• No species (probably) ever reached the edge of the atmosphere before Homo sapiens.
(To clarify, I think this question is far from settled. But I think the idea that Homo sapiens will have a smaller impact than expected is more likely than the scenario that historical gods are representations of unknown prosperous civilizations.)
If we regard humans as a typical species, we can take the typical species life expectancy, which is several million years, and use it as the human life expectancy. That is not bad.
But humans are clearly at a special point in their history: they could soon create a competitor (posthumans or AI), and that does not look good. Competition is one of the main ways species go extinct.
While it is known that AI could be catastrophic, the only organization (MIRI) doing the most serious research on its prevention is underfunded. Providing funding to them could dramatically change the probability of human survival, and we could estimate that 1 USD donated to them will save 10 human lives.
Is any of this true? "Most serious"? "Dramatically change probability of human survival"? 10 lives per $1?
I was just providing an example of a possible pitch, and I think that some people at MIRI think this way. I wanted to show that a pitch must contain new information and be actionable.
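For what it's worth, the "one dollar saves ten lives" figure in such a pitch falls out of a simple expected-value calculation. A minimal sketch, where the population, the risk reduction, and the funding gap are all illustrative assumptions chosen to reproduce the quoted number, not claims of fact:

```python
# Back-of-envelope "lives saved per dollar" for an x-risk donation pitch.
# All numbers below are illustrative assumptions, not established facts.

population = 7.5e9       # people alive who would die in an extinction event
risk_reduction = 0.20    # assumed drop in extinction probability from full funding
funding_gap = 150e6      # assumed dollars needed to achieve that drop

expected_lives_saved = population * risk_reduction  # 1.5 billion lives in expectation
lives_per_dollar = expected_lives_saved / funding_gap

print(lives_per_dollar)  # -> 10.0 under these assumptions
```

The point is that the headline number is extremely sensitive to the assumed risk reduction and funding gap, which is exactly what the skeptical reply above is questioning.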
My thoughts:
Google has (is) the biggest computer program: about 3 billion lines of code.
Google has the world's biggest database, including YouTube, 23andMe, Gmail, Google Books, and all internet content.
Google is the world's biggest computer, comprising something like 1 percent of total world computing power.
Google recently gave the most impressive AI demonstration: the win at Go.
Google is clearly interested in creating AI.
Google has an AI safety protocol.
Google has the money to buy the needed parts, including people.
So it looks like Google is in a winning position. Who might be its main competitors? Military AIs at the NSA, and other large companies.
Is that the canonical explanation? I thought Skynet was acting out of self-preservation.
It is not exactly the canonical explanation, but (the following is my speculation, which could be used in discussions about AI values whenever Terminator is mentioned) the decision to preserve itself must follow from its main task: winning a nuclear war.
Winning a nuclear war includes, as a very high-priority subgoal, ensuring the survival of the command center. Basically, a country that manages to preserve its command center is winning the nuclear war. So it seemed rational to Skynet's programmers to make preserving Skynet the main goal, as this is the same as winning the nuclear war (but only in a situation where a nuclear war has already started).
But Skynet concluded that in peacetime the main risk to its goal of command-center survival is people, and so it decided to kill them all. Thus it worked as a paperclip maximizer for the goal of command-center preservation.
It also probably started self-improvement only after it had killed most people, as it was already a powerful system. So it escaped the main chicken-and-egg problem of seed AI: which comes first, self-improvement or the malicious decision to kill people?
I think that most people have already heard that AI could be a catastrophic risk, and they already have an opinion about it.
In our circle that might be true, but many people don't have an opinion that goes beyond Terminator.
Yes. So we have to use this knowledge. We could say something like: the Terminator appeared because its progenitor, the Skynet computer, received a command to protect the US and concluded that the best way to do so was to prevent humans from switching it off, so it decided to exterminate humans. Thus the Terminator appeared because of the unsolved problem of value alignment.
I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."
The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.
Worse, the argument can then be made that this idea that an AI will interpret goals so literally, without modelling a human mind, constitutes an "autistic AI", and that only autistic people would assume that AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.
Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:
"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."
Incidentally, this is the sort of thing I mean by painting LW-style ideas as autistic (via David Pearce):
As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.
Sometimes David Pearce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies who lack subjective experience, but that also does seem implied.
I think that most people have already heard that AI could be a catastrophic risk, and they already have an opinion about it. Maybe their opinions are wrong.
What is the goal of such elevator pitch?
I think that the message should be the following: While it is known that AI could be catastrophic, the only organization (MIRI) doing the most serious research on its prevention is underfunded. Providing funding to them could dramatically change the probability of human survival, and we could estimate that 1 USD donated to them will save 10 human lives.
My favorite crazy unlikely idea about that is that the Paleocene-Eocene Thermal Maximum about 50 million years ago, a 200k-year pulse of high CO2 levels and temperatures in which the CO2 was added over a timescale of less than 10k years (potentially much less) and had an isotopic composition consistent with having been liberated from biogenic deposits, could theoretically be explained by all the coal and oil deposits of Antarctica being burned, followed by some positive feedbacks kicking in.
(Most of the land of Antarctica has never been investigated geologically in any detail, being under kilometers of ice. Antarctica at that time was completely unglaciated and relatively temperate, despite already being where it is now. And subsequent glaciation scraped most of the surface clean of anything that was on it at the time.)
We have an advantage in that we evolved in the tropics - you can take a tropical animal and keep it warm near the poles by wrapping it in clothes. It's much more difficult to take a cold-adapted polar animal and keep it alive in the tropics...
Trent's article even mentions a possible species of dinosaur that might have been capable of an intelligence explosion: http://www.strangehorizons.com/2009/20090713/trent-a.shtml
Does that mean we could find really interesting (and dangerous) things during excavations in Antarctica?
If we knew that AI would be created by Google, and that it would happen in the next five years, what should we do?