Film about Stanislav Petrov
I searched around but didn't see any mention of this. There's a film being released next week about Stanislav Petrov, the man who saved the world.
The Man Who Saved the World
http://www.imdb.com/title/tt2277106/
Due for limited theatrical release in the USA on 18 September 2015.
http://themanwhosavedtheworldmovie.com/#seethemovie
Will show in New York, Los Angeles, Detroit, Portland.
Previous discussion of Stanislav Petrov:
http://lesswrong.com/lw/jq/926_is_petrov_day/
Future of Life Institute existential risk news site
I'm excited to announce that the Future of Life Institute has just launched an existential risk news site!
The site will have regular articles on topics related to existential risk, written by journalists, and a community blog written by existential risk researchers from around the world as well as FLI volunteers. Enjoy!
Slides online from "The Future of AI: Opportunities and Challenges"
In the first weekend of this year, the Future of Life Institute hosted a landmark conference in Puerto Rico: "The Future of AI: Opportunities and Challenges". The conference was unusual in that it was not made public until it was over, and the discussions were held under the Chatham House Rule. The slides from the conference are now available. The list of attendees includes a great many famous names as well as lots of names familiar to those of us on Less Wrong: Elon Musk, Sam Harris, Margaret Boden, Thomas Dietterich, all three DeepMind founders, and many more.
This is shaping up to be another extraordinary year for AI risk concerns going mainstream!
[LINK] Stephen Hawking warns of the dangers of AI
[Hawking] told the BBC: "The development of full artificial intelligence could spell the end of the human race."
...
"It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
There is, however, no mention of Friendly AI or similar principles.
In my opinion, this is particularly notable for the coverage this story is getting within the mainstream media. At the current time, this is the most-read and most-shared news story on the BBC website.
Anthropic signature: strange anti-correlations
Imagine that the only way that civilization could be destroyed was by a large pandemic that occurred at the same time as a large recession, so that governments and other organisations were too weakened to address the pandemic properly.
Then if we looked at the past, as observers in a non-destroyed civilization, what would we expect to see? We could see years with no pandemics or no recessions; we could see mild pandemics, mild recessions, or combinations of the two; we could see large pandemics with no or mild recessions; or we could see large recessions with no or mild pandemics. We wouldn't see large pandemics combined with large recessions, as that would have caused us to never come into existence. These are the only things ruled out by anthropic effects.
Assume that pandemics and recessions are independent (at least, in any given year) in terms of "objective" (non-anthropic) probabilities. Then what would we see? We would see that pandemics and recessions appear to be independent when either of them are of small intensity. But as the intensity rose, they would start to become anti-correlated, with a large version of one completely precluding a large version of the other.
The effect is even clearer if we have a probabilistic relation between pandemics, recessions and extinction (something like: extinction risk proportional to product of recession size times pandemic size). Then we would see an anti-correlation rising smoothly with intensity.
Thus one way of looking for anthropic effects in humanity's past is to look for different classes of incidents that are uncorrelated at small magnitudes and anti-correlated at large magnitudes - or, more generally, for different classes of incidents whose correlation changes with magnitude without any obvious reason. That might be the signature of an anthropic disaster we missed - or rather, that missed us.
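The probabilistic version of this argument is easy to check by simulation. The sketch below is a minimal illustration in plain Python; the 1 - p·r survival rule and the 0.3/0.7 intensity cut-offs are illustrative assumptions, not claims about real pandemics or recessions. It draws independent pandemic and recession intensities, discards the "extinct" years, and measures the correlation separately among mild and severe events:

```python
import random

random.seed(0)

def sample_surviving_years(n=200_000):
    """Draw independent pandemic/recession intensities in [0, 1], keeping
    only years in which civilization survives; the chance of extinction is
    taken to be the product of the two intensities."""
    survivors = []
    for _ in range(n):
        p = random.random()  # pandemic intensity
        r = random.random()  # recession intensity
        if random.random() > p * r:  # survive with probability 1 - p*r
            survivors.append((p, r))
    return survivors

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

years = sample_surviving_years()
# Mild events: survival conditioning barely bites, so the two intensities
# still look independent.
corr_mild = correlation([(p, r) for p, r in years if p < 0.3 and r < 0.3])
# Severe events: the suppressed "large pandemic + large recession" corner
# shows up as an anti-correlation.
corr_severe = correlation([(p, r) for p, r in years if p > 0.7 and r > 0.7])
```

Conditioned on survival, the mild-event window shows a correlation near zero while the severe-event window shows a clear anti-correlation - exactly the signature described above, despite the two intensities being objectively independent.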
Do Earths with slower economic growth have a better chance at FAI?
I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the "Small is Beautiful" / "Sustainable Growth" crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.
And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress: Economic growth = good.
But suppose my main-line projection is correct and the "probability of an OK outcome" / "astronomical benefit" scenario essentially comes down to a race between Friendly AI and unFriendly AI. So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem. Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done. I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.
Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.
I have various cute ideas for things which could improve a country's economic growth. The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people. I was thinking about collecting them into a post called "The Nice Things We Can't Have" based on my prediction that various forces will block, e.g., the all-robotic all-electric car grid which could be relatively trivial to build using present-day technology - that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore. However I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny. And it's not completely impossible that we'll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it'll be because China or Dubai or New Zealand tried it first). Other EAs (effective altruists) care much more strongly about economic growth directly and are trying to increase it directly. (An extremely understandable position which would typically be taken by good and virtuous people).
Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide "But what if MIRI accomplishes the opposite of its purpose due to blah"), but in this case I feel impelled to ask, because my mainline visualization has the Great Stagnation being good news. I certainly wish that economic growth would align with FAI, because then my virtues would align and my optimal policies would have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality.
To head off some obvious types of bad reasoning in advance: Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally which is mostly pushing UFAI, and the problem arguendo is that UFAI parallelizes more easily.
Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier. But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI. Similarly to the more mundane idea that increased economic growth will produce more geniuses some of whom can work on FAI; there'd also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial depth of research. If you concentrate on some single good effect on blah and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on the balance.
So I pose the question: "Is slower economic growth good news?" or "Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI"? So far as I can tell, my current mainline guesses imply, "Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research".
This seems like a good parameter to have a grasp on for any number of reasons, and I can't recall it previously being debated in the x-risk / EA community.
EDIT: To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.
EDIT 2: Carl Shulman's opinion can be found on the Facebook discussion here.
Tegmark's talk at Oxford
Max Tegmark, from the Massachusetts Institute of Technology and the Foundational Questions Institute (FQXi), presents a cosmic perspective on the future of life, covering our increasing scientific knowledge, the cosmic background radiation, the ultimate fate of the universe, and what we need to do to ensure the human race's survival and flourishing in the short and long term. He strongly emphasises the importance of xrisk reduction.
[LINK] Scatter, Adapt, and Remember: How Humans Will Survive a Mass Extinction
A new popular science book on existential risks and mass extinctions from Annalee Newitz, the founding editor of io9.com
It probably won't display the same rigour as Global Catastrophic Risks (Bostrom, Cirkovic et al.), but that was published five years ago and is a bit academic. A new book written in a popular, journalistic way seems pretty appealing - it might even be a good introduction for family/friends. Anyway I'm looking forward to reading it, and I expect enough other LWers will be interested in this news to warrant the post.
If anyone has any other existential risk book recommendations, please comment.
Does Existential Risk Justify Murder? -or- I Don't Want To Be A Supervillain
A few days ago I was rereading one of my favourite graphic novels. In it the supervillain commits mass murder to prevent nuclear war - he kills millions to save billions. This got me thinking about how a lot of LessWrong/Effective Altruism people approach existential risks (xrisks). An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development (Bostrom 2002). I'm going to point out an implication of this approach, show how this conflicts with a number of intuitions, and then try to clarify the conflict.
I. Implication:
If murder would reduce xrisk, one should commit the murder. The argument is that, compared to the billions or even trillions of future people, and/or the amount of valuable things they could instantiate (by experiencing happiness or pleasure, performing acts of kindness, creating great artworks, etc.), the importance of one present person, and/or the badness of committing (mass) murder, is quite small. The large number on the 'future' side outweighs or cancels the far smaller number on the 'present' side.
I can think of a number of scenarios in which the murder of one or more people could quite clearly reduce existential risk, such as killing the people who know the location of some secret refuge.
Indeed at the extreme it would seem that reducing xrisk would justify some truly terrible things, like a preemptive nuclear strike on a rogue country.
This implication does not just hold for simplistic act-utilitarians, or consequentialists more broadly - it affects any moral theory that accords moral weight to future people and doesn't forbid murder.
This implication is implicitly endorsed in a common choice many of us make between focusing our resources on xrisk reduction as opposed to extreme poverty reduction. This is sometimes phrased as being about choosing to save one life now or far more future lives. While bearing in mind some complications (such as the debate over doing vs allowing and the Doctrine of Double Effect), it seems that 'letting several people die from extreme poverty to try to reduce xrisk' is in an important way similar to 'killing several people to try to reduce xrisk'.
II. Simple Objection:
A natural reaction to this implication is that this is wrong, one shouldn't commit murder to reduce xrisk. To evade some simple objections let us assume that we can be highly sure that the (mass) murder will indeed reduce xrisk: maybe no-one will find out about the murder, or it won't open a position for someone even worse.
Let us try and explain this reaction, and offer an objection: The idea that we should commit (mass) murder conflicts with some deeply held intuitions, such as the intuition that one shouldn't kill, and the intuition that one shouldn't punish a wrong-doer before she/he commits a crime.
One response - the most prominent advocate of which is probably Peter Singer - is to cast doubt onto our intuitions. We may have these intuitions, but they may have been induced by various means, e.g. by evolution or society. Racist views were common in past societies. Moreover there is some evidence that humans may have an evolutionary predisposition to be racist. Nevertheless we reject racism, and therefore (so the argument goes) we should reject a number of other intuitions as well. So perhaps we should reject the intuitions we have, shrug off the squeamishness, and agree that (mass) murder to reduce xrisk is justified.
[NB: I'm unsure about how convincing this response is. Two articles in Philosophy and Public Affairs dispute Singer's argument (Berker 2009) (Kamm 2009). One must also take into account the problem of applying our everyday intuitions to very unusual situations - see 'How Outlandish Can Imaginary Cases Be?' (Elster 2011)]
The trope of the supervillain justifying his or her crimes by claiming it had to be done for 'the greater good' (or similar) is well established. TV Tropes calls it 'Utopia Justifies the Means'. I find myself slightly troubled when my moral beliefs lead me to agree with fictional supervillains. Nevertheless, is the best option to bite the bullet and side with the supervillains?
III. Complex Objection:
Let us return to the fictional example with which we started. Part of the reason the supervillain's act seems wrong is that, in real life, mass murder was not necessary to prevent nuclear war - the Cold War ended without large-scale direct conflict between the USA and the USSR. This seems to point the way to (some) clarification.
I find my intuitions change when the risk seems higher. While I'm unsure that murder is the right answer in the examples given above, it seems clearer in a situation where the disaster is in the midst of occurring, and murder or mass murder is the only way to prevent an existential disaster. The hypothetical that works for me is imagining some incredibly virulent disease or 'grey-goo' nano-replicator that has swept over Australia and is about to spread, and the only way to stop it is a nuclear strike.
One possibility is that my having a different intuition is simply because the situation is similar to hypotheticals that seem more familiar, such as shooting a hostage-taker or terrorist if that was the only way to prevent loss of innocent life.
But I'd like to suggest that it perhaps reflects a problem with xrisks: the idea of doing something awful for a very uncertain benefit. The problem is the uncertainty. If a (mass) murder would definitely prevent an existential disaster, then one should do it; but when it merely reduces xrisk it is less clear. Perhaps there should be some sort of probability threshold - if one has good reason to think the probability is over a certain limit (10%, 50%, etc.) then one is justified in committing gradually more heinous acts.
IV. Conclusion
In this post I've been trying to explain a troubling worry - to lay out my thinking - more than I have been trying to argue for or against an explicit claim. I have a problem with the claim that xrisk reduction is the most important task for humanity and/or me. On the one hand it seems convincing, yet on the other it seems to lead to some troubling implications - like justifying not focusing on extreme poverty reduction, or justifying (mass) murder.
Comments and criticism of the argument are welcomed. Also, I would be very interested in hearing people's opinions on this topic. Do you think that 'reducing xrisk' can justify murder? At what scale? Perhaps more importantly, does that bother you?
DISCLAIMER: I am in no way encouraging murder. Please do not commit murder.
The Center for Sustainable Nanotechnology
Those concerned about existential risks may be interested to learn that, as of last September, the National Science Foundation is funding a Center for Sustainable Nanotechnology. Though I haven't yet seen anywhere where they explicitly characterize nanotechnology as an existential threat to humanity (they seem mostly to be concerned with the potential hazards of nanoparticle pollution, rather than any kind of grey goo scenario), I was still pleased to discover that this group exists.
Here is how they describe themselves on their main page:
The Center for Sustainable Nanotechnology is a multi-institutional partnership devoted to investigating the fundamental molecular mechanisms by which nanoparticles interact with biological systems.
...
While nanoparticles have a great potential to improve our society, relatively little is yet known about how nanoparticles interact with organisms, and how the unintentional release of nanoparticles from consumer or industrial products might impact the environment.
The goal of the Center for Sustainable Nanotechnology is to develop and utilize a molecular-level understanding of nanomaterial-biological interactions to enable development of sustainable, societally beneficial nanotechnologies. In effect, we aim to understand the molecular-level chemical and physical principles that govern how nanoparticles interact with living systems, in order to provide the scientific foundations that are needed to ensure that continued developments in nanotechnology can take place with the minimal environmental footprint and maximum benefit to society.
...
Funding for the CSN comes from the National Science Foundation Division of Chemistry through the Centers for Chemical Innovation Program.
And on their public outreach website:
Our “center” is actually a group of people who care about our environment and are doing collaborative research to help ensure that our planet will be habitable hundreds of years from now – in other words, that the things we do every day as humans will be sustainable in the long run.
Now you’re probably wondering what that has to do with nanotechnology, right? Well, it turns out that nanoparticles – chunks of materials around 10,000 times smaller than the width of a human hair – may provide new and important solutions to many of the world’s problems. For example, new kinds of nanoparticle-based solar cells are being made that could, in the future, be painted onto the sides of buildings.
...
What’s the (potential) problem? Well, these tiny little chunks of materials are so small that they can move around and do things in ways that we don’t fully understand. For example, really tiny particles could potentially be absorbed through skin. In the environment, nanoparticles might be able to be absorbed into insects or fish that are at the bottom of the food chain for larger animals, including us.
Before nanoparticles get incorporated into consumer products on a large scale, it’s our responsibility to figure out what the downsides could be if nanoparticles were accidentally released into the environment. However, this is a huge challenge because nanoparticles can be made out of different stuff and come in many different sizes, shapes, and even internal structures.
Because there are so many different types of nanoparticles that could be used in the future, it’s not practical to do a lot of testing of each kind. Instead, the people within our center are working to understand what the “rules of behavior” are for nanoparticles in general. If we understand the rules, then we should be able to predict what different types of nanoparticles might do, and we should be able to use this information to design and make new, safer nanoparticles.
In the end, it’s all about people working together, using science to create a better, safer, more sustainable world. We hope you will join us!
Mini advent calendar of Xrisks: Artificial Intelligence
The FHI's mini advent calendar: counting down through the big five existential risks. As people on this list would have suspected, the last one is the most fearsome, should it come to pass: Artificial Intelligence.
And the FHI is starting the AGI-12/AGI-impacts conference tomorrow, on this very subject.
Artificial intelligence
Current understanding: very low
Most worrying aspect: likely to cause total (not partial) human extinction
Humans have trod upon the Moon, number over seven billion, and have created nuclear weapons and a planet-spanning technological economy. We also have the potential to destroy ourselves and entire ecosystems. These achievements have been made possible by the tiny difference in brain size between us and the other great apes; what further achievements could come from an artificial intelligence at or above our own level?
It is very hard to predict when or if such an intelligence could be built, but it would certainly be utterly disruptive if it were. Even a human-level intelligence, trained and copied again and again, could substitute for human labour in most industries, causing (at minimum) mass unemployment. But this disruption is minor compared with the power that an above-human AI could accumulate through technological innovation, social manipulation, or careful planning. Such super-powered entities would be hard to control; they would pursue their own goals and regard humans as an annoying obstacle to overcome. Making them safe would require very careful, bug-free programming, as well as an understanding of how to cast key human concepts (such as love and human rights) into code. All solutions proposed so far have turned out to be very inadequate. Unlike other existential risks, AIs could really “finish the job”: an AI bent on removing humanity would be able to eradicate the last remaining members of our species.
Mini advent calendar of Xrisks: Pandemics
The FHI's mini advent calendar: counting down through the big five existential risks. The fourth one is an ancient risk, still with us today: pandemics and plagues.
Pandemics
Current understanding: high
Most worrying aspect: the past evidence points to a risky future
The death rates from infectious diseases follow a power law with a very low exponent. In layman’s terms: there is a reasonable possibility of a plague with an absolutely huge casualty rate. We’ve had close calls in the past: the Black Death killed around half the population of Europe, while Spanish Influenza infected 27% of all humans and killed one in ten of those infected, mostly healthy young adults. All the ingredients of an ultimately deadly infection already exist in the wild: imagine anything that combined the deadliness and incubation period of AIDS with the transmissibility of the common cold.
Moreover, we know that we are going to be seeing new diseases and new infections in the future: the only question is how deadly they will be. With modern global travel and transport, these diseases will spread far and wide. Against this, we have better communication and better trans-national institutions and cooperation – but these institutions could easily be overwhelmed, and countries aren’t nearly as well prepared as they need to be.
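The "very low exponent" claim above can be made concrete with a couple of lines of arithmetic. In this sketch, the x_min and alpha values are purely illustrative assumptions, not fitted to real epidemiological data; the point is only how differently a heavy power-law tail and a thin tail treat a hundred-fold larger pandemic:

```python
def exceedance(x, x_min=1_000.0, alpha=0.5):
    """P(deaths > x) for a Pareto-type power-law tail: (x_min / x) ** alpha."""
    return (x_min / x) ** alpha

# How much rarer is a pandemic 100x bigger (1e8 deaths vs 1e6 deaths)?
heavy_tail = exceedance(1e8) / exceedance(1e6)                   # alpha = 0.5
thin_tail = exceedance(1e8, alpha=3.0) / exceedance(1e6, alpha=3.0)
# With alpha = 0.5, the 100x-larger event is only 10x rarer; with a thin
# tail (alpha = 3), it is a million times rarer.
```

Under a low exponent, catastrophic outliers are only modestly less likely than the large events we have already seen, which is why the historical record of plagues points to a risky future.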
Mini advent calendar of Xrisks: nanotechnology
The FHI's mini advent calendar: counting down through the big five existential risks. The third one is also a novel risk: nanotechnology.
Nanotechnology
Current understanding: low
Most worrying aspect: the good stuff and the bad stuff are the same thing
The potential of nanotechnology is its ability to completely transform and revolutionise manufacturing and materials. The peril of nanotechnology is its ability to completely transform and revolutionise manufacturing and materials. And it’s hard to separate the two. Nanotech manufacturing promises to be extremely disruptive to existing trade arrangements and to the balance of economic power: small organisations could produce as many goods as whole countries do today, collapsing standard trade relationships and causing sudden unemployment and poverty in places not expecting it.
And in this suddenly unstable world, nanotechnology will also permit the mass production of many new tools of war – from microscopic spy drones to large scale weapons with exotic properties. It will also weaken trust in disarmament agreements, as a completely disarmed country would have the potential to assemble an entire arsenal – say of cruise missiles – in the span of a day or less.
Mini advent calendar of Xrisks: synthetic biology
The FHI's mini advent calendar: counting down through the big five existential risks. The second one is a new, exciting risk: synthetic biology.
Synthetic biology
Current understanding: medium-low
Most worrying aspect: hackers experimenting with our basic biology
Synthetic biology covers many inter-related fields, all concerned with the construction and control of new biological systems. This area has already attracted the attention of bio-hackers, experimenting with DNA and other biological systems to perform novel tasks – and gaining kudos for exotic accomplishments. The biosphere is filled with many organisms accomplishing specific tasks; combining these and controlling them could allow the construction of extremely deadly bioweapons, targeted very narrowly (at all those possessing a certain gene, for instance). Virulent viruses with long incubation periods could be constructed, or common human bacteria could be hacked to perform a variety of roles in the body. And humans are not the only potential targets: whole swaths of the ecosystem could be taken down, either to gain commercial or economic advantages, for terrorist purposes, or simply by accident.
Moreover, the medical miracles promised by synthetic biology are not easily separated from the danger: the targeted control needed to, for instance, kill cancer cells, could also be used to target brain cells or the immune system. This would not be so frightening if the field implemented safety measures commensurate with the risks; but synthetic biology has been extremely lax in its precautions and culturally resistant to regulations.
Mini advent calendar of Xrisks: nuclear war
The FHI's mini advent calendar: counting down through the big five existential risks. The first one is an old favourite, forgotten but not gone: nuclear war.
Nuclear War
Current understanding: medium-high
Most worrying aspect: the missiles and bombs are already out there
It was a great fear during the fifties and sixties; but the weapons that could destroy our species lie dormant, not destroyed.
Nuclear weapons still remain the easiest method for our species to destroy itself. Recent modelling has confirmed the old idea of nuclear winter: soot rising from burning human cities destroyed by nuclear weapons could envelop the world in a dark cloud, disrupting agriculture and the food supplies, and causing mass starvation and death far beyond the areas directly hit. And a creeping proliferation has spread these weapons to smaller states in unstable areas of the world, increasing the probability that nuclear weapons could get used, leading to potential escalation. The risks are not new, and several times (the Cuban missile crisis, the Petrov incident) our species has been saved from annihilation by the slimmest of margins. And yet the risk seems to have slipped off the radar for many governments: emergency food and fuel reserves are diminishing, and we have few “refuges” designed to ensure that the human species could endure a major nuclear conflict.
Any existential risk angles to the US presidential election?
Don't let your minds be killed, but I was wondering if there were any existential risk angles to the coming American election (if there isn't, then I'll simply retreat to raw, enjoyable and empty tribalism).
I can see three (quite tenuous) angles:
- Obama seems more likely to attempt to get some sort of global warming agreement. While not directly related to Xrisks per se, this would lead to better global coordination and agreement, which improves the outlook for a lot of other Xrisks. However, pretty unlikely to succeed.
- I have a mental image that Republicans would be more likely to invest in space exploration. This is largely due to Newt Gingrich, I have to admit, and to the closeness between civilian and military space projects, the latter of which are more likely to get boosts under Republican governments.
- If we are holding out for increased population rationality as a helping factor for some Xrisks, then the fact that the Republicans have gone so strongly anti-science is certainly a bad sign. But on the other hand, it's not clear whether their winning or losing the election is more likely to improve the general environment for science among their supporters.
But these all seem weak factors. So, Less Wrongers, let me know: are there things I should care about in this election, or can I just lie back and enjoy it as a piece of interesting theatre?
[LINK] Nuclear winter: a reminder
Just a reminder that some of the old threats are still around (and hence that AI is not only something that can go hideously badly, but also some thing that could help us with the other existential risks as well):
EDIT: as should have been made clear in that post (but wasn't!), the existential risk doesn't come from a full-fledged nuclear winter directly, but from the collapse of human society and the fragmentation of the species into small, vulnerable subgroups, with no guarantee that they'd survive or ever climb back to a technological society.