Trying to predict the future is hazardous, not only because of the conjunction fallacy, but because there are so many factors involved. Even if you're careful to avoid the mistake of thinking that A, B, and C all happening is more likely than just A and C, it's not easy to estimate what will happen when you've got factors A through Z involved, and complicated chains of relationships like "if D and E, but not F, and G is stronger than expected, then H". Misjudging the likelihood of a factor, misunderstanding a relationship, or omitting factors or relationships can make an apparently solid set of predictions completely worthless.
That's not to say that it's impossible, of course. If you restrict yourself to asking what will happen if we push on a physical system, and throw the world's best scientists at the problem for decades with millions of dollars and powerful supercomputers at their disposal, then they can actually model what inputs will cause what outputs with probabilities attached. The existence of anthropogenic global warming is a fact, just as the existence of evolution and atoms is a fact, and it's clear that all else being equal (note 1), basically everyone would prefer for the Earth's climate to be original recipe instead of extra crispy. The problem is, we don't know what the inputs will be, and it's trying to guess what the inputs will be that's especially hazardous. It's a lot harder to model what one Congress will do, let alone many. (Modeling the mind-killer is a headache.)
When faced with this sort of problem, I find it useful to instead think about possible end states, which are typically easier to envision and enumerate, and ask how likely it is we'll end up there, through any path whatsoever. And Stein's Law is usually helpful: "If something cannot go on forever, it will stop."
Anthropogenic global warming can't go on forever, so it'll stop. How will it stop? I can think of several ways - this is a non-exhaustive list:
We discover a novel feedback loop, Earth's atmosphere becomes like Venus's, and everyone dies. It's pretty clear that this isn't a possibility, we hope, although it should be considered before being rejected (note 2).
We collectively come to our senses, and do all the right things right now to keep the problem from getting any worse, and to fix as much as physically possible of the damage that's already been done. It's also pretty clear that this will not happen.
A technological breakthrough substantially solves the problem for us. For example, we solve a bunch of engineering problems, and leapfrog from ITER to cheap and plentiful commercial nuclear fusion in just a couple of decades, without having thrown 100 billion dollars at the problem (as that would be shading into the "come to our senses" scenario). The probability of this one is hard to judge - we sometimes get stuck on problems for a long time before eventually solving them - but hope is not really a plan.
The nasty consequences of global warming keep getting worse and worse, until advanced civilizations are wrecked back into more primitive states, where they're unable to keep dumping carbon into the atmosphere. It looks to me like this one is unlikely too - advanced nations will be able to cope at significant cost. It's just poor nations that are boned.
We run out of coal (note 3), oil, and natural gas to burn. They're finite, so this is guaranteed to happen - the question is whether it happens before anything else. A more precise question is when our rates of production will stop increasing - combined with inelastic demand, that will cause significant price increases that force us to consider previously more expensive (or ionizing), but non-carbon-emitting, sources of energy (a rough sketch of the price mechanics follows this list). This is the scenario that I judge as most likely. Unfortunately, it looks like the result will be extra crispy at a minimum.
Something else - increasing food/water/resource scarcity leads to increasing conflict, and eventually to global thermonuclear war - we know that one is perfectly capable of wrecking technological civilization. Hopefully unlikely (there's that word again).
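To make the inelastic-demand point in the fossil-fuel scenario concrete, here is a minimal sketch (Python, with purely illustrative elasticity and shortfall values - these are assumptions, not estimates) of why a small supply shortfall forces a large price rise when demand barely responds to price:

```python
# Toy constant-elasticity demand model: quantity ~ price ** elasticity.
# With inelastic demand (elasticity near zero), restoring balance after
# a supply shortfall requires a disproportionately large price increase.

def price_multiplier(shortfall: float, elasticity: float) -> float:
    """Price multiple needed for demand to fall by `shortfall` (0.05 = 5%)."""
    return (1.0 - shortfall) ** (1.0 / elasticity)

for elasticity in (-0.1, -0.3, -1.0):  # illustrative values only
    mult = price_multiplier(0.05, elasticity)
    print(f"elasticity {elasticity:+.1f}: 5% supply shortfall -> price x{mult:.2f}")
```

With an elasticity of -0.1, a 5% shortfall implies roughly a 67% price rise - the kind of jolt that makes previously uneconomic energy sources look attractive.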
My conclusion is that because many people are already working on both sides of this issue, this community's time would be better spent elsewhere.
Note 1: The "all else being equal" part is key. The ultimate problem isn't that some people want to seriously modify the Earth's climate in and of itself, or for the lulz, or because they're supervillains. It's because fucking money is at stake, and like Mafia bosses in movies, people want their fucking money and they want it now. This wouldn't even be a problem, except that carbon is an unpriced negative externality.
Note 2: Just as igniting the Earth's atmosphere was considered and rejected before the Trinity test. Note that many popular accounts of how this possibility was considered are completely wrong. The worry was never that a nuclear weapon could ignite a global chemical fire in the atmosphere - it was that it could ignite a global nuclear fire. (Follow Wikipedia's citation.) Fortunately for us, the physics don't work out that way.
Note 3: Fucking coal.
The nasty consequences of global warming keep getting worse and worse, until advanced civilizations are wrecked back into more primitive states, where they're unable to keep dumping carbon into the atmosphere. It looks to me like this one is unlikely too - advanced nations will be able to cope at significant cost. It's just poor nations that are boned.
One thing I noticed about predictions of nasty consequences of global warming is that they're always about 5 to 10 years from the present, with the date always being updated. See here for a discussion of a recent example.
It all started back in October 2005 when the U.N. flatly stated, “by 2010 the world will need to cope with as many as 50 million people escaping the effects of creeping environmental deterioration.” They forecast “this new category of ‘refugee.’” In 2008 Srgjan Kerim, president of the U.N. General Assembly, upped the doomsday prediction, saying there would be “between 50 million and 200 million environmental migrants by 2010.” Environmental activist Norman Myers, a professor at Oxford University, predicted that climate change could force up to 200 million people to become climate refugees.
The U.N. specifically identified Pacific and Caribbean populations that would be ravaged by climate change. Gavin Atkins, writing for Asiancorrespondent.com, reports that “a very cursory look at the first available evidence seems to show that the places identified by the UNEP as most at risk of having climate refugees are not only not losing people, they are actually among the fastest growing regions in the world.” Atkins reports that all of China’s “threatened” cities - Shenzhen, Dongguan, Foshan, Zhuhai, Puning and Jinjiang - are among the fastest growing cities in the world.
Atkins also looks at other endangered locations: the Bahamas, St. Lucia, the Seychelles and the Solomon Islands. None have refugees and all have enjoyed healthy population growth.
This is by no means the only example of a global warming doomsday prediction failing to come true and being quietly forgotten.
I'm surprised to find statements here such as "the existence of anthropogenic global warming is a fact". I'm new here, haven't read all the sequences, and this may seem obnoxious. But I'm testing my beliefs and willing to change my mind.
Let's start with the article linked to by the OP. It says that 4 degrees of warming is likely and bad. I'll concentrate on likely.
The argument for anthropogenic global warming goes something like this:

1. CO2 is a greenhouse gas.
2. Human activity is adding large amounts of CO2 to the atmosphere.
3. Measured temperatures show that the planet is warming.
4. Feedback effects amplify (or damp) the direct warming from CO2.
5. Models built on the above predict significant further warming.

1 and 2 are uncontroversial.
3 is difficult to measure. For example, land-based thermometers are only accurate to, say, 1°C, and we are extracting a signal that varies by tenths of a degree. Or we are using proxies like tree rings that are difficult or impossible to calibrate. The signal we do extract is not linear, to say the least. For example, warming stops at times even as CO2 increases, so we know there are large variations not accounted for by CO2, which makes it hard to determine the influence of CO2 alone. The signal looks very different depending on what timescale one looks at, hence arguments about natural variation, decadal oscillations, the medieval warm period and so on.
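A minimal illustration (Python, with made-up numbers) of the measurement issue raised here: averaging many stations shrinks independent random instrument error by roughly 1/sqrt(N), which is how a tenths-of-a-degree signal can be pulled out of 1°C-accurate thermometers - but a shared systematic bias, like an uncalibratable proxy, does not average away.

```python
import numpy as np

rng = np.random.default_rng(0)

true_anomaly = 0.1       # degrees C: the signal we want to recover
instrument_sigma = 1.0   # degrees C: per-thermometer random error
shared_bias = 0.3        # degrees C: a systematic offset common to all stations
n_stations = 5000

# Independent random error averages away across stations.
readings = true_anomaly + rng.normal(0.0, instrument_sigma, n_stations)
print(f"mean of noisy readings: {readings.mean():+.3f} "
      f"(standard error ~ {instrument_sigma / np.sqrt(n_stations):.3f})")

# A bias shared by all stations survives any amount of averaging.
print(f"mean with shared bias:  {(readings + shared_bias).mean():+.3f}")
```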
The feedback effects in (4) are not well understood, which means that the models in (5) do not necessarily reflect how the real climate system works.
And if a model does not include important parts of the system, even if it correctly predicts past events, that does not mean it can predict future events. I could build pseudo-random-number generators until I find one that happens to match the observed past temperature signal closely, but it will not predict future temperatures.
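That selection effect is easy to demonstrate. A toy sketch in Python (a random walk stands in for real temperature data): pick, out of thousands of seeded random series, the one that best matches the first half of the "observed" record, and watch it fail on the second half.

```python
import numpy as np

# An "observed" series: a random walk standing in for real temperature data.
# Its seed is deliberately outside the search range below.
observed = np.cumsum(np.random.default_rng(10**6).normal(0.0, 0.1, 200))
past, future = observed[:100], observed[100:]

# Search many pseudo-random generators for the one that best fits the past.
best_seed, best_err = None, float("inf")
for seed in range(10000):
    candidate = np.cumsum(np.random.default_rng(seed).normal(0.0, 0.1, 100))
    err = float(np.mean((candidate - past) ** 2))
    if err < best_err:
        best_seed, best_err = seed, err

# Regenerate the winner at full length and test it out of sample.
best = np.cumsum(np.random.default_rng(best_seed).normal(0.0, 0.1, 200))
print(f"in-sample MSE:     {best_err:.4f}")  # looks impressively low
print(f"out-of-sample MSE: {np.mean((best[100:] - future) ** 2):.4f}")  # it isn't
```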
All this does not add up to a high level of certainty that additional CO2 will lead to any particular amount of warming. The sensitivity of climate to a doubling of CO2 is not known with any degree of certainty. In short, climate is not as well understood as evolution and atoms.
The OP speaks of "meaningful climate reform and legislation" which means redirecting lots of resources to change the amount of CO2 emitted. Resources that could generate more utility elsewhere.
So what is going wrong? Possibilities:
run out of coal (note 3), oil, and natural gas to burn. They're finite, so this is guaranteed to happen - the question is whether it happens before anything else
Running out is not likely to happen any time soon at expected usage growth rates. The cost of extracting coal, oil, and natural gas will (possibly) increase over time, but new technologies may depress the price of each (see the current state of natural gas). The estimates of potentially extractable reserves suggest that, if ways are found to extract them, oil and coal will remain in continual use (at present growth rates) for more than another hundred years.
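For anyone who wants to sanity-check reserve-lifetime claims like that, here is the standard exponential-depletion arithmetic as a short Python sketch (the 100-year reserves-to-production ratio is a placeholder, not data):

```python
import math

def years_to_depletion(rp_ratio: float, growth: float) -> float:
    """Years until reserves run out when production grows exponentially.

    rp_ratio is the static reserves-to-production ratio, i.e. "years left
    at current consumption". Derived from P * (e**(g*T) - 1) / g = R.
    """
    if growth == 0:
        return rp_ratio
    return math.log(1.0 + growth * rp_ratio) / growth

# Placeholder: a resource with a 100-year static R/P ratio.
for g in (0.00, 0.02, 0.05):
    print(f"demand growth {g:.0%}: depleted in {years_to_depletion(100, g):5.1f} years")
```

At 2% annual growth, a nominal 100-year static supply is gone in about 55 years, so it matters a great deal whether a quoted lifetime already accounts for growth.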
Further, there are ways of producing oil substitutes that become feasible as oil hits certain price points. As these get implemented at large scale, it is likely that economies of scale and incremental improvements will kick in, so that even though there will still be oil being produced, most ways that we currently use oil will be changed over to the new technologies. Oil just happens to be the lowest-cost alternative currently, but as demand increases and costs of production increase, the other alternatives will be used.
An "existential risk" is an extinction risk. Climate change is not an extinction risk - not for the human race, anyway. It would just mean suffering. Pick any horrible historical event you can think of - plague, war, whatever - as lethal as you like - and clearly it wasn't enough to end the species, because the present generations are here. Things can get incredibly bad without actually killing us off. 6 billion people could die and the other 1 billion would continue the struggle amidst the ruins.
I am convinced that in any case, climate change is too slow to matter, when compared to the development of technology. We are going to have our AI/nanotech crisis long before we even have 2 more degrees, let alone 4. If we survive AI/nanotech, then we can solve the problem, and if we don't survive AI/nanotech, then the future is out of our hands anyway.
I'm aware of what an existential risk is. While I don't think that climate change is likely to destroy mankind on its own, I consider the potential for runaway climate change to provoke massive instability to be truly worrisome on an existential level, especially in the long run.
I hope that you're right with climate change being too slow to matter, but I also think that hope is not exactly a reliable strategy.
It is the way of extinction that what kills the last individual commonly has nothing to do with the underlying factors that doomed the species. Could climate change by itself kill everyone on earth? No. Could it be a significant contributing factor to a downward spiral that ends in extinction? I hope not, but I don't really know. Leaving aside the purely fictional versions of AI and nanotech to which you refer, could real-life versions of those technologies help us develop sustainable energy sources? Yes. Will they do so fast enough? I hope so, but I don't really know.
As for whether there's anything we here can usefully do about it, I don't think it would be useful for us to get bogged down in the sort of bickering about politics that all too often goes with this kind of territory, but LW does have a good track record of avoiding that; and perhaps it would be useful for us to explore potential solutions.
We don't need help finding sustainable resources. We already have nuclear power. We just need to convince everyone that nuclear isn't bad.
Nuclear's hardly sustainable long-term though. It's a temporary patch that might help for fifty years or so.
At $130/kg, there's enough for 80 years at current consumption (about 10 years if we use it for all our electricity), but if we're willing to use ore with a tenth as much uranium, there's 300 times as much. There are also ways of using uranium-238, which is about 140 times as abundant. It's still a temporary patch, in the sense that we can't just keep using it until the sun goes out, but it will last long enough for fusion power to become economically feasible.
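The arithmetic behind those figures, using only the numbers quoted in the comment above (take them as illustrative, not authoritative):

```python
# All inputs are the figures quoted above, taken at face value.
years_at_current_use = 80    # conventional reserves at ~$130/kg
electricity_scaleup = 8      # implied by "about 10 years if we use it
                             # for all our electricity" (80 / 10)
low_grade_multiplier = 300   # ore with a tenth the uranium concentration
u238_multiplier = 140        # using uranium-238 rather than just U-235

base = years_at_current_use / electricity_scaleup
print(f"all-electric, conventional reserves: {base:.0f} years")
print(f"adding low-grade ore:   {base * low_grade_multiplier:,.0f} years")
# Combining both multipliers is an optimistic upper bound, not a given.
print(f"adding U-238 as well:   {base * low_grade_multiplier * u238_multiplier:,.0f} years")
```

Even the middle scenario - thousands of years of all-electric supply - supports the point that fission can bridge to fusion.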
Any ideas about how capable computer programs would need to be to give significant help to researchers with hypothesis generation and with whether research programs make sense? With seeing whether abstracts match experimental results?
It seems to me that some of this could be done without even having full natural language.
As long as we're talking about, as you say, significant help rather than solving the whole problem, and about what can be done without full natural language, then I think this is one of the more promising areas of AI research for the next couple of decades.
I talked a few months ago to somebody who's doing biomedical research - one of the smartest guys I know - asking what AI might be able to do to make his job easier, and his answer was that the one thing likely to be feasible in the near future that would really help would be better text mining, something that could do better than just keyword matching for e.g. flagging papers likely to be relevant to a particular problem.
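For a sense of how low the "better than keyword matching" bar is: even a plain TF-IDF ranking over abstracts captures term weighting and partial matches that raw keyword search misses. A sketch using scikit-learn (the abstracts and query are invented placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented abstracts standing in for a real literature database.
abstracts = [
    "Kinase inhibition alters tumor growth in murine models.",
    "A survey of deep learning methods for protein structure prediction.",
    "Tumor suppressor genes and kinase signaling pathways in cancer.",
    "Crop yields under drought stress: a ten-year field study.",
]
query = "kinase signaling in tumor biology"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(abstracts)
query_vector = vectorizer.transform([query])

# Rank abstracts by cosine similarity to the query.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```

Anything that would really help researchers, like flagging mismatches between abstracts and results, is much harder; relevance ranking of this sort is the easy end of the problem.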
I'm graduating with a major in Environmental Science this week, and my take on it is that realistically, we're probably going to face large scale environmental disaster, which social and legislative action will be considerably too little, too late to prevent, but which sophisticated technological interventions may significantly mitigate, and which will probably constitute a fairly major mass extinction event. The impacts on human society will be significant, but it's not a major existential risk for us, per se.
James Hansen argues there's a risk of a "Venus syndrome" in which runaway climate change makes the planet hostile to all life, which certainly puts it in the existential risk column. However, I think this is markedly less likely than UFAI doing us all in, and even if it wasn't, there are so many people working on global warming that the marginal contribution I can make is much less.
It's something I'm increasingly worried about recently.
Some articles I read recently that seem to relate:
http://www.monbiot.com/2011/05/05/our-crushing-dilemmas/
http://thearchdruidreport.blogspot.com/2011/05/downside-of-dependence.html
My basic worries:
What does the community here think when it comes to climate change as a potential existential risk? [...] the potential consequences of ignoring this issue look to be quite severe indeed!
There is a world of difference between "severe" consequences, even world-changing and civilization-crippling severe, and existential risk. Existential risk means no future, astronomical waste. Even 99% of the population dying doesn't come close to that.
I think climate change, coupled with different war-over-resources type scenarios, implies non-negligible existential risk. The most compelling argument I'm aware of for switching away from environmental science (or military tech/strategy/intelligence, since those also impact the existential risk associated with climate change) is the marginal utility argument - FAI research is so much smaller a field, making the relative contribution of one person so much larger (assuming you believe the law of diminishing returns applies to both AI research and climate change).
This is a big issue; there are a lot of bright young people doing the equivalent of cleaning oil off seabirds because they feel they've got to do something about environmental risk. On the other hand, there's a lot of greens arguing against nuclear power and for growing tomatoes in greenhouses to save on aviation fuel, so maybe there isn't enough brainpower being dedicated to the topic...
Anyway the flipside of marginal utility here is comparative advantage: what skills do you have that can help with AI research?
There's a related claim over at PredictionBook:
By 2100 it’s obvious to the average human that (1) Global warming is true and (2) it’s actually a good thing: Resulting in a better climate than we have had in recorded history. A stable Wet-Sahara/Warm-Siberia/No Rainforest scenario.
Out of four people that have expressed an opinion, I'm currently the only person there against this claim.
Not to be the only one expressing the opinion that it is more than just a "wild card scenario" - but I could construct a scenario where the majority of the population of the earth is exterminated by extreme climate change, i.e. global warming.
There's already billions of dollars per year being spent on reducing carbon impacts, lower-CO2 electrical power generation, etc etc.
My guess would be that, if you wanted to reduce the risks of climate change, the right place to put resources is biotechnology. Bioengineered green plants seem like the most promising way to get CO2 out of the atmosphere (or equivalently, to produce carbon-neutral chemical fuels).
Here again, though, there's already a lot of smart people and billion dollar research budgets devoted to relevant problems. So I don't know if the marginal value of more work on the topic is high.
Climate change is a) happening and b) anthropogenic (there are plenty of natural forces involved but we're the ones with our thumb on the scales). Yes, it's an existential threat to any complex, keystone species that depends on a complex web of life for survival (i.e. us). The various talking points of those who have reasons not to accept this are mostly strawmen, calculated to miss the point or argue against claims nobody is making. The rest are just contradicted by the evidence. All of the talking points are summed up here (https://skepticalscience.com/argument.php) and rigorously debunked, with links to the relevant scientific papers. If anyone can come up with an argument not dealt with there, or solid evidence for why one of them is wrong, I'd be very interested to see it.
Carbon is an unpriced negative externality, and most of its forms cause more problems than just contributing to climate change (e.g. look at the stats for people dying of respiratory conditions in cities with bad fossil fuel pollution). Unfortunately, trying to fix that means taking money away from corporations, who will fight tooth and nail, like the artificial psychopaths they are, to avoid paying to clean up their own mess (see: the story of Bhopal in the documentary The Corporation: https://archive.org/details/The_Corporation_). So while I support a revenue-neutral carbon tax, paid out equally to everyone as a citizen's dividend, I'm not holding my breath.
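For concreteness, the fee-and-dividend arithmetic looks like this (a Python sketch with entirely made-up numbers; the structure is the point, not the figures) - everyone pays in proportion to their carbon footprint and gets an equal share back, so below-average emitters come out ahead:

```python
# Hypothetical inputs for illustration only.
tax_per_tonne = 50.0        # dollars per tonne of CO2
national_emissions = 5.0e9  # tonnes of CO2 per year
population = 330e6          # people

revenue = tax_per_tonne * national_emissions
dividend = revenue / population  # equal per-capita payout: revenue-neutral
print(f"annual dividend per person: ${dividend:,.0f}")

# The net effect depends on your footprint relative to the average.
average_footprint = national_emissions / population
for footprint in (5.0, average_footprint, 30.0):  # tonnes CO2 per year
    net = dividend - tax_per_tonne * footprint
    print(f"footprint {footprint:5.1f} t/yr -> net {net:+,.0f} dollars/year")
```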
I tend to agree with George Monbiot that the most effective things individuals can do are stop eating animal products and stop flying in jet planes. Between them, aviation and animal farming produce a substantial share of greenhouse gas emissions. So growing salad in greenhouses (or just eating produce in season) instead of flying it around the world is sensible. Massively reducing (or ideally abolishing) animal farming, and the massive grain monocrops most of it relies on for feed, and putting most of that land back into market gardens, food forests, and wild reserves, would help a lot too.
asr:
if you wanted to reduce the risks of climate change, the right place to put resources is biotechnology. Bioengineered green plants seems like the most promising way to get CO2 out of the atmosphere (or equivalently, to produce carbon-neutral chemical fuels).
Bioengineering is just as likely to produce plants that soak up less carbon, breed uncontrollably, and spread their transgenic properties to other species of plant. I agree that green plants are definitely the most promising way to sink CO2, since the CO2 we're releasing back into the atmosphere was soaked up by them in the first place. But why not just use natural ones? If for no other reason than that we don't have time to wait for novel ones to be developed.
If you massively increase the amount of land reserved for wild nature, the plants that grow there will soak up heaps of carbon, and the forest floor that builds up under them will soak up anywhere from twice to ten times that amount as it gets denser and more bio-diverse. As well as becoming increasingly better habitat for non-humans and providing more nature immersion opportunities for humans, which have been shown in multiple studies to have both psychological and physical benefits. It's win-win-win.
BenAlbahari:
nuclear isn't bad.
The problem is that nuclear (fission) is bad. The waste problem is totally unsolved. The decommissioning problem is totally unsolved, meaning that the retired reactors we already have need to be kept contained for thousands of years, potentially longer than our civilization will last. Containment itself also remains unsolved, as we saw with Fukushima. Any time we build a new nuclear fission plant anywhere near the coast or a geologically active area, we're basically creating a giant time bomb that can kill and poison people for thousands of generations to come. They will not thank us for this.
Thorium reactors are an improvement, in that they can't produce another Three Mile Island or Chernobyl. But they still produce life-destroying waste and a decommissioned reactor, both of which remain dangerous for thousands of years. They are, at best, a stop-gap solution for countries that have already made the mistake of building nuclear fission plants and need something to feed the waste into, to make it marginally less dangerous.
Then there are the carbon emissions involved in the construction and operation of each plant (thousands of tonnes of concrete, extraction and transportation of the feedstocks, etc.). There is no reason for countries that don't already have a nuclear problem to start making one for themselves now. Especially when there are a multitude of barely explored alternatives.
None of this is to say that there isn't snake oil being pushed out there as "climate change solutions". Of course there's greenwashing going on, and people trying to cash in on the grants and investments that are increasingly being directed into this area. The bioengineering pitch is a good example, as is the nuclear zombie's attempt to become Great Again.
A few heuristics for assessing whether something is really about fixing climate change:
If the answer to all of these questions is "no", it might actually be about trying to address climate change. I'm sure folks here could add to this list and it may be the most useful thing a group of critical thinkers can contribute to the debate right now.
What does the community here think when it comes to climate change as a potential existential risk? While strategies for combating climate change are fairly straightforward, the seeming lack of political capital behind meaningful climate reform and legislation seems to indicate that the problem is going to get substantially worse before it gets better, and the potential consequences of ignoring this issue look to be quite severe indeed!
Should the rationality/x-risks community be spending more effort on evaluating this idea and exploring potential solutions? It certainly seems like a big problem, and the current trajectory is quite worrisome. On the other hand, the issue is a political minefield and could risk entangling the community in political squabbling, potentially jeopardizing its ability to act on other threats. What do you guys think?