Due to my colleague, Anders Sandberg:

 


Can you expand on what dynamical laws and deterministic dynamics mean? Also, how does one check whether there is a research community? My understanding is that a fair number of geologists study Yellowstone, for example, yet supervolcanoes have a "no" next to them. Also, what does "obsolete" mean in the third column?

You may be correct about supervolcanoes; I'm not sure about that.

Obsolete probably means that all data we have on past global computer failures is already out of date as far as predicting future ones is concerned.

And deterministic dynamics?

Do they obey known deterministic laws?

I really don't understand the row for climate change. What exactly is meant by "inference" in the data column? I don't know what you want to count as data, but it seems to me that the data with respect to climate change include: increasingly good direct measurements of temperature and greenhouse gas concentrations over the last hundred years or so; whatever goes into the basis of relevant physical and chemical theories (like theories of heat transfer, cloud formation, solar dynamics, and so forth); and measurements of proxies for temperature and greenhouse gas concentrations in the distant past (maybe this last is what "inference" is supposed to mean?).

I also don't understand the "?" under probability distribution. Are the probability distributions at stake here distributions over credences? If so, then they can be estimated for most any scientist, at least.

Are the distributions over frequencies? Then frequencies of what? I suspect we could estimate distributions for lots of climate-related things, like severe storms or droughts or record high temperatures. I would be somewhat surprised if such distributions have not already been estimated by climate scientists.

Is the issue about calibration? Then the answer seems to be a qualified yes. Groups like the IPCC give probabilistic statements based on their climate models. The climate models could be checked at least on past predictions, e.g. by looking at what the models from 2000 predicted for the period 2001-2011. We might not get a very good sense of how well calibrated the models are, but if the average temperature for each month, say, is a separate datum, then we could check the models by seeing how many of the months fall into the claimed 95% confidence bands, for example. (And just putting down confidence bands in the models should tell you that the climate scientists think that the distribution can be estimated, for some sense of probability.)
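The calibration check described above is easy to make concrete. Here is a minimal sketch, assuming the observed monthly averages and the model's claimed 95% bands are available as arrays; all the numbers below are invented placeholders, not real climate data or real model output:

```python
import numpy as np

def empirical_coverage(observed, lower, upper):
    """Fraction of observations that fall inside the claimed band.

    For a well-calibrated 95% band this should be near 0.95, though
    correlation between adjacent months shrinks the effective sample size.
    """
    observed, lower, upper = map(np.asarray, (observed, lower, upper))
    inside = (observed >= lower) & (observed <= upper)
    return inside.mean()

# Placeholder stand-ins for 132 monthly averages (2001-2011) and the
# 95% bands a hypothetical model from 2000 claimed for those months.
rng = np.random.default_rng(42)
predicted = 14.0 + 0.002 * np.arange(132)          # toy warming trend
lower, upper = predicted - 0.5, predicted + 0.5    # toy 95% band
observed = predicted + rng.normal(0.0, 0.25, 132)  # toy observations

print(f"empirical coverage: {empirical_coverage(observed, lower, upper):.2%}")
```

One caveat: adjacent months are autocorrelated, so 132 months is fewer than 132 independent data points, and the coverage estimate is noisier than the raw count suggests.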

> I also don't understand the "?" under probability distribution

The uncertainties within the models are swamped by uncertainties outside the models, i.e. whether feedbacks are properly accounted for or not.

I agree that "inference" on its own is very odd. I would have put "inference and observations (delayed feedback)".

That's an interesting point. How precise do you think we have to be with respect to feedbacks in the climate system if we are interested in an existential risk question? And do you have other uncertainties in mind or just uncertainties about feedbacks?

The first thing I thought on reading your reply was that insofar as the evidence supports positive feedbacks, the evidence also supports the claim that there is existential risk from climate change. But then I thought maybe we need to know more about how far away the next equilibrium is -- assuming there is one. If we are in or might reach a region where temperature feedback is net positive and we run away to a new equilibrium, how far away will the equilibrium be? Is that the sort of uncertainty you had in mind?

Good work!

Perhaps you could put this into a Google Doc so that readers can comment on each of the cells.

Why does the table indicate that we haven't observed pandemics the same way we've observed wars, famines, and earth impactors?

We can observe past pandemics and past meteor impacts. But we can also observe current and future meteors, predict their trajectories, and see if they're going to be a threat. We can't really do this with pandemics.

That is, with meteors we can use past events and present observations to predict the future; for pandemics, we can (to a large extent) only use past events, as in the sketch below.
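A minimal sketch of what a purely past-events forecast looks like, assuming a simple Poisson model; the event count and horizon below are invented placeholders, not real epidemiological estimates:

```python
import math

# Frequency-only forecast: estimate an event rate from historical counts
# alone, with no present-day observations to refine it.
events_observed = 4    # placeholder: severe pandemics seen in the record
years_observed = 100   # placeholder: length of that record, in years

rate = events_observed / years_observed  # Poisson rate estimate (per year)

horizon = 50           # years ahead we want to forecast
# Under a Poisson model, P(at least one event) = 1 - exp(-rate * horizon).
p_at_least_one = 1 - math.exp(-rate * horizon)
print(f"P(>=1 event in {horizon} years) ~= {p_at_least_one:.1%}")
```

With meteors, by contrast, present observations (the orbits of known objects) let us replace this kind of base rate with near-deterministic trajectory predictions.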

Why does superintelligence require global coordination? Apparently all one needs to do is to develop an FAI, and the rest will take care of itself.

E.g. AI regulation (like most technology regulation) is only effective if you get the whole world on board, and without global coordination there's the potential for arms races.

"Only develop an FAI" also presumes a hard takeoff, and it's not exactly established beyond all doubt that we'll have one.

Preventing UFAI or dealing safely with Oracles or using reduced impact AIs requires global coordination. Only the "FAI in a basement" approach doesn't.

Because FAI is a hard problem. If it were easy then we would not still be paying people $70 trillion per year worldwide to do work that machines aren't smart enough to do yet.

> Because FAI is a hard problem.

Almost all of these are hard problems. That seems insufficient.


Is there an updated version of this table?

How are "global computer failures" an existential risk? Sure, it would suck, but it wouldn't be the end of the world.

And what are "physics threats"?

I would also like to see a column with strategies for mitigating the threat, beyond "requires global coordination". For example, the solution against bioweapons would be regulation and maybe countermeasure research, while against supernovae there isn't much we can do.

> How are "global computer failures" an existential risk? Sure, it would suck, but it wouldn't be the end of the world.

Global trade depends on computers these days, and the human population depends on global trade to get food, medicine, building materials, technology parts, etc. Even if a global computer failure would not instantly kill every human, it could stall or stop humanity's expansion.

> And what are "physics threats"?

Vacuum metastability event, for instance?

I can see a global computer catastrophe rising to the level of a civilization-ending event with a 90-99% fatality rate, if I squint hard enough. I could see the fatality rate being even higher if it happens farther in the future. But I'm having trouble seeing it as an existential risk that literally kills enough people that there is no viable population remaining anywhere. Even in the case of a computer catastrophe as a malicious event, I'm having trouble envisioning an existential risk that doesn't also include one of the other options.

Are there papers that make the case for computer catastrophe as X-risk?

Rather than considering it in terms of fatality rate, consider it in terms of curtailing humanity's possible expansion into the universe. The Industrial Revolution was possible because of abundant coal, and the 20th century's expansion of technology was possible because of petroleum. The easy-access coal and oil are used up; the resources being used today would not be accessible to a preindustrial or newly industrial civilization. So if our civilization falls and humanity reverts to preindustrial conditions, it stays there.