I'm curious who is the target audience for this scale...
People who have an interest in global risks will find it simplistic--normally I would think of the use of a color scale as aimed at the general public, but in this case it may be too simple even for the curious layman. The second picture you linked, on the other hand, seems like a much more useful way to categorize risks (two dimensions, severity vs urgency).
I think this scale may have some use in trying to communicate to policy makers who are unfamiliar with the landscape of GCRs, and in particular to try to get them to focus on the red and orange risks that currently get little interest. But where is the platform for that communication to happen? It seems like currently the key conversations would be happening at a more technical level, in DoD, DHS, or FEMA. A focus on interventions would be helpful there. I couldn't get the whole paper, but from what you wrote above it sounds like you have some interesting ideas about ranking risks based on a combination of probability and possible interventions. If that could be formalized, I think it would make the whole idea a lot stronger. Like you say, people are reasonably skeptical about probabilities (even if they're just an order of magnitude), but if you can show that the severity of the risk isn't very sensitive to probability, maybe it would help to overcome that obstacle.
You can download the preprint here: https://philpapers.org/rec/TURGCA
It has a section on who could use the scale: for communication to the public, to policy-makers, and between researchers of different risks. We still don't have a global platform for communication about global catastrophic and existential risks, but I think that something like a "Global Risk Prevention" committee inside the UN will eventually be created, which will work on global coordination of risk prevention. The committee would use the scale and other instruments the same way other organisations use their 5-10 level scales, including DEFCON, the hurricane scale, the asteroid scale, the VEI (volcanic explosivity index), etc.
You can definitely add pictures. Just use markdown syntax, or select a piece of text and click the image button in the hover menu.
For comparison, the Torino Scale uses a whole-number scale from 0 to 10. A 0 indicates that the chance of impact is no different from the background level, while a 10 indicates a certain collision with an object large enough to cause a global catastrophe.
We (Alexey Turchin and David Denkenberger) have a new paper out where we suggest a scale to communicate the size of global catastrophic and existential risks.
For impact risks, we have the Torino scale of asteroid danger, which has five color-coded levels. For hurricanes, we have the Saffir-Simpson scale with five categories. Here we present a similar scale for communicating the size of global catastrophic and existential risks.
Typically, some vague claims about probability are used as a communication tool for existential risks; for example, some may say, "There is a 10 per cent chance that humanity will be exterminated by nuclear war." But the probability of the most serious global risks is difficult to measure, and a probability estimate doesn't take into account other aspects of risks, for example, preventability, uncertainty, timing, and relation to other risks. As a result, claims about probability could be misleading or produce reasonable skepticism.
To escape these difficulties, we suggested creating a scale to communicate existential risks, similar to the Torino scale of asteroid danger.
In our scale, there are six color codes, from white to purple. If hard probabilities are known, the color corresponds to probability intervals for a fixed timeframe of 100 years, which helps to solve uncertainty and timing.
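The probability branch of the scale can be sketched as a simple lookup. The interval boundaries below are illustrative placeholders, not the actual thresholds from the paper:

```python
# Sketch of the scale's probability branch: map a 100-year
# probability estimate to one of the six color codes.
# NOTE: these interval boundaries are illustrative assumptions,
# NOT the actual values given in the paper.
COLOR_THRESHOLDS = [  # (minimum probability over 100 years, color)
    (0.5, "purple"),
    (0.1, "red"),
    (0.01, "orange"),
    (0.001, "yellow"),
    (0.0001, "green"),
]

def risk_color(probability_100y):
    """Return the color code for a given 100-year probability."""
    for threshold, color in COLOR_THRESHOLDS:
        if probability_100y >= threshold:
            return color
    return "white"  # below all thresholds: negligible risk
```

Fixing the timeframe at 100 years is what lets a single number stand in for both probability and timing; without it, "a 1% risk" is ambiguous between next year and next millennium.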
However, for the most serious risks, like AI, the probabilities are not known, but the required levels of prevention action are known. For these cases, the scale communicates the risk's size through the required level of prevention action. In some sense, this is similar to Updateless Decision Theory, where an event's significance is measured not by observable probabilities but by the utility of the corresponding actions. The system would work because, in many cases of x-risks, the required prevention actions are not very sensitive to the probability.
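The action-based branch can be sketched the same way: instead of a probability, the input is the level of prevention activity the risk demands. The level names below are illustrative assumptions, not the paper's exact wording:

```python
# Sketch of the scale's second branch: when probabilities are
# unknown, the color follows the required level of prevention
# action. The level names here are illustrative assumptions,
# not the paper's exact categories.
PREVENTION_LEVELS = {
    "monitoring only": "green",
    "targeted research": "yellow",
    "international cooperation": "orange",
    "urgent global action": "red",
    "all possible measures": "purple",
}

def color_from_prevention(level):
    """Return the color code for a required prevention level."""
    return PREVENTION_LEVELS.get(level, "white")
```

The point of this branch is that experts can often agree on what should be done about a risk even when they disagree by orders of magnitude about its probability.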
How should the scale be implemented in practice? If probabilities are not known, a group of experts should aggregate the available information and communicate it to the public and policymakers, saying something like: "We think that AI is a red risk, a pandemic is a yellow risk, and asteroid danger is a green risk." It would help to bring some order to the public perception of each risk—where, currently, asteroid danger is clearly overestimated compared to the risk from AI—without making unsustainable claims about unmeasurable probabilities.
In the article we have already given some estimates for the most well-known existential risks, but clearly they are open to debate.
Here's the abstract:
Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, inspired by the Torino scale of asteroid danger, we suggest a color-coded scale to communicate the magnitude of global catastrophic and existential risks. The scale is based on the probability intervals of risks in the next century if they are available. The risks’ estimations could be adjusted based on their severities and other factors. The scale covers not only existential risks, but smaller size global catastrophic risks. It consists of six color levels, which correspond to previously suggested levels of prevention activity. We estimate artificial intelligence risks as “red”, while “orange” risks include nanotechnology, synthetic biology, full-scale nuclear war and a large global agricultural shortfall (caused by regional nuclear war, coincident extreme weather, etc.) The risks of natural pandemic, supervolcanic eruption and global warming are marked as “yellow” and the danger from asteroids is “green”.
The paper is published in Futures https://www.sciencedirect.com/science/article/pii/S001632871730112X
If you want to read the full paper, here's a link to the preprint: https://philpapers.org/rec/TURGCA
Two main pictures from the paper (I think that LesserWrong still does not allow embedding pictures):
http://immortality-roadmap.com/xriskstab1.jpg
http://immortality-roadmap.com/xriskstab2.jpg