There is an important variable (or cluster of correlated variables) that I need a better name for. I would also appreciate feedback on whether this variable is even a thing and, if so, on how I should characterize it. So far I have two candidate names and two attempts at explaining it.
Name 1: "Heavy-tailedness of the world."
Name 2: "Great Man Theory vs. Psychohistory."
Attempt 1: Sometimes history hinges on the deliberate actions of small groups, or even individuals; other times the course of history cannot be altered by anything any small group might do. Relatedly, sometimes the potential impact of an individual or group follows a heavy-tailed distribution, and other times it doesn't. Various features of a world could make it heavier-tailed in this sense; the sketch below shows what the statistical version of the claim looks like.
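Here is a minimal sketch of that statistical claim (Python with NumPy; both distributions and all parameters are arbitrary illustrations, not estimates of anything). It compares how much of the total "impact" the top 1% of individuals account for in a thin-tailed world versus a heavy-tailed one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# "Psychohistory world": individual impact clusters tightly around a typical value.
gaussian_impact = np.clip(rng.normal(100, 15, n), 0, None)  # clipped so impact is nonnegative

# "Great Man world": individual impact has a heavy right tail; a few draws dominate.
alpha = 1.5  # smaller alpha = heavier tail
pareto_impact = (rng.pareto(alpha, n) + 1) * 100  # scaled so minimum impact is ~100

def top_share(x, frac=0.01):
    """Fraction of total impact contributed by the top `frac` of individuals."""
    k = int(len(x) * frac)
    return np.sort(x)[-k:].sum() / x.sum()

print(f"Gaussian world: top 1% contribute {top_share(gaussian_impact):.1%} of total impact")
print(f"Pareto world:   top 1% contribute {top_share(pareto_impact):.1%} of total impact")
```

With these numbers, the Gaussian world's top 1% contribute roughly 1-2% of total impact, while the Pareto world's top 1% typically contribute tens of percent, with large run-to-run swings, because single extreme draws can dominate the sum.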
Attempt 2: Consider these three fictional worlds. I claim they form a spectrum, and that it's important for us to figure out where our world falls on it:
World One: How well the future goes depends on how effectively world governments regulate advanced AI. The best plan is to contribute to the respected academic literature on what sorts of regulations would be helpful, while also doing activism and lobbying to get world governments to pay attention to that literature.
World Two: How well the future goes depends on whether the first corporation to build AGI follows proper safety protocols during the first week or so after building it. Which safety protocols work is a hard problem that requires unusually smart people working for years to solve. Figuring out which corporation will build AGI, and when, is a complex forecasting task that can be done, but only by the right sort of person; likewise for the task of convincing that corporation to follow the protocols. The best plan is to try to assemble the right community of people so that all of these tasks get done.
World Three: How well the future goes depends on whether AGI of architecture A or B is built first. By default, AGI-A will be built first. The best plan involves assembling a team of unusually rational geniuses, founding a startup that makes billions of dollars, fleeing to New Zealand before World War Three erupts, and using the money and geniuses to invent and build AGI-B in secret.
Help?
For some other discussion of (facets of) this variable and its implications, see this talk and this post.
Replies:

Aberrancy vs. normalcy, maybe?

I feel like this is related to the distinction between domains with efficient markets and domains with lots of thousand-dollar bills lying around waiting to be picked up.

I haven't read Taleb; I guess I should. I think "power law world" has the same problems that "heavy-tailedness" has as a name: it seems too specific, and the connection between statistical distributions and the properties I'm gesturing at seems too hand-wavy. But maybe it only seems hand-wavy to me because I haven't read his explanation.

You really should read Taleb; you can probably start with The Black Swan. His terms for these are "Mediocristan," for domains described by Gaussian distributions, and "Extremistan," for domains described by power laws.

OK. But isn't "power law" too specific? There are other distributions with heavy tails, e.g. the log-normal.
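To put rough numbers on that last point (a quick sketch, assuming SciPy is available; the parameters are arbitrary and purely illustrative), compare the probability of drawing a value above 10 from three distributions on a similar scale:

```python
from scipy import stats

# Tail probability P(X > 10) under three distributions on a similar scale
# (parameters chosen arbitrarily, just for illustration):
dists = {
    "normal":     stats.norm(loc=1, scale=0.5),  # median 1, thin Gaussian tail
    "log-normal": stats.lognorm(s=1.0),          # median 1, heavy tail but not a power law
    "pareto":     stats.pareto(b=1.5),           # median ~1.6, power-law tail ~ x^(-1.5)
}
for name, dist in dists.items():
    print(f"P(X > 10) under {name:>10}: {dist.sf(10):.2e}")
```

The log-normal's tail probability comes out within a factor of a few of the power law's, and dozens of orders of magnitude above the normal's; "heavy-tailed" really does cover more ground than "power law."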