The above narratives seem to be extremely focused on a tiny part of narrative-space, and that focus is actually a fairly good representation of what makes LessWrong a memetic tribe. I will try to give some examples of narratives that are... fundamentally different, from the outside view; or weird and stupid, from the inside view. (I'll also try to do some translation between conceptual frameworks.) Some of these narratives you already know: just look around the political spectrum and notice what narratives people live in. There are also some narratives I find better than useless:
This list is by no means comprehensive, but this is taking way too much time, so I'll stop now, lest it should become a babble challenge.
I don't understand your post. Why are memetic tribes relevant to the discussion of potential existential risks, which is the basis of the original post? Is your argument that every community has some shared existential threat that contradicts the existential threats of other communities? It seems to me the point of a rationalist community should be to find the greatest existential threats and focus on finding solutions.
The basis of the original post isn't existential threats, but narratives - ways of organizing the exponential complexity of all the events in the world into a comparatively simple story-like structure.
Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take
Memetic tribes are only tangentially relevant here. I didn't really intend to present any argument, just a set of narratives present in some other communities you probably haven't encountered.
Strongly disagree about the "great filter" point.
Any sane prior on how many alien civilizations we should have expected to see is structured (or at least, much of its structure is) more or less like the Drake equation: a series of terms, each with more or less prior uncertainty around it, that multiply together to give an outcome. Furthermore, that point is to some degree fractal: the terms themselves can often and substantially (though not always and completely) be understood as products of sub-terms.
By the Central Limit Theorem, as the number of such terms and sub-terms increases, this prior approaches a log-normal distribution (the log of a product is a sum of roughly independent terms, and it is that sum the CLT applies to). That is, if you take the inverse (proportional to the amount of work we'd expect to have to do to find the first extraterrestrial civilization), the mean is much higher than the median, dominated by a long upper tail. That point applies not just to the prior, but to the posterior after conditioning on evidence. (In fact, as we come to have less uncertainty about the basic structure of the Drake-type equation (which terms it comprises, even though we may still have substantial uncertainty about the values of those terms), the argument that the posterior must be approximately log-normal only grows stronger than it was for the prior.)
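A minimal Monte Carlo sketch of this point (the number of terms and the uncertainty ranges are made up purely for illustration, not estimates of actual Drake-equation terms):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_terms = 100_000, 7

# Each hypothetical term is uncertain over a few orders of magnitude
# (log10-uniform here; any sufficiently spread-out per-term prior behaves similarly).
log10_terms = rng.uniform(-2.0, 2.0, size=(n_samples, n_terms))

# The log of the product is a sum of independent terms, so the CLT pushes
# the product toward a log-normal distribution.
product = 10.0 ** log10_terms.sum(axis=1)

# The inverse is proportional to the "work" needed to find the first civilization.
inverse = 1.0 / product
print("median:", np.median(inverse))   # ~1
print("mean:  ", np.mean(inverse))     # larger by orders of magnitude: long upper tail
```

The mean/median gap is the whole point: the expected "work to find aliens" under a log-normal is dominated by its tail, so estimates built from means and from medians diverge wildly.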
In this situation, given the substantial initial uncertainty about the value of the terms associated with steps that have already happened, the evidence we can draw from the Great Silence about any steps in the future is very, very weak.
As a statistics PhD with professional experience in Bayesian inference, my confidence in the above is pretty high. That is, I would be willing to bet on this at basically any odds, as long as the potential payoff was high enough to compensate me for the time it would take to do due diligence on the bet (that is, to make sure I wasn't going to get "cider in my ear", as Sky Masterson says). That's not to say that I'd bet strongly against any future "Great Filter"; I'd just bet strongly against the idea that a sufficiently well-informed observer would conclude, post hoc, that the bullet point above about the "great filter" was at all well-justified by the evidence implicitly cited.
True, the typical argument for the great silence implying a late filter is weak, because an early filter is not all that a priori implausible.
However, the OP (Katja Grace) specifically mentioned "anthropic reasoning".
As she previously pointed out, an early filter makes our present existence much less probable than a late filter does. So, given our current experience, we should weight the probability of a late filter much higher than the prior alone, without anthropic considerations, would suggest.
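As a toy version of that anthropic update (numbers made up purely for illustration): under SIA-style reasoning, posterior weight scales with the expected number of observers in our situation, $P(\text{late} \mid \text{us}) : P(\text{early} \mid \text{us}) = P(\text{late})\,N_\text{late} : P(\text{early})\,N_\text{early}$. With a 50/50 prior and a late filter allowing, say, 1000 times as many civilizations to reach our stage, the posterior odds favor a late filter 1000:1.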
Thanks for pointing that out. My arguments above do not apply.
I'm still skeptical. I buy anthropic reasoning as valid in cases where we share an observation across subjects and time (eg, "we live on a planet orbiting a G2V-type star", "we inhabit a universe that appears to run on quantum mechanics"), but not in cases where each observation is unique (eg, "it's the year 2021, and there have been about 107,123,456,789 (plus or minus a lot) people like me ever"). I am far less confident of this than I stated for the arguments above, but I'm still reasonably confident, and my expertise does still apply (I've thought about it more than just what you see here).
This could mean you would also have to reject thirding in the famous Sleeping Beauty problem, which would contradict a straightforward frequentist interpretation of the setup: if the SB experiment were repeated many times, one third of the awakenings would be Monday-Heads, so if SB guessed "the coin came up heads" upon each awakening, she would be right with frequentist probability 1/3.
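A minimal simulation sketch of that frequentist reading (trial count arbitrary):

```python
import random

random.seed(0)
trials = 100_000
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5    # fair coin
    n_awake = 1 if heads else 2      # Heads: Monday only; Tails: Monday and Tuesday
    total_awakenings += n_awake
    if heads:
        heads_awakenings += n_awake  # every Heads awakening is a correct "heads" guess

print(heads_awakenings / total_awakenings)  # ~1/3
```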
Of course there are possible responses to this. My point is just that: rejecting Katja's doomsday argument by rejecting SIA style anthropic reasoning may come with implausible consequences in other areas.
You’re giving us a convenient set of narratives, and asking us to explain our major life decisions in terms of them.
I think a better question is whether any of these narratives, or some combination of them, are the primary conscious reasons for any of our major life decisions. Also, to what degree the narratives as stated accurately and unambiguously match the versions we believe ourselves. Also, which narratives we think are important or irrelevant.
Otherwise, you get a dynamic where the inconvenience of making these distinctions and providing nuance gives the impression that people actually believe this stuff as stated.
You get a few points of “supporting evidence” per respondent, but no “negative evidence”, since you’re not asking for it. It starts to look like every narrative has at least a few smart people taking it really seriously, so we should take them all seriously; as opposed to every theory having the majority of smart people not taking it seriously.
Then of course you’re targeting this question at a forum where we all know there’s a higher proportion of people who DO take these things seriously than in the general population of Smart People, so you’re also cherry-picking.
I don’t know what you’re planning on doing with the responses you get, but I hope you’ll take these issues into consideration.
Not saying I endorse these fully, certainly not to the extent of them being the "whole plot" and making other considerations irrelevant, but I think they both contain enough of a kernel of truth to be worth mentioning:
1) While not quite an existential threat, climate change seems poised to cause death and suffering on a considerable, perhaps unprecedented, scale within this century, and will likely also act as a "badness multiplier", making pre-existing issues like disease, political instability and international conflict worse. Absent technological advances to offset these problems, the destruction of arable land and the increasing scarcity of drinking water will likely increase zero-sum competition and make mutually beneficial cooperation more difficult.
2) More speculatively: due to the interconnectedness of the modern world, our increased technological capabilities, and the sheer speed of technological, cultural and political change, the world is becoming more complex in a way that makes it increasingly hard to understand accurately and act rationally in. The "causal graphs" in which people are embedded are becoming both larger and denser (both upstream and downstream of any one actor), and unpredictable, non-linear interactions between distant nodes more often have outsized effects; black swans are becoming both larger and more common. The central plot is that everybody has lost the plot, and we might not be cognitively equipped to recover it.
Trump remains popular "in spite of it all" because those who despise his supporters refuse to understand why they support him. Why refuse? Equal measures of spite. To better understand, change the word "partner" to "political enemy" in your comment guideline "If you disagree, try getting curious about what your partner is thinking".
You could make that less parochial by rephrasing to something like:
There is a worldwide rise in nationalism and populism and a corresponding rejection of globalism, leading to worse leaders. This rise is poorly understood by elites, which lessens hope that this trend is going away soon.
This rise is poorly understood by elites
The incentives for elites seem to prevent understanding. By (being known as) understanding non-elites, you lose your elite status.
A king is allowed to express a certain amount of empathy with starving peasants, and he still remains king. For political elites, expressing empathy with their opponents would probably be career suicide. (Instead, the proper form of "understanding" your opponents is to attack strawman versions of them.)
The world is controlled by governments, and really awesome governance seems to be scarce and terrible governance common
Or... liberal democracy has spread as other systems have failed. But maybe liberal democracy isn't good enough to count as really awesome.
Though I've posted 3 more-or-less-strong disagreements with this list, I don't want to give the impression that I think it has no merit. Most specifically: I strongly agree that "Institutions could be way better across the board", and I've decided to devote much of my spare cognitive and physical resources to getting a better handle on that question, specifically with regard to democracy and voting.
Maybe something about the collapse of sensemaking and the ability of people to build a shared understanding of what's going on, partly due to rapid changes in communications technology transforming the memetic landscape?
I don't think the universe is obliged to follow any "high level narratives". I'm afraid I don't understand how thinking of events in these terms is helpful.
These narratives are frameworks, or models. There's the famous saying that all models are wrong, but some are useful. Here, the narratives take the complex world and try to simplify it by essentially factoring out "what matters". Insofar as such models are correct or useful, they can aid in decision-making, e.g. for career choice, prioritisation, etc.
Even Less Wrong itself was founded on such a narrative, one developed over many years. Here's EY's current Twitter bio, for instance:
Ours is the era of inadequate AI alignment theory. Any other facts about this era are relatively unimportant, but sometimes I tweet about them anyway.
Similarly, a political science professor or historian might construct a narrative about trends in Western democracies, or something similar. And the narrative that "Everyone is going to die, the way things stand" (from aging, if nothing else) is as simple as it is underappreciated by the general public. If we took it remotely seriously, we would use our resources differently.
Finally, another use of the narratives in the OP is to provide a contrast to ubiquitous but wrong narratives, e.g. the doomed middle-class narrative that endless one-upmanship towards neighbors and colleagues will somehow make one happy.
A few other narratives:
If reactor-grade plutonium could be used to make nuclear weapons, there would be enough material in the world to make a million nukes, and it is dispersed among many actors.
Only Arctic methane eruption matters, as it could trigger runaway global warming.
Only peak oil matters, and in the next 10 years we will see shortages of oil and other raw materials.
Only coronavirus mutations matter, as they could become more deadly.
Only reports about UFOs matter, as they imply that our world model is significantly wrong.
Here's mine: a large portion of the things that matter most in human life, including in particular most of the ways of life we originally evolved for, are swiftly becoming rare luxuries throughout the West, primarily at the hands of liberalism (which has otherwise produced many positives). Examples:
The reason I see the loss of these things as a terrible part of the "central plot" is that they are for the most part ignored, yet deeply important aspects of what it means to be human, which we are in danger of permanently losing even if ALL those other problems are solved. If people forget where we came from, and wholesale let go of the past and traditional values in favor of "progress" for its own sake, I think it will be a net loss regardless of how happy the abhuman things we become will be. And the evidence is in my favor that these problems are making people miserable: just look at conservatives, who are still trying to hold on to these aspects of being human while seeing them threatened from every direction.
Third, separate disagreement: This list states that "vastly more is at stake in [existential risks] than in anything else going on". This seems to reflect a model in which "everything else going on" — including power struggles whose overt stakes are much much lower — does not substantially or predictably causally impact outcomes of existential risk questions. I think I disagree with that model, though my confidence in this is far, far less than for the other two disagreements I've posted.
Almost all of these could have been said 50 years ago with no change, or only a minor one (e.g., swap Trump for Nixon), and with pretty much the same emphasis. Even those that could not (e.g., the pandemic) could easily be replaced with other things similar in nature and absolute outcome (famine in China, massive limitation of mobility and other freedoms in the Eastern Bloc, etc.).
Even 100 years ago you could make similar cases for most things (except A.I., which is a newer concept; though there could have been analogous issues back then, carrying the same hopes, that I am not aware of).
Yet, here we are, better off than before. Was this the expected outcome?
I think it would have been way less popular to say "Western civilization is declining on the scale of half a century"; they were clearly much better off than in 1920. I think they could have told stories about moral decline, or viewed the West as not rising to the challenge of the Cold War, and so on, but they would have been comparing themselves to the last 20-30 years instead of the last 60-100 years.
Separate point: I also strongly disagree with the idea that "there's a strong chance we live in a simulation". Any such simulation must be either:
Unlike my separate point about the great filter, I can claim no special expertise on this; though both my parents have PhDs in physics, I couldn't even write the Dirac equation without looking it up (though, given a week to work through things, I could probably do a passable job reconstructing Shor's algorithm with nothing more than access to Wikipedia articles on non-quantum FFT). Still, I'm decently confident about this point, too.
As someone who mostly expects to be in a simulation, this is the clearest and most plausible anti-simulation-hypothesis argument I've seen, thanks.
How does it hold up against the point that the universe looks large enough to support a large number of even fully-quantum single-world simulations (with a low-resolution approximation of the rest of reality), even if it costs many orders of magnitude more resources to run them?
Perhaps would-be simulators would tend not to value the extra information from full-quantum simulations enough to build many or even any of them? My guess is that many purposes for simulations would want to explore a bunch of the possibility tree, but depending on how costly very large quantum computers are to mature civilizations maybe they'd just get by with a bunch of low-branching factor simulations instead?
I think both your question and self-response are pertinent. I have nothing to add to either, save a personal intuition that large-scale fully-quantum simulators are probably highly impractical. (I have no particular opinion about partially-quantum simulators — even possibly using quantum subcomponents larger than today's computers — but they wouldn't change the substance of my not-in-a-sim argument.)
hm, that intuition seems plausible.
The other point that comes to mind is that if you have a classical simulation running on a quantum world, maybe that counts as branching for the purposes of where we expect to find ourselves? I'm still somewhat confused about whether exact duplicates 'count', but if they do then maybe the branching factor of the underlying reality carries over to sims running further down the stack?
It seems to me that exact duplicate timelines don't "count", but duplicates that split and/or rejoin do. YMMV.
I don't think the branching factor of the simulation matters, since the weight of each individual branch decreases as the number of branches increases. The Born measure is conserved by branching.
This is certainly a cogent counterargument. Either side of this debate relies on a theory of "measure of consciousness" that is, as far as I can tell, not obviously self-contradictory. We won't work out the details here.
In other words: this is a point on which I think we can respectfully agree to disagree.
Fair, although I do think your theory might be ultimately self-contradictory ;)
Instead of arguing that here, I'll link an identical argument I had somewhere else and let you judge whether I was persuasive.
I don't think the point you were arguing against is the same as the one I'm making here, though I understand why you think so.
My understanding of your model is that, simplifying relativistic issues so that "simultaneous" has a single unambiguous meaning, total measure across quantum branches of a simultaneous time slice is preserved; and your argument is that, otherwise, we'd have to assign equal measure to each unique moment of consciousness, which would lead to ridiculous "Boltzmann brain" scenarios. I'd agree that your argument convincingly shows that different simultaneous branches have different weight according to the rules of QM, but that does not at all imply that total weight across branches is constant across time.
The argument I made there was that we should consider observer-moments to be 'real' according to their Hilbert measure, since that is what we use to predict our own sense-experiences. This does imply that observer-weight is preserved over time, since unitary evolution preserves the measure. (As you say, this also proves it is conserved by splitting into branches, since you can consider that to be projecting onto different subspaces.)
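Making that explicit in symbols (standard quantum mechanics, nothing beyond the claim above): time evolution by a unitary operator $U$ preserves the Hilbert measure,

$$\|U\psi\|^2 = \langle \psi | U^\dagger U | \psi \rangle = \langle \psi | \psi \rangle = \|\psi\|^2,$$

and splitting into branches via orthogonal projectors $P_i$ (with $\sum_i P_i = I$) conserves total measure across the branches:

$$\|\psi\|^2 = \sum_i \|P_i \psi\|^2.$$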
Even without unitarity, you shouldn't expect the total amount of observer-weight to increase exponentially in time, since that would cause it to diverge, giving undefined predictions.
Our sense-experiences are "unitary" (in some sense which I hope we can agree on without defining rigorously), so of course we use unitary measure to predict them. Branching worlds are not unitary in that sense, so carrying over unitarity from the former to the latter seems an entirely arbitrary assumption.
A finite number (say, the number of particles in the known universe) raised to a finite number (say, the number of Planck-time intervals before dark energy tears the universe apart) gives a finite number. No need for divergence. (I think both of those are severe overestimates of the actual possible branching, but they are reasonable as handwavy demonstrations that finite upper bounds exist.)
Ah, by 'unitary' I mean a unitary operator, that is, an operator which preserves the Hilbert measure. It's an axiom of quantum mechanics that time evolution is represented by a unitary operator.
Fair point about the probable finitude of time (but wouldn't it be better if our theory could handle the possibility of infinite time as well?)
Bold claims about objective reality, unbacked by evidence, seemingly not very useful or interesting (though this is subjective), and backed by appeals to tribal values (declaring AI alignment a core issue, assuming blue-tribe neoliberalism as the status quo, etc.).
This seems to go against the kind of thing I would have assumed could make it onto Less Wrong (COVID content aside), yet here it is, heavily upvoted.
Is my take here overly critical ?
I think it is. The post is not intended to be a list of things the author believes, but rather a collection of high-level narratives that various people use when thinking about the impact of their decisions on the world. As such, you wouldn't really expect extensive evidence supporting them, since the post isn't trying to claim that they are correct.
Interesting that about half of these "narratives" or "worldviews" are suffixed with "-ism": Malthusianism, Marxism, Georgism, effective altruism, transhumanism. But most of the (newer and less popular) rationalist narratives haven't yet been so named. This would be one heuristic for finding other worldviews.
More generally, if you want people to know and contrast a lot of these worldviews, it'd be useful to name them all in 1-2 words each.
This list seems to largely exclude positive narratives. What about the Steven Pinker / optimist narrative that the world is basically getting better all the time?
Perhaps to see the true high level narrative, we should focus on science, technology and prosperity, only considering politics in so far as it changes the direction of their long term trends.
Seems like a number of the items fall under a common theme or area. I wonder whether focusing on them separately is best, rather than seeing them as different contexts or representations of a common underlying source.
Basically, all the bits about failing governments, societies, cultures, and institutions seem to be a rejection of the old "Private Vices, Public Virtues" idea and Smith's Invisible Hand metaphor. So perhaps the question is what has changed to make those kinds of superior outcomes, arising from individual actions that never aimed at them, no longer as reliable.
Is there a common gear that is now broken or are these all really independent issues?
There is a strong chance that we live in a simulation
Is there a version of simulation theory that is falsifiable?
Uncontrolled population growth in Africa, India, the Middle East, and other developing countries
Here’s a list of alternative high level narratives about what is importantly going on in the world—the central plot, as it were—for the purpose of thinking about what role in a plot to take:
It’s a draft. What should I add? (If, in life, you’ve chosen among ways to improve the world, is there a simple story within which your choices make particular sense?)