I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the "Small is Beautiful" / "Sustainable Growth" crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.
And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress: Economic growth = good.
But suppose my main-line projection is correct and the "probability of an OK outcome" / "astronomical benefit" scenario essentially comes down to a race between Friendly AI and unFriendly AI. So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem. Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done. I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.
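The parallelization claim can be made concrete with an Amdahl's-law-style toy model. This is only a sketch: the serial fractions, work totals, and worker counts below are arbitrary illustrations, not estimates of anything.

```python
def completion_time(total_work, serial_fraction, workers):
    """Amdahl's-law-style toy model: a serial_fraction of the work
    cannot be parallelized; the rest divides evenly among workers.
    All numbers here are illustrative, not estimates."""
    return total_work * (serial_fraction + (1 - serial_fraction) / workers)

# Assume FAI is insight-bound (high serial fraction) and UFAI is
# engineering-bound (low serial fraction). Effect of doubling the workforce:
fai_speedup = completion_time(100, 0.8, 10) / completion_time(100, 0.8, 20)
ufai_speedup = completion_time(100, 0.1, 10) / completion_time(100, 0.1, 20)
print(round(fai_speedup, 2), round(ufai_speedup, 2))  # 1.01 1.31
```

Under these made-up assumptions, doubling the workforce barely helps the serial-depth-limited project but substantially speeds up the parallelizable one, which is the shape of the argument above.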
Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.
I have various cute ideas for things which could improve a country's economic growth. The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people. I was thinking about collecting them into a post called "The Nice Things We Can't Have" based on my prediction that various forces will block, e.g., the all-robotic all-electric car grid which could be relatively trivial to build using present-day technology - that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore. However, I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny. And it's not completely impossible that we'll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it'll be because China or Dubai or New Zealand tried it first). Other EAs (effective altruists) care much more strongly about economic growth directly and are trying to increase it directly. (An extremely understandable position which would typically be taken by good and virtuous people.)
Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide "But what if MIRI accomplishes the opposite of its purpose due to blah") but in this case I feel impelled to ask, because my mainline visualization has the Great Stagnation being good news. I certainly wish that economic growth would align with FAI, because then my virtues would align and my optimal policies would have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality.
To head off some obvious types of bad reasoning in advance: Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally which is mostly pushing UFAI, and the problem arguendo is that UFAI parallelizes more easily.
Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier. But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI. The same goes for the more mundane idea that increased economic growth will produce more geniuses, some of whom can work on FAI: there'd also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial depth of research. If you concentrate on some single good effect on blah and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on the balance.
So I pose the question: "Is slower economic growth good news?" or "Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI?" So far as I can tell, my current mainline guesses imply: "Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research."
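For intuition about how much calendar time the growth rate buys, here is a minimal compounding sketch. The 2.2x feasibility threshold is an arbitrary stand-in for "the level of resources at which UFAI becomes feasible"; nothing in the post pins down that number.

```python
import math

def years_to_threshold(growth_rate, threshold_multiple):
    """Years for annual output growing at growth_rate to reach
    threshold_multiple times today's level (purely illustrative)."""
    return math.log(threshold_multiple) / math.log(1 + growth_rate)

# Arbitrary assumption: UFAI becomes feasible at 2.2x today's output.
print(round(years_to_threshold(0.04, 2.2)))  # 20 years at 4% RGDP growth
print(round(years_to_threshold(0.01, 2.2)))  # 79 years at 1% RGDP growth
```

On this toy picture, the 1%-growth branch has roughly four times as long on the clock, which is the sense in which slower growth "lengthens the deadline."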
This seems like a good parameter to have a grasp on for any number of reasons, and I can't recall it previously being debated in the x-risk / EA community.
EDIT: To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.
EDIT 2: Carl Shulman's opinion can be found on the Facebook discussion here.
Regarding the "unpacking fallacy": I don't think you've pointed to a fallacy here. You have pointed to a particular causal pathway which seems to be quite specific, and I've claimed that this particular causal pathway has a tiny expected effect by virtue of its unlikeliness. The negation of this sequence of events simply can't be unpacked as a conjunction in any natural way, it really is fundamentally a disjunction. You might point out that the competing arguments are weak, but they can be much stronger in the cases where they aren't predicated on detailed stories about the future.
As you say, even events that actually happened can be made to look quite unlikely. But those events were, for the most part, unlikely ex ante. This is like saying "This argument can suggest that any lottery number probably wouldn't win the lottery, even the lottery numbers that actually won!"
If you had a track record of successful predictions, or if anyone who embraced this view had a track record of successful predictions, maybe you could say "all of these successful predictions could be unpacked, so you shouldn't be so skeptical of unpackable arguments." But I don't know of anyone with a reasonably good predictive record who takes this view, and most smart people seem to find it ridiculous.
I don't understand your argument here. Yes, future civilization builds AI. It doesn't follow that the value of the future is first determined by what type of AI they build (they also build nanotech, but the value of the future isn't determined by the type of nanotech they build, and you haven't offered a substantial argument that discriminates between the cases). There could be any number of important events beforehand or afterwards; there could be any number of other important characteristics surrounding how they build AI which influence whether the outcome is positive or negative.
Do you think the main effects of economic progress in 1600 were on the degree of parallelization in AI work? 1800? The magnitude of the direct effects of economic progress on AI work depends on how close the economic progress is to the AI work; as the time involved gets larger, indirect effects come to dominate.
You have a specific view, that there is a set of problems which need to be solved in order to make AI friendly, and that these problems have some kind of principled relationship to the problems that seem important to you now. This is as opposed to e.g. "there are two random approaches to AI, one of which leads to good outcomes and one of which leads to bad outcomes," or "there are many approaches to AI, and you have to think about it in advance to figure out which lead to good outcomes" or "there is a specific problem that you can't have solved by the time you get to AI if you want to have a positive outcome" or an incredible variety of alternative models. The "parallelization is bad" argument doesn't apply to most of these models, and in some you have "parallelization is good."
Even granting that your picture of AI vs. FAI is correct, and there are these particular theoretical problems that need to be solved, it is completely unclear that more people working in the field makes things worse. I don't know why you think this follows from 3 or can be sensibly lumped with 3, and you don't provide an argument. Suppose I said "The most important thing about dam safety is whether you have a good theoretical understanding of the dam before building it" and you said "Yes, and if you increase the number of people working on the dam you are less likely to understand it by the time it gets built, because someone will stumble across an ad hoc way to build a dam." This seems ridiculous both a priori and based on the empirical evidence. There are many possible models for the way that important problems in AI get solved, and you seem to be assuming a particular one.
Suppose that I airdrop in a million knowledge workers this year and they leave next year, corresponding to an exogenous boost in productivity this year. You are claiming that this obviously increases the degree of parallelization of relevant AI work. This isn't obvious, unless a big part of the relevant work is being done today (which seems unlikely, at a casual glance).
I agree that I've only argued that your argument has a tiny impact; it could still dominate if there was literally nothing else going on. But even granting 1-5 there seem to be other big effects from economic growth.
The case in favor of growth seems to be pretty straightforward; I linked to a blog post in the last comment. Let me try to make the point more clearly:
Increasing economic activity speeds up a lot of things. Speeding up everything is neutral, so the important point is the difference between what it speeds up and what it doesn't speed up. Most things people are actually trying to do get sped up, while a bunch of random things (aging and disease, natural disasters, mood changes) don't get sped up. Lots of other things get sped up but significantly less than 1-for-1, because they have some inputs that get sped up and some that don't (accidents of all kinds, conflicts of all kinds, resource depletion). Given that things people are trying to do get sped up, and the things that happen which they aren't trying to do get sped up less, we should expect the effect to be positive, as long as people are trying to do good things.
What's a specific relevant example of something people are trying to speed up / not speed up besides AGI (= UFAI) and FAI? You pick out aging, disease, and natural disasters as not-sped-up but these seem very loosely coupled to astronomical benefits.