To stay with the lingo (also, is "arguendo" your new catchphrase?): There are worlds in which slower economic growth is good news, and worlds in which it's not. As to which of these contributes more probability mass, that's hard to say, because the relevant measure would be technological growth, for which economic growth is only a proxy.
However, I find it hard to weigh scenarios such as "because of stagnant and insufficient growth, more resources are devoted to exploiting the remaining inefficiencies using more advanced tech" versus "the worldwide economic upswing caused a flurry of research activities".
R&D, especially foundational work, is such a small part of worldwide GDP that any old effect can dominate it. For example, a "cold war"-ish scenario between China and the US would slow economic growth -- but strongly speed up research in high-tech dual-use technologies.
While we often think "Google" when we think tech research, we should mostly think DoD in terms of resources spent -- state actors traditionally dwarf even multinational corporations in research investments, and whether their investments are spurned or spurred by a slowdown in growth (depending on the unspecified cause of said slowdown) is anyone's guess.
R&D, especially foundational work, is such a small part of worldwide GDP that any old effect can dominate it.
(Note: I agree with this point.)
Related questions:
1) Do Earths with dumber politicians have a better chance at FAI?
2) Do Earths with anti-intellectual culture have a better chance at FAI?
3) Do Earths with less missionary rationalism have a better chance at FAI?
4) How much time should we spend pondering questions like (1)-(3)?
5) How much time should we spend pondering questions like (4)?
I don't have an answer for the question, but I note that the hypothetical raises the possibility of an anthropic explanation for twenty-first century recessions. So if you believe that the Fed is run by idiots who should have ____, consider the possibility that in branches where the Fed did in fact ____, the world now consists of computronium.
I find this especially compelling in light of Japan's two "lost decades" combined with all the robotics research for which Japan is famous. Obviously the anthropic hypothesis requires the most stagnation in nations which are good at robots and AI.
I don't have an answer for the question
I hope we can all agree that in discussions on LW this should by no means be regarded as a bad thing.
Can we put a lid on this conflation of subjective probability with objective quantum branching please? A deterministic fair coin does not split the world, and neither would a deterministic economic cycle. Or are we taking seriously the possibility that the course of the economy is largely driven by quantum randomness?
EDIT: actually I just noticed that small quantum fluctuations from long ago can result in large differences between branches today. At that point I'm confused about what the anthropics implies we should see, so please excuse my overconfidence above.
Or are we taking seriously the possibility that the course of the economy is largely driven by quantum randomness?
Isn't everything?
It unfortunately also explains
If Charles Babbage had built his Analytical Engine, then that would seem to me to have gotten programming started much earlier, such that FAI work would in turn start much sooner, and so we'd have no hardware overhang to worry about. Imagine if this conversation were taking place with 1970s technology.
So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem.
I'm curious that you seem to think the former problem is harder or less likely to be solved than the latter. I've been thinking the opposite, and one reason is that the latter problem seems more philosophical and the former more technical, and humanity seems to have a lot of technical talent that we can eventually recruit to do FAI research, but much less untapped philosophical talent.
Also as another side note, I don't think we should be focusing purely on the "we come up with a value-stable architecture and then the FAI will make a billion self-modifications within the same general architecture" scenario. Another possibility might be that we don't solve the stable self-improvement problem at all, but instead solve the value transfer problem in a general enough way that the FAI we build immediately creates an entirely new architecture for the next generation FAI and transfers its values to its creation usin...
This position seems unlikely to me at face value. It relies on a very long list of claims, and given the apparently massive improbability of the conjunction, there is no way this consideration is going to be the biggest impact of economic progress:
I don't see how you can defend giving any of those points more than 1/2 probability, and I would give the conjunction less than 1% probability. Moreover, even in this scenario, the negative effect from economic progress is quite small. (Perhaps a 1% increase in sustained economic productivity makes the ...
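As a rough sketch of the arithmetic behind that estimate (the claim count below is an assumption for illustration; the original list is not reproduced here):

```python
# Illustrative only: the claim count is an assumption, not taken from the
# commenter's actual list of claims.
n_claims = 7          # suppose the scenario decomposes into 7 independent claims
p_each = 0.5          # the generous per-claim upper bound suggested above
p_conjunction = p_each ** n_claims
print(f"P(conjunction) <= {p_conjunction:.4f}")  # 0.0078, i.e. under 1%
```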
General remark: At some point I need to write a post about how I'm worried that there's an "unpacking fallacy" or "conjunction fallacy fallacy" practiced by people who have heard about the conjunction fallacy but don't realize how easy it is to take any event, including events which have already happened, and make it look very improbable by turning one pathway to it into a large series of conjunctions. E.g. I could produce a long list of things which allegedly have to happen for a moon landing to occur, some of which turned out to not be necessary but would look plausible if added to the list ante facto, with no disjunctive paths to the same destination, and thereby make it look impossible. Generally this manifests when somebody writes a list of alleged conjunctive necessities, and I look over the list and some of the items seem unnecessary (my model doesn't go through them at all), obvious disjunctive paths have been omitted, the person has assigned sub-50% probability to things that I see as mainline 90% probabilities, and conditional probabilities when you assume the theory was right about 1-N would be significantly higher for N+1. Most of all, if you ima...
past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.
How sure are you that this isn't hindsight bias, that if various involved historical figures had been smarter they would have understood the situation and not done things that look unbelievably stupid looking back?
Do you have particular historical events in mind?
Some Facebook discussion here including Carl's opinion:
Something to take into account:
The speed of economic growth affects how long individual countries take to pass through the demographic transition from high birth and death rates to low birth and death rates, and thus affects the total world population.
A high-population world, full of low-education people desperately struggling to survive (i.e. low on Maslow's hierarchy), might be more likely to support making bad decisions about AI development for short-term nationalistic reasons.
Great Stagnation being good news
Per Thiel, the computer industry is the exception to the Great Stagnation, so I'm not sure how much it really helps. You can claim that building flying cars would take resources away from UFAI progress, though intelligence research (i.e. machine learning) is so intertwined with every industry that this is a weak argument.
A key step in your argument is the importance of the parallel/serial distinction. However, we already have some reasonably effective institutions for making naturally serial work parallelizable (e.g. peer review), and more are arising. This has allowed new areas of mathematics to be explored pretty quickly. These provide a valve which should mean that extra work on FAI is only slightly less effective than you'd initially think.
You could still think that this was the dominant point if economic growth would increase the speed of both AI and AI safety work to ...
What kinds of changes to the economy would disproportionately help FAI over UFAI? I gather that your first-order answer is "slowing down", but how much slower? (In the limit, both kinds of research grind to a halt, perhaps only to resume where they left off when the economy picks up.) Are there particular ways in which the economy could slow down (or even speed up) that would especially help FAI over UFAI?
I would also expect socialist economic policies to increase chances of successful FAI, for two reasons. First, it would decrease incentives to produce technological advancements that could lead to UFAI. Second, it would make it easier to devote resources to activities that do not result in a short-term personal profit, such as FAI research.
It's easy to see why rationalists shouldn't help develop technologies that speed AI. (On paper, even an innovation that speeds FAI twice as much as it speeds AI itself would probably be a bad idea if it weren't completely indispensable to FAI. On the other hand, the FAI field is so small right now that even a small absolute increase in money, influence, or intellectual power for FAI should have a much larger impact on our future than a relatively large absolute increase or decrease in the rate of progress of the rest of AI research. So we should be more in...
Motivated reasoning warning: I notice that I want it to be the case that economic growth improves the FAI win rate, or at least doesn't reduce it. I am not convinced of either side, but here are my thoughts.
Moore's Law, as originally formulated, was that the unit dollar cost per processor element halves in each interval. I am more convinced that this is serially limited than I am that FAI research is serially limited. In particular, semiconductor research is saturated with money, and FAI research isn't; this makes it much more likely to have used up any gai...
This question is quite loaded, so maybe it's good to figure out which part of the economic or technological growth is potentially too fast. For example, would the rate of Moore's law matching the rate of economic growth, say, 4-5% annual, instead of exceeding it by an order of magnitude, make a difference?
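To put rough numbers on that "order of magnitude" comparison (a sketch only; the ~41%/year figure is an assumed stand-in for a Moore's-law doubling roughly every two years, not a figure from the comment above):

```python
import math

def years_to_grow(factor, annual_rate):
    """Years needed for a quantity compounding at annual_rate to grow by factor."""
    return math.log(factor) / math.log(1.0 + annual_rate)

TARGET = 1000        # a thousand-fold improvement in compute per dollar
MOORE_RATE = 0.41    # assumed: doubling roughly every two years
GDP_RATE = 0.045     # 4-5% annual, matching economic growth

print(f"At ~41%/yr: {years_to_grow(TARGET, MOORE_RATE):.0f} years")  # ~20 years
print(f"At 4.5%/yr: {years_to_grow(TARGET, GDP_RATE):.0f} years")    # ~157 years
```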
This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.
Consider nuclear weapons, for instance: Was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid, which could have destroyed us for millions of years.
Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without...
One countervailing thought: I want AGI to be developed in a high-trust, low-scarcity social-psychological context, because that seems like it matters a lot for safety.
Slow growth enough and society as a whole becomes a lot more bitter and cutthroat?
Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.
Forgive me if this is a stupid question, but wouldn't UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?
The Great Stagnation has come with increasing wealth and income disparity.
This is to say: A smaller and smaller number of people are increasingly free to spend an increasing fraction of humanity's productive capacity on the projects they choose. Meanwhile, a vastly larger number of people are increasingly restricted to spend more of their personal productive capacity on projects they would not choose (i.e. increasing labor hours), and in exchange receive less and less control of humanity's productive capacity (i.e. diminishing real wages) to spend on projects that they do choose.
How does this affect the situation with respect to FAI?
I think we're past the point where it matters. If we'd had a few lost decades in the mid-twentieth century then maybe (and, just to be cognitively polite here, this is just my intuition talking) the intelligence explosion could have been delayed significantly. We are just a decade off from home computers with >100 teraflops, not to mention the distressing trend of neuromorphic hardware (here's Ben Chandler of the SyNAPSE project talking about his work on Hacker News). With all this inertia, it would take an extremely large downturn to slow us now. Engineering a...
Any ideas to make FAI parallelize better? Or to reduce the resources available for UFAI without reducing economic growth?
Could you be confusing the direction of causality here? I suspect that technological growth tends to lead to economic growth rather than the reverse.
I'm not convinced that slowing economic growth would result in FAI developing faster than UFAI, and I think your main point of leverage for getting an advantage lies elsewhere (explained below). The key is obviously the proportion between the two, not just slowing down the one or speeding up the other, so I suggest a brainstorm to consider all of the possible ways that slow economic growth could also slow FAI. For one thought: do non-profit organizations do disproportionately poorly during recessions?
The major point of leverage, I think, is people, not the econ...
I think this depends on how much you think you have the ability to cash in on any given opportunity. E.g., you gaining a ton of money is probably going to help the cause of FAI more than whatever amount of economic growth is generated helps bring about AI. So basically either put your money where your theories are or don't publicly theorize?
This might come down to eugenics. Imagine that in 15 years, with the help of genetic engineering, lots of extremely high IQ people are born, and their superior intelligence means that in another 15 or so years (absent a singularity) they will totally dominate AGI software development. The faster the economic growth rate the more likely that AGI will be developed before these super-geniuses come of age.
For FAI to beat UAI, sufficient work on FAI needs to be done before sufficient work on AI is done.
If slowing the world economy doesn't change the proportion of work done on things, then a slower world economy doesn't increase the chance of FAI over UAI; it merely delays the time at which one or the other happens. Without specifying how the world's production is turned down, wouldn't we need to assume that EY's productivity is turned down along with the rest of the world's?
If we assume all of humanity except EY slows down, AND that EY is turning the FAI knob harder than the other knobs relative to the rest of humanity, then we increase the chance of FAI preceding UAI.
I'm not sure that humane values would survive in a world that rewards cooperation weakly. Azathoth grinds slow, but grinds fine.
To oversimplify, there seem to be two main factors that increase cooperation, two basic foundations for law: religion and economic growth. Of these, religion seems to be far more prone to volatility. It only takes some marginally more intelligent people pointing out the absurdity of the entire doctrine, and along with the religion, all the other societal values collapse.
Economic growth seems to be a far more promising foundatio...
If we were perpetually stuck at Roman Empire levels of technology, we'd never have to worry about UFAI at all. That doesn't make it a good thing.
If we all got superuniversal-sized computers with halting oracles, we'd die within hours. I'm not sure the implausible extremes are a good way to argue here.
If you are pessimistic about global catastrophic risk from future technology and you are most concerned with people alive today rather than future folk, slower growth is better unless the effects of growth are so good that they outweigh time discounting.
But growth in the poorest countries is good because it contributes negligibly to research and national economies are relatively self-contained, and more growth there means more human lives lived before a possible end.
Also, while more focused efforts are obviously better in general than trying to affect growth, there is (at least) one situation where you might face an all-or-nothing decision: voting. I'm afraid the ~my solution here~ candidate will not be available.
As long as we are in a world where billions are still living in absolute poverty, low economic growth is politically radicalizing and destabilizing. This can prune world branches quite well on its own, no AI needed. Remember, the armory of apocalypse is already unlocked. It is not important which project succeeds first if the world gets radiatively sterilized, poisoned, etc. before either one succeeds. So, no. Not helpful.
Before you can answer this question, I think you have to look at a more fundamental question, which is simply: why are so few people interested in supporting FAI research or concerned about the possibility of UFAI?
It seems like there are a lot of factors involved here. In times of economic stress, short-term survival tends to dominate over long-term thinking. For people who are doing long-term thinking, there are a number of other problems that many of them are more focused on, such as resource depletion, global warming, etc.; even if you don't think ...
To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.
It sounds like status quo bias. If growth were currently 2% higher, should the person then seize on growth-slowing opportunities?
On...
For economic growth, don't focus on the total number. Ask about the distribution. Increasing world wealth by 20% would have minimal impact on making people's lives better if that increase is concentrated among the top 2%. It would have huge impact if it's concentrated in the bottom 50%.
So if you have a particular intervention in mind, ask yourself, "Is this just going to make the rich richer, or is it going to make the poor richer?" An intervention that eliminates malaria, or provides communication services in refugee camps, or otherwise assists the most disadvantaged, can be of great value without triggering your fears.
If a good outcome requires that influential people cooperate and have longer time-preferences, then slower economic growth than expected might increase the likelihood of a bad outcome.
It's true that periods of increasing economic growth haven't always led to great technology decision-making (Cold War), but I'd expect an economic slowdown, especially in a democratic country, to make people more willing to take technological risks (to restore economic growth), and less likely to cooperate with or listen to or fund cautious dissenters (like people who say we should be worried about AI).
Will 1% and 4% RGDP growth worlds have the same levels of future shock? A world in which production doubles every 18 years and a world in which production doubles every 70 years seem like they will need very different abilities to deal with change.
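(A quick check of those doubling times, assuming steady compound growth:)

$$t_{\text{double}} = \frac{\ln 2}{\ln(1+g)}, \qquad \frac{\ln 2}{\ln 1.04} \approx 17.7\ \text{years}, \qquad \frac{\ln 2}{\ln 1.01} \approx 69.7\ \text{years}.$$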
I suspect that more future shock would lead to more interest in stable self-improvement, especially on the institutional level. But it's not clear what causes some institutions to do the important but not urgent work of future-proofing, and others not to; it may be the case that in the more sedate 1% growth world, more effort will be spent on future-proofing, which is good news for FAI relative to UFAI.
Eliezer, this post reeks of an ego trip.
"I wish I had more time, not less, in which to work on FAI"... Okay, world, lets slow right down for a while. And you, good and viruous people with good for technological or economic advancement: just keep quiet until it is safe.
I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the "Small is Beautiful" / "Sustainable Growth" crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.
And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress: Economic growth = good.
But suppose my main-line projection is correct and the "probability of an OK outcome" / "astronomical benefit" scenario essentially comes down to a race between Friendly AI and unFriendly AI. So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem. Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done. I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.
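One way to make the parallel-versus-serial distinction concrete is an Amdahl's-law-style bound (added here purely as an illustration; the serial fractions below are assumptions, not figures from the post): if a fraction s of a research program is inherently serial, then n times the resources yield at most a 1/(s + (1 - s)/n) speedup, which saturates at 1/s no matter how large n gets.

```python
def bounded_speedup(serial_fraction, resource_multiplier):
    """Amdahl's-law-style bound: speedup from n-fold resources when a fraction
    s of the work can only be done serially."""
    s, n = serial_fraction, resource_multiplier
    return 1.0 / (s + (1.0 - s) / n)

# Purely illustrative serial fractions for the two kinds of research.
for label, s in [("insight-heavy, serial research (FAI-like)", 0.5),
                 ("cobbled-together, parallel research (UFAI-like)", 0.05)]:
    print(f"{label}: 10x resources -> {bounded_speedup(s, 10):.1f}x faster")
# ~1.8x vs ~6.9x: the same resource boost helps the parallelizable program far more.
```

On this toy model, the extra resources that faster growth provides buy the parallelizable program most of the benefit, while the serially limited program mostly still has to wait.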
Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing. I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.
I have various cute ideas for things which could improve a country's economic growth. The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people. I was thinking about collecting them into a post called "The Nice Things We Can't Have" based on my prediction that various forces will block, e.g., the all-robotic all-electric car grid which could be relatively trivial to build using present-day technology - that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore. However I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny. And it's not completely impossible that we'll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it'll be because China or Dubai or New Zealand tried it first). Other EAs (effective altruists) care much more strongly about economic growth directly and are trying to increase it directly. (An extremely understandable position which would typically be taken by good and virtuous people).
Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide "But what if MIRI accomplishes the opposite of its purpose due to blah") but in this case I feel impelled to ask because my mainline visualization has the Great Stagnation being good news. I certainly wish that economic growth would align with FAI because then my virtues would align and my optimal policies have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality.
To head off some obvious types of bad reasoning in advance: Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally which is mostly pushing UFAI, and the problem arguendo is that UFAI parallelizes more easily.
Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier. But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI. Similarly for the more mundane idea that increased economic growth will produce more geniuses, some of whom can work on FAI: there'd also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial depth of research. If you concentrate on some single good effect on blah and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on the balance.
So I pose the question: "Is slower economic growth good news?" or "Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI"? So far as I can tell, my current mainline guesses imply, "Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research".
This seems like a good parameter to have a grasp on for any number of reasons, and I can't recall it previously being debated in the x-risk / EA community.
EDIT: To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.
EDIT 2: Carl Shulman's opinion can be found on the Facebook discussion here.