I was raised as a good and proper child of the Enlightenment who grew up reading The Incredible Bread Machine and A Step Farther Out, taking for granted that economic growth was a huge in-practice component of human utility (plausibly the majority component if you asked yourself what was the major difference between the 21st century and the Middle Ages) and that the "Small is Beautiful" / "Sustainable Growth" crowds were living in impossible dreamworlds that rejected quantitative thinking in favor of protesting against nuclear power plants.

And so far as I know, such a view would still be an excellent first-order approximation if we were going to carry on into the future by steady technological progress:  Economic growth = good.

But suppose my main-line projection is correct and the "probability of an OK outcome" / "astronomical benefit" scenario essentially comes down to a race between Friendly AI and unFriendly AI.  So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem.  Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces.  This means that UFAI parallelizes better than FAI.  UFAI also probably benefits from brute-force computing power more than FAI.  Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.  I have sometimes thought half-jokingly and half-anthropically that I ought to try to find investment scenarios based on a continued Great Stagnation and an indefinite Great Recession where the whole developed world slowly goes the way of Spain, because these scenarios would account for a majority of surviving Everett branches.

Roughly, it seems to me like higher economic growth speeds up time and this is not a good thing.  I wish I had more time, not less, in which to work on FAI; I would prefer worlds in which this research can proceed at a relatively less frenzied pace and still succeed, worlds in which the default timelines to UFAI terminate in 2055 instead of 2035.

I have various cute ideas for things which could improve a country's economic growth.  The chance of these things eventuating seems small, the chance that they eventuate because I write about them seems tiny, and they would be good mainly for entertainment, links from econblogs, and possibly marginally impressing some people.  I was thinking about collecting them into a post called "The Nice Things We Can't Have" based on my prediction that various forces will block, e.g., the all-robotic all-electric car grid which could be relatively trivial to build using present-day technology - that we are too far into the Great Stagnation and the bureaucratic maturity of developed countries to get nice things anymore.  However I have a certain inhibition against trying things that would make everyone worse off if they actually succeeded, even if the probability of success is tiny.  And it's not completely impossible that we'll see some actual experiments with small nation-states in the next few decades, that some of the people doing those experiments will have read Less Wrong, or that successful experiments will spread (if the US ever legalizes robotic cars or tries a city with an all-robotic fleet, it'll be because China or Dubai or New Zealand tried it first).  Other EAs (effective altruists) care much more strongly about economic growth directly and are trying to increase it directly.  (An extremely understandable position which would typically be taken by good and virtuous people).

Throwing out remote, contrived scenarios where something accomplishes the opposite of its intended effect is cheap and meaningless (vide "But what if MIRI accomplishes the opposite of its purpose due to blah"), but in this case I feel impelled to ask because my mainline visualization has the Great Stagnation being good news.  I certainly wish that economic growth would align with FAI, because then my virtues would align and my optimal policies would have fewer downsides, but I am also aware that wishing does not make something more likely (or less likely) in reality.

To head off some obvious types of bad reasoning in advance:  Yes, higher economic growth frees up resources for effective altruism and thereby increases resources going to FAI, but it also increases resources going to the AI field generally which is mostly pushing UFAI, and the problem arguendo is that UFAI parallelizes more easily.

Similarly, a planet with generally higher economic growth might develop intelligence amplification (IA) technology earlier.  But this general advancement of science will also accelerate UFAI, so you might just be decreasing the amount of FAI research that gets done before IA and decreasing the amount of time available after IA before UFAI.  The same applies to the more mundane idea that increased economic growth will produce more geniuses, some of whom can work on FAI: there'd also be more geniuses working on UFAI, and UFAI probably parallelizes better and requires less serial depth of research.  If you concentrate on some single good effect on blah and neglect the corresponding speeding-up of UFAI timelines, you will obviously be able to generate spurious arguments for economic growth having a positive effect on the balance.

So I pose the question:  "Is slower economic growth good news?" or "Do you think Everett branches with 4% or 1% RGDP growth have a better chance of getting FAI before UFAI?"  So far as I can tell, my current mainline guesses imply, "Everett branches with slower economic growth contain more serial depth of cognitive causality and have more effective time left on the clock before they end due to UFAI, which favors FAI research over UFAI research".

This seems like a good parameter to have a grasp on for any number of reasons, and I can't recall it previously being debated in the x-risk / EA community.

EDIT:  To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky.  The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.

EDIT 2:  Carl Shulman's opinion can be found on the Facebook discussion here.

Do Earths with slower economic growth have a better chance at FAI?

To stay with the lingo (also, is "arguendo" your new catchphrase?): There are worlds in which slower economic growth is good news, and worlds in which it's not. As to which of these contributes more probability mass, that's hard to say -- because the actual measure would be technological growth, for which economic growth can be a proxy.

However, I find it hard to weigh scenarios such as "because of stagnant and insufficient growth, more resources are devoted to exploiting the remaining inefficiencies using more advanced tech" versus "the worldwide economic upswing caused a flurry of research activities".

R&D, especially foundational work, is such a small part of worldwide GDP that any old effect can dominate it. For example, a "cold war"-ish scenario between China and the US would slow economic growth -- but strongly speed up research in high-tech dual-use technologies.

While we often think "Google" when we think tech research, we should mostly think DoD in terms of resources spent -- state actors traditionally dwarf even multinational corporations in research investments, and whether their investments are spurned or spurred by a slowdown in growth (depending on the non-specified cause of said slowdown) is anyone's guess.

R&D, especially foundational work, is such a small part of worldwide GDP that any old effect can dominate it.

(Note: I agree with this point.)

4Luke_A_Somers
Yes - I think we'd be in much better shape with high growth and total peace than the other way around. Corporations seem rather more likely to be satisfied with tool AI (or at any rate AI with a fixed cognitive algorithm, even if it can learn facts) than, say, a nation at war.
3roystgnr
Indeed. The question of "would X be better" usually is shorthand for "would X be better, all else being equal", and since in this case X is an integrated quantity over basically all human activity it's impossible for all else to be equal. To make the question well defined you have to specify what other influences go into the change in economic growth. Even in the restricted question where we look at various ways that charity and activism might increase economic growth, it looks likely that different charities and different policy changes would have different effects on FAI development.
1John_Maxwell
So how would working to decrease US military spending rank as an effective altruist goal then? I'd guess most pro-economic-growth EAs are also in favor of it.

Related questions:

1) Do Earths with dumber politicians have a better chance at FAI?

2) Do Earths with anti-intellectual culture have a better chance at FAI?

3) Do Earths with less missionary rationalism have a better chance at FAI?

4) How much time should we spend pondering questions like (1)-(3)?

5) How much time should we spend pondering questions like (4)?

5Eliezer Yudkowsky
(1) How much dumber? If we can make politicians marginally dumber in a way that slows down economic growth, or better yet decreases science funding while leaving economic growth intact, without this causing any other marginal change in stupid decisions relevant to FAI vs. UFAI, then sure. I can't think of any particular marginal changes I expect, because I already expect almost all such decisions to be made incorrectly, but I worry that this is only a failure of imagination on my part - that with even dumber politicians, things could always become unboundedly worse in ways I didn't even conceive. (2) This seems like essentially the same question as above. (3) No. Missionary rationalists are a tiny fraction of the world population who contribute most of FAI research and support.

I don't have an answer for the question, but I note that the hypothetical raises the possibility of an anthropic explanation for twenty-first-century recessions. So if you believe that the Fed is run by idiots who should have acted otherwise, consider the possibility that in branches where the Fed did in fact act otherwise, the world now consists of computronium.

I find this especially compelling in light of Japan's two "lost decades" combined with all the robotics research for which Japan is famous. Obviously the anthropic hypothesis requires the most stagnation in nations which are good at robots and AI.

I don't have an answer for the question

I hope we can all agree that in discussions on LW this should by no means be regarded as a bad thing.

[anonymous]

Can we put a lid on this conflation of subjective probability with objective quantum branching please? A deterministic fair coin does not split the world, and neither would a deterministic economic cycle. Or are we taking seriously the possibility that the course of the economy is largely driven by quantum randomness?

EDIT: actually I just noticed that small quantum fluctuations from long ago can result in large differences between branches today. At that point I'm confused about what the anthropics implies we should see, so please excuse my overconfidence above.

Or are we taking seriously the possibility that the course of the economy is largely driven by quantum randomness?

Isn't everything?

8Eliezer Yudkowsky
This comment was banned, which looked to me like a probable accident with a moderator click, so I unbanned it. If I am in error can whichever mod PM me after rebanning it? Naturally if this was an accident, it must have been a quantum random one.
9Emile
I'm certainly taking it seriously, and am somewhat surprised that you're not. Some ways small-sized effects (the ones most likely to "depend" on quantum randomness) can eventually have large-scale impacts:
  • DNA mutations
  • Which sperm gets to the egg
  • The weather
  • Soft errors from cosmic rays or thermal radiation
5Jack
Whether or not quantum randomness drives the course of the economy, it's still a really good idea to stop conflating subjective probability and the corresponding notion of possible worlds with quantum/inflationary/whatever many-worlds theories. Rolf's comment doesn't actually do this: I read him as speaking entirely about the anthropic issue. Eliezer, on the other hand, totally is conflating them in the original post. I understand that there are reasons to think anthropic issues play an essential role in the assignment of subjective probabilities, especially at a decision theoretic level. But given a) subjective uncertainty over whether or not many-worlds is correct, b) our ignorance of how the Born probability rule figures into the relationship, and c) the way anthropics skews anticipated experiences, I am really suspicious that anyone here is able to answer the question as posed (about Everett branches). People are actually answering the subjective-probability version, which is not obviously the same thing.
1Kaj_Sotala
You don't need quantum many worlds for this kind of speculation: e.g. a spatially infinite universe would also do the trick.
0Jack
As I said:
0Nisan
I agree, subjective uncertainty isn't the same as quantum uncertainty! On the other hand, there have been rumors that coinflips are not deterministic. See here.
-6RolfAndreassen
6Larks
It also explains why the dot com boom had to burst,
9Viliam_Bur
why Charles Babbage never built his Analytical Engine, why Archimedes was killed, and the Antikythera mechanism drowned in the sea, why most children in our culture hate maths, and why the internet is mostly used for chatting, games, and porn.
gjm

It unfortunately also explains

  • why Alan Turing never published his work on the theory of computation
  • why all the projects in the 1950s aimed at making general-purpose computers got cancelled for complex political reasons no one understood
  • why that big earthquake killed everyone at the Dartmouth Conference, tragically wiping out almost the entire nascent field of AI
  • why all attempts at constructing integrated circuits mysteriously failed
  • why progress abruptly stopped following Moore's law in the early 1980s
  • why no one has ever been able to make computer systems capable of beating grandmasters at chess, questioning Jeopardy answers, searching huge databases of information, etc.
7RolfAndreassen
All of which are true in other possible worlds, which for all we know may have a greater amplitude than ours. That we are alive does not give us any information on how probable we are, because we can't observe the reference class. For all we know, we're one of those worlds that skate very, very close to the edge of disaster, and the two recessions of the aughts are the only things that have kept us alive; but those recessions were actually extremely unlikely, and the "mainline" branches of humanity, the most probable ones, are alive because the Cuban War of 1963 set the economy back to steam and horses. (To be sure, they have their problems, but UFAI isn't among them.) Note that, if you take many-worlds seriously, then in branches where UFAI is developed, there will still be some probability of survival due to five cosmic rays with exactly the right energies hitting the central CPU at just the right times and places, causing SkyNet to divide by zero instead of three. But the ones who survive due to that event won't be very probable humans. :)
4Paul Crowley
If most copies of me died in the shooting but I survived, I should expect to find that I survived for only one reason, not for multiple independent reasons. Perhaps the killer's gun jammed at the crucial moment, or perhaps I found a good place to hide, but not both.
0gjm
On the other hand, if you are being shot at repeatedly and survive a long time, you should expect there to be lots of reasons (or one reason with very broad scope -- maybe everyone's guns were sabotaged in a single operation, or maybe they've been told to let you live, or a god is looking out for you). And it's only in that sort of situation that anthropic "explanations" would be in any way sensible. It's always true enough to say "well, of course I find myself still alive because if I weren't I wouldn't be contemplating the fact that I'm still alive". But most of the time this is really uninteresting. Perhaps it always is. The examples given in this thread seem to me to call out for anthropic explanations to much the same extent as does the fact that I'm over 40 years old and not dead yet.
2khafra
This just prompted me to try to set a subjective probability that quantum immortality works, so e.g. if I remember concluding that it was 5% likely at 35 and find myself still alive at 95, I will believe in quantum immortality (going by SSA tables). I'm currently finding this subjective probability too creepy to actually calculate.
0gjm
I suggest giving some thought first to exactly what "believing in quantum immortality" really amounts to.
0khafra
To me, it means expecting to experience the highest-weighted factorization of the Hamiltonian that contains a conscious instantiation of me, no matter how worse-than-death that branch may be.
2gjm
I think you should analyse further. Expecting conditional on still being alive? Surely you expect that even without "quantum immortality". Expecting to find yourself still alive, and experience that? Again, what exactly do you mean by that? What does it mean to expect to find yourself still alive? (Presumably not that others will expect to find you still alive in any useful sense, because with that definition you don't get q.i.) I expect there are Everett branches in which you live to 120 as a result of a lot of good luck (or, depending on what state you're in, bad luck). Almost equivalently, I expect there's a small but nonzero probability that you live to 120 as a result of a lot of luck. { If you live to 120 / In those branches where you live to 120 } you will probably have experienced a lot of surprising things that enabled your survival. None of this is in any way dependent on quantum mechanics, still less on the many-worlds interpretation. It seems to me that "believing in quantum immortality" is a matter of one's own values and interpretive choices, much more than of any actual beliefs about how the world is. But I may be missing something.
0khafra
I should perhaps be more clear that I'm not distinguishing between "MWI and functionalism are true" and "quantum immortality works." That is, if "I" consciously experience dying, and my consciousness ceases, but "I" go on experiencing things in other Everett branches, I'm counting that as QI. I'm currently making observations consistent with my own existence. If I stop making that kind of observation, I consider that no longer being alive. Going again with the example of a 35-year-old: Conditional on having been born, I have a 96% chance of still being alive. So whatever my prior on QI, that's far less than a decibel of evidence in favor of it. Still, ceteris paribus, it's more likely than it was at age 5.
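For the arithmetic behind "far less than a decibel", here is a minimal sketch in Python of the naive calculation. It treats the observation "I am still alive" as having probability 1 under khafra's reading of QI and probability equal to the actuarial survival rate otherwise; the 0.96 figure is khafra's, while the 0.20 survival-to-95 figure is a made-up placeholder, not an actuarial value.

```python
import math

def decibels_of_evidence(p_alive_without_qi):
    """Log-likelihood ratio, in decibels, for 'I always find myself alive'
    (probability 1) over ordinary mortality (probability p), given the
    observation that I am still alive."""
    return 10 * math.log10(1.0 / p_alive_without_qi)

print(decibels_of_evidence(0.96))  # ~0.18 dB at 35 -- "far less than a decibel"
print(decibels_of_evidence(0.20))  # ~7 dB if only 20% reach 95 (placeholder figure)
```

gjm's objection below is aimed at exactly this numerator: conditional on making the observation at all, both hypotheses assign it probability 1, so it is not obvious the calculation measures anything.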
0gjm
Sure. But I'm not sure I made the point I was trying to make as clearly as I hoped, so I'll try again. Imagine two possible worlds. In one of them, QM works basically as currently believed, and the way it does this is exactly as described by MWI. In the other, there is at every time a single kinda-classical-ish state of the world, with Copenhagen-style collapses or something happening as required. In either universe it is possible that you will find yourself still alive at 120 (or much more) despite having had plenty of opportunities to be killed off by accident, illness, etc. In either universe, the probability of this is very low (which in the former case means most of the measure of where we are now ends up with you dead earlier, and in the latter means whatever exactly probability means in a non-MWI world). In either universe, every observation you make will show yourself alive, however improbable that may seem. How does observing yourself still alive at 150 count as evidence for MWI, given all that? What you mustn't say (so it seems to me): "The probability of finding myself alive is very low on collapse theories and high on MWI, so seeing myself still alive at 150 is evidence for MWI over collapse theories". If you mean the probability conditional on you making the observation at age 150, it's 1 in both cases. If you mean the probability not conditional on that, it's tiny in both cases. (Assuming arguendo that Pr(nanotech etc. makes lots of people live to be very old by then) is negligible.) The same applies if you try to go halfway and take the probability simply conditional on you making the observation: MWI or no MWI, only a tiny fraction of observations you make will be at age 150.
0khafra
In the MWI-universe, it is probable at near unity that I will find myself still alive at 120. In the objective collapse universe, there's only a small fraction of a percent chance that I'll find myself alive at 120. In the objective collapse universe, every observation I make will show myself alive--but there's only a fraction of a percent of a chance that I'll make an observation that shows my age as 120. The probability of my making the observation "I am 150 years old," given objective collapse, is one of those probabilities so small it's dominated by "stark raving mad" type scenarios. Nobody you've ever known has made that observation; neither has anybody they know. How can this not be evidence?
0gjm
What's the observation you're going to make that has probability near-1 on MWI and probability near-0 on collapse -- and probability given what? "I'm alive at 120, here and now" -- that has small probability either way. (On most branches of the wavefunction that include your present self, no version of you gets to say that. Ignoring, as usual, irrelevant details involving positive singularities, very large universes, etc.) "90 years from now I'll still be alive" (supposing arguendo that you're 30 now) -- that has small probability either way. "I'm alive at 120, conditional on my still being alive at 120" -- that obviously has probability 1 either way. "On some branch of the wavefunction I'm still alive at 120" -- sure, that's true on MWI and (more or less by definition) false on a collapse interpretation; but it's not something you can observe. It corresponds exactly to "With nonzero probability I'm still alive at 120", which is true on collapse.
0khafra
This is the closest one. However, that's not an observation, it's a prediction. The observation is "90 years ago, I was 30." That's an observation that almost certainly won't be made in a collapse-based world; but will be made somewhere in an MWI world. "small probability either way" only applies if I want to locate myself precisely, within a branch as well as within a possible world. If I only care about locating myself in one possible world or the other, the observation has a large probability in MWI.
0wedrifid
You are correct.

If Charles Babbage had built his Analytical Engine, then that would seem to me to have gotten programming started long earlier, such that FAI work would in turn start much sooner, and so we'd have no hardware overhang to worry about. Imagine if this conversation were taking place with 1970s technology.

0[anonymous]
Rolf, I don't mean to pick on you specifically, but the genre of vague "anthropic filter!" speculations whenever anything is said to be possibly linked even slightly to catastrophe needs to be dialed back on LW. Such speculations almost never a) designate a definite theoretical framework on which that makes sense, or b) make any serious effort to show a non-negligible odds ratio (e.g. more than 1%) on any (even implausible) account of anthropic reasoning. However, they do invite a lot of nonsense.

I responded here.

So far as I can tell, the most likely reason we wouldn't get Friendly AI is the total serial research depth required to develop and implement a strong-enough theory of stable self-improvement with a possible side order of failing to solve the goal transfer problem.

I'm curious that you seem to think the former problem is harder or less likely to be solved than the latter. I've been thinking the opposite, and one reason is that the latter problem seems more philosophical and the former more technical, and humanity seems to have a lot of technical talent that we can eventually recruit to do FAI research, but much less untapped philosophical talent.

As another side note, I don't think we should be focusing purely on the "we come up with a value-stable architecture and then the FAI will make a billion self-modifications within the same general architecture" scenario. Another possibility might be that we don't solve the stable self-improvement problem at all, but instead solve the value transfer problem in a general enough way that the FAI we build immediately creates an entirely new architecture for the next-generation FAI and transfers its values to its creation usin...

4Wei Dai
Here's an attempt to verbalize why I think this, which is a bit different from Eliezer's argument (which I also buy to some extent). First I think UFAI is much easier than FAI and we are putting more resources into the former than the latter. To put this into numbers for clarity, let's say UFAI takes 1000 units of work, and FAI takes 2000 units of work, and we're currently putting 10 units of work into UFAI per year, and only 1 unit of work per year into FAI. If we had a completely stagnant economy, with 0% growth, we'd have 100 years to do something about this, or for something to happen to change this, before it's too late. If the economy was instead growing at 5% per year, and this increased both UFAI and FAI work by 5% per year, the window of time "for something to happen" shrinks to about 35 years. The economic growth might increase the probability per year of "something happening" but it doesn't seem like it would be enough to compensate for the shortened timeline.
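For concreteness, here is a minimal sketch of that arithmetic in Python (all figures are Wei Dai's illustrative assumptions, not estimates):

```python
# Wei Dai's toy numbers: UFAI needs 1000 units of work, FAI needs 2000;
# currently 10 units/year go into UFAI and 1 unit/year into FAI.

def years_until(total_work, rate_per_year, growth=0.0):
    """Years until cumulative work reaches total_work, with the annual
    rate compounding at `growth` per year."""
    done, years = 0.0, 0
    while done < total_work:
        done += rate_per_year
        rate_per_year *= 1 + growth
        years += 1
    return years

print(years_until(1000, 10, growth=0.00))  # 100 years of slack at 0% growth
print(years_until(1000, 10, growth=0.05))  # ~37 years at 5% growth
```

On these numbers the cumulative FAI work done before the UFAI threshold is roughly the same (~100 units) either way; what shrinks with growth is the calendar time for anything outside the model (movement growth, institutional change) to intervene.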
4Eliezer Yudkowsky
Also: Many likely reasons for something to happen about this center around, in appropriate generality, the rationalist!EA movement. This movement is growing at a higher exponent than current economic growth.
2owencb
I think this is the strongest single argument that economic growth might currently be bad. However, even then what matters is the elasticity of movement growth rates with respect to economic growth rates. I don't know how we can measure this; I expect it's positive and less than one, but I'm rather more confident about that lower bound than the upper bound.
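To see why the elasticity matters, here is a sketch extending the toy numbers from Wei Dai's comment above, under the assumption that FAI/movement work compounds at elasticity times the economy's 5% while UFAI work compounds at the full 5%; the elasticity values are arbitrary illustrations.

```python
# Illustrative only: UFAI work grows at the full 5%/yr, FAI/movement work
# grows at elasticity * 5%/yr, using Wei Dai's toy starting rates.

def cumulative(rate_per_year, growth, years):
    """Total work done over `years`, starting at `rate_per_year` per year
    and compounding at `growth` per year."""
    total = 0.0
    for _ in range(years):
        total += rate_per_year
        rate_per_year *= 1 + growth
    return total

deadline = 37  # years until 1000 units of UFAI work at 10/yr growing 5%/yr
for elasticity in (0.0, 0.5, 1.0):
    fai_done = cumulative(1.0, 0.05 * elasticity, deadline)
    print(elasticity, round(fai_done))  # ~37, ~60, ~102 units of FAI work
```

On these toy numbers, an elasticity near one leaves the total FAI work done before the deadline about where the stagnant world puts it (~100 units), while an elasticity near zero forfeits most of it, which is why the bound matters.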

See here and here.

This position seems unlikely to me at face value. It relies on a very long list of claims, and given the apparently massive improbability of the conjunction, there is no way this consideration is going to be the biggest impact of economic progress:

  1. The most important determinant of future welfare is whether you get FAI or UFAI (this presupposes a relatively detailed model of how AI works, of what the danger looks like, etc.)
  2. This will happen quite soon, and relevant AI work is already underway.
  3. The main determinant of FAI vs. UFAI is whether an appropriate theoretical framework for goal-stability is in place.
  4. As compared to UFAI work, the main difficulty for developing such a framework for goal-stability is the serial depth of the problem.
  5. A 1% boost in economic activity this year has a non-negligible effect on the degree of parallelization of relevant AI work.

I don't see how you can defend giving any of those points more than 1/2 probability, and I would give the conjunction less than 1% probability. Moreover, even in this scenario, the negative effect from economic progress is quite small. (Perhaps a 1% increase in sustained economic productivity makes the ...

General remark: At some point I need to write a post about how I'm worried that there's an "unpacking fallacy" or "conjunction fallacy fallacy" practiced by people who have heard about the conjunction fallacy but don't realize how easy it is to take any event, including events which have already happened, and make it look very improbable by turning one pathway to it into a large series of conjunctions. E.g. I could produce a long list of things which allegedly have to happen for a moon landing to occur, some of which turned out to not be necessary but would look plausible if added to the list ante facto, with no disjunctive paths to the same destination, and thereby make it look impossible. Generally this manifests when somebody writes a list of alleged conjunctive necessities, and I look over the list and some of the items seem unnecessary (my model doesn't go through them at all), obvious disjunctive paths have been omitted, the person has assigned sub-50% probability to things that I see as mainline 90% probabilities, and conditional probabilities when you assume the theory was right about 1-N would be significantly higher for N+1. Most of all, if you ima...

7lukeprog
Related: There's a small literature on what Tversky called "support theory," which discusses packing and unpacking effects: Tversky & Koehler (1994); Ayton (1997); Rottenstreich & Tversky (1997); Macchi et al. (1997); Fox & Tversky (1998); Brenner & Koehler (1999); Chen et al. (2001); Boven & Epley (2003); Brenner et al. (2005); Bilgin & Brenner (2008).
8ChrisHallquist
Luke asked me to look into this literature for a few hours. Here's what I found. The original paper (Tversky and Koehler 1994) is about disjunctions, and how unpacking them raises people's estimate of the probability. So for example, asking people to estimate the probability someone died of "heart disease, cancer, or other natural causes" yields a higher probability estimate than if you just ask about "natural causes." They consider the hypothesis that this might be because people take the researcher's apparent emphasis as evidence that it's more likely, but they tested & disconfirmed this hypothesis by telling people to take the last digit of their phone number and estimate the percentage of couples that have that many children; the percentages sum to more than 100%. Finally, they check whether experts are vulnerable to this bias by doing an experiment similar to the first experiment, but using physicians at Stanford University as the subjects and asking them about a hypothetical case of a woman admitted to an emergency room. They confirmed that yes, experts are vulnerable to this mistake too. This phenomenon is known as "subadditivity." A subsequent study (Rottenstreich and Tversky 1997) found that subadditivity can even occur when dealing with explicit conjunctions. Macchi et al. (1999) found evidence of superadditivity: ask some people how probable it is that the freezing point of alcohol is below that of gasoline, and other people how probable it is that the freezing point of gasoline is below that of alcohol, and the average answers sum to less than 1. Other studies try to refine the mathematical model of how people make judgements in these kinds of cases, but the experiments I've described are the most striking empirical results, I think. One experiment that talks about unpacking conjunctions (rather than disjunctions, like the experiments I've described so far) is Boven and Epley (2003), particularly their first experiment, where they ask people how much an oil refinery should...
0Nick_Beckstead
What this shows is that people are inconsistent in a certain way. If you ask them the same question in two different ways (packed vs. unpacked) you get different answers. Is there any indication of which is the better way to ask the question, or whether asking it some other way is better still? Without an answer to this question, it's unclear to me whether we should talk about an "unpacking fallacy" or a "failure to unpack fallacy".
0lukeprog
Here's a handy example discussion of related conjunction issues from the Project Cyclops report:
6paulfchristiano
Regarding the "unpacking fallacy": I don't think you've pointed to a fallacy here. You have pointed to a particular causal pathway which seems to be quite specific, and I've claimed that this particular causal pathway has a tiny expected effect by virtue of its unlikeliness. The negation of this sequence of events simply can't be unpacked as a conjunction in any natural way, it really is fundamentally a disjunction. You might point out that the competing arguments are weak, but they can be much stronger in the cases were they aren't predicated on detailed stories about the future. As you say, even events that actually happened can also be made to look quite unlikely. But those events were, for the most part, unlikely ex ante. This is like saying "This argument can suggest that any lottery number probably wouldn't win the lottery, even the lottery numbers that actually won!" If you had a track record of successful predictions, or if anyone who embraced this view had a track record of successful predictions, maybe you could say "all of these successful predictions could be unpacked, so you shouldn't be so skeptical of unpackable arguments." But I don't know of anyone with a reasonably good predictive record who takes this view, and most smart people seem to find it ridiculous. 1. I don't understand your argument here. Yes, future civilization builds AI. It doesn't follow that the value of the future is first determined by what type of AI they build (they also build nanotech, but the value of the future isn't determined by the type of nanotech they build, and you haven't offered a substantial argument that discriminates between the cases). There could be any number of important events beforehand or afterwards; there could be any number of other important characteristics surrounding how they build AI which influence whether the outcome is positive or negative. 2. Do you think the main effects of economic progress in 1600 were on the degree of parallelization in AI
0Eliezer Yudkowsky
What's a specific relevant example of something people are trying to speed up / not speed up besides AGI (= UFAI) and FAI? You pick out aging, disease, and natural disasters as not-sped-up but these seem very loosely coupled to astronomical benefits.
6paulfchristiano
Increasing capital stocks, improving manufacturing, improving education, improving methodologies for discourse, figuring out important considerations. Making charity more efficient, ending poverty. Improving collective decision-making and governance. All of the social sciences. All of the hard sciences. Math and philosophy and computer science. Everything that everyone is working on, everywhere in the world. I picked out conflict, accidents, and resource depletion as not being sped up 1-for-1, i.e. such that a 1% boost in economic activity corresponds to a <1% boost in those processes. Most people would say that war and accidents account for many bad things that happen. War is basically defined by people making decisions that are unusually misaligned with aggregate welfare. Accidents are basically defined by people not getting what they want. I could have lumped in terrorism, and then accounted for basically all of the ways that we can see things going really badly in the present day. You have a particular story about how a bad thing might happen in the future. Maybe that's enough to conclude the future will be entirely unlike the present. But it seems like (1) that's a really brittle way to reason, however much you want to accuse its detractors of the "unpacking fallacy," and most smart people take this view, and (2) even granting almost all of your assumptions, it's pretty easy to think of scenarios where war, terrorism, or accidents are inputs into AI going badly, or where better education, more social stability, or better decision-making are inputs into AI going well. People promoting these positive changes are also working against forces that wouldn't be accelerated, like people growing old and dying and thereby throwing away their accumulated human capital, or infrastructure being stressed to keep people alive, etc. etc.
5Eliezer Yudkowsky
How is an increased capital stock supposed to improve our x-risk / astronomical benefit profile except by being an input into something else? Yes, computer science benefits, that's putatively the problem. We need certain types of math for FAI but does math benefit more from increased capital stocks compared to, say, computing power? Which of these other things are supposed to save the world faster than computer science destroys it, and how? How the heck would terrorism be a plausible input into AI going badly? Terrorists are not going to be the most-funded organizations with the smartest researchers working on AGI (= UFAI) as opposed to MIT, Google or Goldman Sachs. Does your argument primarily reduce to "If there's no local FOOM then economic growth is a good thing, and I believe much less than you do in local FOOM"? Or do you also think that in local FOOM scenarios higher economic growth now expectedly results in a better local FOOM? And if so is there at least one plausible specific scenario that we can sketch out now for how that works, as opposed to general hopes that a higher economic growth exponent has vague nice effects which will outweigh the shortening of time until the local FOOM with a correspondingly reduced opportunity to get FAI research done in time? When you sketch out a specific scenario, this makes it possible to point out fragile links which conjunctively decrease the probability of that scenario, and often these fragile links generalize, which is why it's a bad idea to keep things vague and not sketch out any concrete scenarios for fear of the conjunction fallacy. It seems to me that a lot of your reply, going by the mention of things like terrorism and poverty, must be either prioritizing near-term benefits over the astronomical future, or else being predicated on a very different model from local FOOM. We already have a known persistent disagreement on local FOOM. This is an important modular part of the disagreement on which other MIRIfolk
4ESRogs
I'm confused by the logic of this sentence (in particular how the 'though' and 'like me' fit together). Are you saying that you and Wei both at first accepted that faster econ growth meant less chance of FAI, but then were both caused to doubt this conclusion by the fact that others debated the claim?
3Eliezer Yudkowsky
Yep.
3ESRogs
This was one of those cases where precisely stating the question helps you get to the answer. Thanks for the confirmation!
3paulfchristiano
Even given a very fast local foom (to which I do assign a pretty small probability, especially as we make the situation more detailed and conclude that fewer things are relevant), I would still expect higher education and better discourse to improve the probability that people handle the situation well. It's weird to cash this out as a concrete scenario, because that just doesn't seem like how reasonable reasoning works. But trying anyway: someone is deciding whether to run an AI or delay, and they correctly choose to delay. Someone is arguing that research direction X is safer than research direction Y, and others are more likely to respond selectively to correct arguments. Someone is more likely to notice there is a problem with a particular approach and they should do something differently, etc. etc. Similarly, I expect war or external stressors to make things worse, but it seems silly to try and break this down as very specific situations. In general, people are making decisions about what to do, and if they have big alternative motivations (like winning a war, or avoiding social collapse, or what have you), I expect them to make decisions that are less aligned with aggregate welfare. They choose to run a less safe AI, they pursue a research direction that is less safe, etc. Similarly, I expect competent behavior by policy-makers to improve the situation across a broad distribution of scenarios, and I think that is less likely given other pressing issues. We nationalize AI projects, we effectively encourage coordination of AI researchers, we fund more safety-conscious research, etc. Similarly, I expect that an improved understanding of forecasting and decision-making would improve outcomes, and improved understanding of social sciences would play a small role in this. And so on. But at any rate, my main question is how you can be so confident of local foom that you think this tiny effect given local foom scenarios dominates the effect given business as usual?
5Eliezer Yudkowsky
How did this happen as a result of economic growth having a marginally greater exponent? Doesn't that just take us to this point faster and give less time for serial thought, less time for deep theories, less time for the EA movement to spread faster than the exponent on economic growth, etcetera? This decision would ceteris paribus need to be made at some particular cumulative level of scientific development, which will involve relatively more parallel work and relatively less serial work if the exponent of econ growth is higher. How does that help it be made correctly? Exposing (and potentially answering) questions like this is very much the point of making the scenario concrete, and I have always held rather firmly on meta-level epistemic grounds that visualizing things out concretely is almost always a good idea in math, science, futurology and anywhere. You don't have to make all your predictions based on that example but you have to generate at least one concrete example and question it. I have espoused this principle widely and held to it myself in many cases apart from this particular dispute. Procedurally, we're not likely to resolve that particular persistent disagreement in this comment thread, which is why I want to factor it out. I could make analogies about smart-people-will-then-decide and don't-worry-the-elite-wouldn't-be-that-stupid reasoning to various historical projections that failed, but I don't think we can get very much mileage out of nonspecifically arguing which of us would have been more wrong about 2000 if we had tried to project it out while living in 1800. I mean, obviously a major reason I don't trust your style of reasoning is that I think it wouldn't have worked historically, not that I think your reasoning mode would have worked well historically but I've decided to reject it because I'm stubborn. (If I were to be more specific, when I listen to your projections of future events they don't sound very much like recollections of past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.)
jefftk

past events as I have read about them in history books, where jaw-dropping stupidity usually plays a much stronger role.

How sure are you that this isn't hindsight bias, that if various involved historical figures had been smarter they would have understood the situation and not done things that look unbelievably stupid looking back?

Do you have particular historical events in mind?

8paulfchristiano
We are discussing the relative value of two different things: the stuff people do intentionally (and the byproducts thereof), and everything else. In the case of the negative scenarios I outlined this is hopefully clear: wars aren't sped up 1-for-1, so there will be fewer wars between here and any relevant technological milestones. And similarly for other stressors, etc. Regarding education: Suppose you made everything 1% more efficient. The amount of education a person gets over their life is 1% higher (because you didn't increase the pace of aging / turnover between people, which is the thing people were struggling against, and so people do better at getting what they want). Other cases seem to be similar: some things are a wash, but more things get better than worse, because systematically people are pushing on the positive direction. This discussion was useful for getting a more precise sense of what exactly it is you assign high probability to.
5lukeprog
I wish you two had the time for a full-blown adversarial collaboration on this topic, or perhaps on some sub-problem within the topic, with Carl Shulman as moderator.
2ModusPonies
Please do this. I really, really want to read that post. Also I think writing it would save you time, since you could then link to it instead of re-explaining it in comments. (I think this is the third time I've seen you say something about that post, and I don't read everything you write.) If there's anything I can do to help make this happen (such as digging through your old comments for previous explanations of this point, copyediting, or collecting a petition of people who want to see the post to provide motivation), please please please let me know.
3jefftk
My experience has been that asking people "let me know if I can help" doesn't result in requests for help. I'd suggest just going ahead and compiling a list of relevant comments (like this one) and sending them along. (If Eliezer doesn't end up writing the post, well, you now have a bunch of comments you could use to get started on a post yourself.)

The "normal view" is expressed by GiveWell here. Eliezer's post above can be seen as a counterpoint to that. GiveWell does acknowledge that "One of the most compelling cases for a way in which development and technology can cause harm revolves around global catastrophic risks..."

Some Facebook discussion here including Carl's opinion:

https://www.facebook.com/yudkowsky/posts/10151665252179228

2lukeprog
I'm reposting Carl's Facebook comments to LW, for convenience. Carl's comments were: Eliezer replied to Carl:
0John_Maxwell
It's worth noting that the relationship between economic growth and the expected quality of global outcomes is not necessarily a linear one. The optimal speed of economic growth may be neither super-slow nor super-fast, but some "just right" value in between that makes peace, cooperation, and long-term thinking commonplace while avoiding technological advancement substantially faster than what we see today.
0NancyLebovitz
The possibility of AI being invented to deal with climate change hadn't occurred to me, but now that it's mentioned, it doesn't seem impossible, especially if climate engineering is on the agenda. Any thoughts about whether climate is a sufficiently hard problem to inspire work on AIs?
0Shmi
Climate seems far easier. At least it's known what causes climate change, more or less. No one knows what it would take to make an AGI.
0NancyLebovitz
I didn't mean work on climate change might specifically be useful for developing an AI, I meant that people might develop AI to work on weather/climate prediction.
2Shmi
Right, and my reply was that AGI is much harder, so unlikely. Sorry about not being clear.

Any thoughts about what sort of society optimizes for insight into difficult problems?

5D_Alex
I have a few thoughts... Naturally, the first question is what "optimise for insight" means.
  1. A society which values leisure and prosperity, e.g. the current Scandinavians...? Evidence: they punch well above their weight economically and produce world-class stuff (Volvo, Nokia, Ericsson, Bang & Olufsen spring to mind), but the working pace, from my experience, could be described as "leisurely". Possibly the best "insights/man-hour" ratio.
  2. A society which values education, but which somehow ended up with a screwed-up economy, e.g. the USSR...? Evidence: first man in space, atomic power/weapons, numerous scientific breakthroughs... Possibly the best "insights/stuff" ratio.
  3. A wealthy modern capitalist democracy which values growth, e.g. the USA...? Evidence: more "science" and inventions produced in total than anywhere else. Possibly the best "insights/time" ratio.

Something to take into account:

Speed of economic growth affects the duration of the demographic transition from high-birth-rate-and-high-death-rate to low-birth-rate-and-low-death-rate for individual countries, and thus affects the total world population.

A high-population world, full of low-education people desperately struggling to survive (i.e. low on Maslow's hierarchy), might be more likely to support bad decisions about AI development made for short-term nationalistic reasons.

2NancyLebovitz
UFAI might be developed by a large company as well as by a country.
0Thomas
Or by a garage firm.
1[anonymous]
Is it plausible that UFAI (or any kind of strong AI) will be created by just one person? It seems like important mathematical discoveries have been made single-handedly, like calculus.
5JoshuaZ
Neither Newton nor Leibniz invented calculus single-handedly, as is often claimed. There was a lot of precursor work. Newton, for example, credited the idea of the derivative to Fermat's prior work on drawing tangent lines (which itself was a generalization of ancient Greek ideas about tangents for conic sections). Others also discussed similar notions before Newton and Leibniz, such as the mean speed theorem. After both of them, a lot of work still needed to be done to make calculus useful. The portion of calculus which Newton and Leibniz developed is only about half of what is covered in a normal intro calc class today. A better example might be Shannon's development of information theory, which really did have almost no precursors and leapt from his brow fully formed like Athena.
4Randaly
UFAI is not likely to be a purely mathematical discovery. The most plausible early UFAI designs will require vast computational resources and huge amounts of code. In addition, UFAI has a minimum level of intelligence required before it becomes a threat; one might well say that UFAI is analogous not to calculus itself, but rather to solving a particular problem in calculus that uses tools not invented for hundreds of years after Newton and Leibniz.
1Thomas
AI is a math problem, yes. And almost all math problems have been solved by a single person. And several math theories were also built this way, single-handedly. James Garfield invented another proof of the Pythagorean Theorem. Excellent for a POTUS, more than most mathematicians ever accomplish. Not good enough for anything like AI. Could be that the AI problem is no harder than Fermat's Last Theorem. Could be that it is much harder. Harder than the Riemann hypothesis, maybe. It is also possible that it is just hard enough for one dedicated (brilliant) human and will be solved suddenly.
0NancyLebovitz
I don't think it will be just one person, but I don't have a feeling for how large a team it would take. Opinions?
0knb
How likely is it that AI would be developed first in a poor, undeveloped country that hasn't gone through the demographic transition? My guess is: extremely low.
0Douglas_Reay
I'd agree, but point out that the troubles originating within a country often don't stay within its borders. If you are a rich but small country with an advanced computer industry, and your neighbour is a large but poor country with a strong military, this is going to affect your decisions.

Great Stagnation being good news

Per Thiel the computer industry is the exception to the Great Stagnation, so not sure how much it really helps. You can claim that building flying cars would take resources away from UFAI progress, though intelligence research (i.e. machine learning) is so intertwined with every industry that this is a weak argument.

0John_Maxwell
How likely is it that better growth prospects in non-software industries would lead to investment dollars being drawn away from the software industry to those industries and a decrease in UFAI progress on net?
0Dr_Manhattan
"Not likely", since = software is eating the world.

A key step in your argument is the importance of the parallel/serial distinction. However we already have some reasonably effective institutions for making naturally serial work parallelizable (e.g. peer review), and more are arising. This has allowed new areas of mathematics to be explored pretty quickly. These provide a valve which should mean that extra work on FAI is only slightly less effective than you'd initially think.

You could still think that this was the dominant point if economic growth would increase the speed of both AI and AI safety work to ...

What kinds of changes to the economy would disproportionately help FAI over UFAI? I gather that your first-order answer is "slowing down", but how much slower? (In the limit, both kinds of research grind to a halt, perhaps only to resume where they left off when the economy picks up.) Are there particular ways in which the economy could slow down (or even speed up) that would especially help FAI over UFAI?

I would also expect socialist economic policies to increase chances of successful FAI, for two reasons. First, it would decrease incentives to produce technological advancements that could lead to UFAI. Second, it would make it easier to devote resources to activities that do not result in a short-term personal profit, such as FAI research.

2Viliam_Bur
Socialist economic policies, perhaps yes. On the other hand, full-blown socialism... How likely would a socialist government be to insist that its party line must be hardcoded into the AI's values, and what would be the likely consequences? How likely would the scientists working on the AI be selected for their rationality, as opposed to their loyalty to the regime?
4AlexMennen
How does anything in my comment suggest that I think brutal dictatorships increase the chance of successful FAI? I only mentioned socialist economic policies.
3Viliam_Bur
I don't think you suggested that; I just wanted to prevent a possible connotation (that I think some people are likely to make, including me). Note: I also didn't downvote your comment - because I think it is reasonable - so probably someone else made that interpretation. Probably influenced by my comment. Sorry for that. This said, I don't think a regime must be a brutal dictatorship to insist that its values must be hardcoded into the AI values. I can imagine nice people insisting that you hardcode there The Universal Declaration of Human Rights, religious tolerance, diversity, tolerance to minorities, preserving cultural heritage, preserving the nature, etc. Actually, I imagine that most people would consider Eliezer less reliable to work on Friendly AI than someone who professes all the proper applause lights.
0AlexMennen
If a government pursued its own AGI project, that could be a danger, but not hugely more so than private AI work. In order to be much more threatening, it would have to monopolize AI research, so that organizations like MIRI couldn't exist. Even then, FAI research would probably be easier to do in secret than making money off of AI research (the primary driver of UFAI risk) would be.

It's easy to see why rationalists shouldn't help develop technologies that speed AI. (On paper, even an innovation that speeds FAI twice as much as it speeds AI itself would probably be a bad idea if it weren't completely indispensable to FAI. On the other hand, the FAI field is so small right now that even a small absolute increase in money, influence, or intellectual power for FAI should have a much larger impact on our future than a relatively large absolute increase or decrease in the rate of progress of the rest of AI research. So we should be more in...

Motivated reasoning warning: I notice that I want it to be the case that economic growth improves the FAI win rate, or at least doesn't reduce it. I am not convinced of either side, but here are my thoughts.

Moore's Law, as originally formulated, was that the unit dollar cost per processor element halves in each interval. I am more convinced that this is serially limited than I am that FAI research is serially limited. In particular, semiconductor research is saturated with money, and FAI research isn't; this makes it much more likely to have used up any gai...

1Eliezer Yudkowsky
FAI seems to me to be mostly about serial depth of research. UFAI seems to be mostly about cumulative parallel volume of research. Things that affect this are effectual even if Moore's Law is constant. We could check how economic status affects science funding. What does your model claim happens here?
6jimrandomh
Right now the institutional leadership in the US is (a) composed almost entirely of baby boomers, a relatively narrow age band, and (b) significantly worse than average (as compared to comparable institutions and leaders in other countries). When they start retiring, they won't be replaced with people who are only slightly younger, but by people who're much younger and spread across a larger range of ages, causing organizational competence to regress to the mean, in many types of institutions simultaneously. I also believe - and this is much lower confidence - that this is the reason for the Great Stagnation; institutional corruption is suppressing and misrouting most research, and a leadership turnover may reverse this process, potentially producing an order of magnitude increase in useful research done.
5Vaniver
I wonder how much of this estimate is your distance to the topic; it seems like there could be a bias to think that one's own work is more serial than it actually is, and others' work more parallelizable. (Apply the reversal test: what would I expect to see if the reverse were true? Well, a thought experiment: would you prefer two of you working for six months (either on the same project together, or on different projects) and then nothing for six months, or one of you working for a year? The first makes more sense in parallel fields, the second in serial fields. If you imagined that instead of yourself, it was someone in another field, what would you think would be better for them? What's different?)
Shmi

This question is quite loaded, so maybe it's good to figure out which part of economic or technological growth is potentially too fast. For example, would Moore's law proceeding at the rate of economic growth, say 4-5% annually, instead of exceeding it by an order of magnitude, make a difference?
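For concreteness, a quick back-of-the-envelope comparison (illustrative rates only: a doubling every 1.5-2 years, the usual Moore's-law figure, corresponds to roughly 40-60% annual growth, versus the 4-5% above):

```python
import math

def doubling_time(annual_growth_rate):
    """Years needed to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

for rate in (0.04, 0.05, 0.41, 0.59):
    print(f"{rate:.0%} per year -> doubles in {doubling_time(rate):.1f} years")
# 4% -> ~17.7 years, 5% -> ~14.2 years, 41% -> ~2.0 years, 59% -> ~1.5 years
```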

9Eliezer Yudkowsky
Offhand I'd think a world like that would have a much higher chance of survival. Their initial hardware would be much weaker, and their AIs would have to use much better algorithms. They'd stand a vastly better chance of getting intelligence amplification before AI. Advances in neuroscience would have a long lag time before translating into UFAI. Moore's Law is not like vanilla econ growth - I felt really relieved when I realized that Moore's Law for serial speeds had definitively broken down. I am much less ambivalent about that being good news than I am about the Great Stagnation or Great Recession being disguised good news.
0RHollerith
How bad is an advance (e.g., a better programming language) that increases the complexity and sophistication of the projects that a team of programmers can successfully complete? My guess is that it is much worse than an advance picked at random that generates the same amount of economic value, and about half or two-thirds as bad as an improvement in general-purpose computing hardware that generates an equal amount of economic value.

This question is broader than just AI. Economic growth is closely tied to technological advancement, and technological advancement in general carries great risks and great benefits.

Consider nuclear weapons, for instance: was humanity ready for them? They are now something that could destroy us at any time. But on the other hand, they might be the solution to an oncoming asteroid - a threat that could have destroyed us at any point over millions of years.

Likewise, nanotechnology could create a grey goo event that kills us all; or it could lead to a world without poverty, without... (read more)

6Eliezer Yudkowsky
To be clear, the question is not whether we should divert resources from FAI research to trying to slow world economic growth; that seems risky and ineffectual. The question is whether, as a good and ethical person, I should avoid any opportunities to join in ensembles trying to increase world economic growth.
5NancyLebovitz
If the ideas for increasing world economic growth can be traced back to you, might the improvement in your reputation increase the odds of FAI?

Sounds like a rather fragile causal pathway. Especially if one is joining an ensemble.

2ialdabaoth
Follow-up: If you are part of an ensemble generating ideas for increasing world economic growth, how much information will that give you about the specific ways in which economic growth will manifest, compared to not being part of that ensemble? How easily leveraged is that information towards directly controlling or exploiting a noticeable fraction of the newly-grown economy? As a singular example: how much money could you get from judicious investments, if you know where things are going next? How usable would those funds be towards mitigating UFAI risks and optimizing FAI research, in ratio to the increased general risk of UFAI caused by the economic growth itself?
2Eliezer Yudkowsky
That's why I keep telling people about Scott Sumner, market monetarism, and NGDP level targeting - it might not let you beat the stock market indices, but you can end up with some really bizarre expectations if you don't know about the best modern concept of "tight money" and "loose money". E.g. all the people who were worried about hyperinflation when the Fed lowered interest rates to 0.25% and started printing huge amounts of money, while the market monetarists were saying "You're still going to get sub-trend inflation; our indicators say there isn't enough money being printed." Beating the market is hard. Not being stupid with respect to the market is doable.
3Epiphany
Perhaps a better question would be: "If my mission is to save the world from UFAI, should I expend time and resources attempting to determine what stance to take on other causes?" No matter how much potential you have to learn multiple subjects, investing that time and energy into FAI would, in theory, result in a better outcome for FAI - though I am becoming increasingly aware of the limits to how good I can be with subjects I haven't specialized in, and if you think about it, you may realize that you have limitations as well. One of the most intelligent people I've ever met said to me (on a different subject): "I don't know enough to do it right. I just know enough to get myself in trouble." If you could do anything else with the time and effort this ensemble would require of you to make a quality decision and participate in its activities, what would make the biggest difference?
1gwern
Not much of a point in nukes' favor since there are so many other ways to redirect asteroids; even if nukes had a niche for taking care of asteroids very close to impact, it'd probably be vastly cheaper to just put up a better telescope network to spot all asteroids further off.
0Rob Bensinger
Nukes and bioweapons don't FOOM in quite the way AGI is often thought to, because there's a nontrivial proliferation step following the initial development of the technology. (Perhaps they resemble Oracle AGI in that respect; subsequent to being created, the technology has to unlock itself, either suddenly or by a gradual increase in influence, before it can have a direct catastrophic impact.) I raise this point because the relationship between technology proliferation and GDP may differ from that between technology development and GDP. More, global risks tied to poverty (regional conflicts resulting in biological or nuclear war; poor sanitation resulting in pandemic diseases; etc.) may compete with ones tied to prosperity. Of course, these risks might be good things if they provided the slowdown Eliezer wants, gravely injuring civilization without killing it. But I suspect most non-existential catastrophes would have the opposite effect. Long-term thinking and careful risk assessment are easier when societies (and/or theorists) feel less immediately threatened; post-apocalyptic AI research may be more likely to be militarized, centralized, short-sighted, and philosophically unsophisticated, which could actually speed up UFAI development. Two counter-arguments to the anti-apocalypse argument: 1. A catastrophe that didn't devastate our intellectual elites would make them more cautious and sensitive to existential risks in general, including UFAI. An AI-related crisis (that didn't kill everyone, and came soon enough to alter our technological momentum) would be particularly helpful. 2. A catastrophe would probably favor strong, relatively undemocratic leadership, which might make for better research priorities, since it's easier to explain AI risk to a few dictators than to a lot of voters. As an alternative to being quite sure that the benefits somewhat outweigh the risks, you could somewhat less confidently believe that the benefits overwhelmingly outweigh the risks.

One countervailing thought: I want AGI to be developed in a high-trust, low-scarcity, social-psychological context, because that seems like it matters a lot for safety.

Slow growth enough and society as a whole becomes a lot more bitter and cutthroat?

Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.

Forgive me if this is a stupid question, but wouldn't UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?

9knb
An FAI would have to be created by someone who had a clear understanding of how the whole system worked - in order for them to know it would be able to maintain the original values its creator wanted it to have. Because of that, an FAI would probably have to have fairly clean, simple code. You could also imagine a super-complex kludge of different systems (think of the human brain) that works when backed by massive processing power but is not well understood. It would be hard to predict what that system would do without turning it on. The overwhelming probability is that it would be a UFAI, since FAIs are such a small fraction of the set of possible mind designs. It's not that a UFAI needs more processing power, but that if tons of processing power is needed, you're probably not running something which is provably Friendly.
0TheOtherDave
Yes. The OP is assuming that the process of reliably defining the goals/values which characterize FAI is precisely what requires a "mathier and more insight-based" process which parallelizes less well and benefits less from brute-force computing power.

The Great Stagnation has come with increasing wealth and income disparity.

This is to say: A smaller and smaller number of people are increasingly free to spend an increasing fraction of humanity's productive capacity on the projects they choose. Meanwhile, a vastly larger number of people are increasingly restricted to spend more of their personal productive capacity on projects they would not choose (i.e. increasing labor hours), and in exchange receive less and less control of humanity's productive capacity (i.e. diminishing real wages) to spend on projects that they do choose.

How does this affect the situation with respect to FAI?

I think we're past the point where it matters. If we'd had a few lost decades in the mid-twentieth century, then maybe (and just to be cognitively polite here, this is just my intuition talking) the intelligence explosion could have been delayed significantly. We are just a decade off from home computers with >100 teraflops, not to mention the distressing trend toward neuromorphic hardware (here's Ben Chandler of the SyNAPSE project talking about his work on Hacker News). With all this inertia, it would take an extremely large downturn to slow us now. Engineering a... (read more)

Any ideas for making FAI parallelize better? Or for reducing UFAI resources without reducing economic growth?

4Eliezer Yudkowsky
If there were a sufficiently smart government with a sufficiently demonstrated track record of cluefulness whose relevant officials seemed to genuinely get the idea of pro-humanity/pro-sentience/galactic-optimizing AI, the social ideals and technical impulse behind indirect normativity, and that AI was incredibly dangerous, I would consider trusting them to be in charge of a Manhattan Project with thousands of researchers with enforced norms against information leakage, like government cryptography projects. This might not cure required serial depth but it would let FAI parallelize more without leaking info that could be used to rapidly construct UFAI. I usually regard this scenario as a political impossibility. Things that result in fewer resources going into AI specifically would result in fewer UFAI resources without reducing overall economic growth, but it needs to be kept in mind that some such research occurs in financial firms pushing trading algorithms, and a lot more in Google, not just in places like universities.
0Benya
To the extent that industry researchers publish less than academia (this seems particularly likely in financial firms, and to a lesser degree at Google), though, a hypothetical complete shutdown of academic AI research should reduce uFAI's parallelization advantage by 2+ orders of magnitude (presumably, the largest industrial uFAI teams are much smaller than the entire academic AI research community). It seems that even reducing academic funding for AI only somewhat should translate pretty well into less parallel uFAI development.

Could you be confusing the direction of causality here? I suspect that technological growth tends to lead to economic growth rather than the reverse.

knb

Robin Hanson came to a similar conclusion here, although his concern was emulations.

0[anonymous]
Hanson was talking about a specific line of research, not general growth.

I'm not convinced that slowing economic growth would result in FAI developing faster than UFAI, and I think your main point of leverage for getting an advantage lies elsewhere (explained below). The key is obviously the proportion between the two, not just slowing down the one or speeding up the other, so I suggest a brainstorm to consider all of the possible ways that slow economic growth could also slow FAI. For one thought: do non-profit organizations do disproportionately poorly during recessions?

The major point of leverage, I think, is people, not the econ... (read more)

8Viliam_Bur
Uhm, it is not that simple. Perhaps selfish people cooperate less, but among altruistic people often the price for cooperation is worshiping the same applause lights. Selfish people optimize for money or power, altruistic people often optimize for status in altruistic community. Selfish people may be more agenty, simply because they know that if they don't work for their selfish benefits, no one else will. Altruistic people often talk about what others should do, what the government should do, etc. Altruistic people collect around different causes, they compete for donor money and public attention, even their goals may sometimes be opposed; e.g. "protecting nature" vs "removing the suffering inherent in nature"; "spreading rationality" vs "spreading religious tolerance"; "making people equal and happy" vs "protecting the cultural heritage". People don't like those who hurt others, but they also admire high-status people and despise low-status people. Geniuses are often crazy. I'm not saying it is exactly the other way round as you said. Just: it's complicated. I scanned through your comment and listed all the counterarguments that immediately came to my mind. If good intentions and intelligence translated to success so directly, then communists wouldn't have killed millions of people, Mensa would rule the world now, and we all would be living in the post-singularity paradise already.

I think this depends on how much ability you think you have to cash in on any given opportunity. E.g., you gaining a ton of money is probably going to help the cause of FAI more than whatever amount of economic growth is generated helps bring about AI. So basically, either put your money where your theories are or don't publicly theorize?

1Eliezer Yudkowsky
This is true for non-super-huge startups that donate any noticeable fraction of generated wealth to EA, yes - that amount is not a significant percentage of overall global econ growth, and would be a much larger fraction of FAI funding.

This might come down to eugenics. Imagine that in 15 years, with the help of genetic engineering, lots of extremely high-IQ people are born, and their superior intelligence means that in another 15 or so years (absent a singularity) they will totally dominate AGI software development. The faster the economic growth rate, the more likely it is that AGI will be developed before these super-geniuses come of age.

4Eliezer Yudkowsky
Are these high-IQ folk selectively working on FAI rather than AGI to a sufficient degree to make up for UFAI's inherently greater parallelizability? EDIT: Actually, smarter researchers probably count for more relative bonus points on FAI than on UFAI, to a greater extent than even differences of serial depth of cognition, so it's hard to see how this could realistically be bad. Reversal test: dumber researchers everywhere would not help FAI over UFAI.
0James_Miller
I'm not sure; this would depend on their personalities. But you might learn a lot about their personalities while they were still too young to be effective programmers. In one future Earth you might trust them and hope for enough time for them to come of age, whereas in another you might be desperately trying to create a foom before they overtake you. Hopefully, lots of the variance in human intelligence comes down to genetic load, having a low genetic load often makes you an all-around great and extremely smart person (someone like William Marshal), and we will soon create babies with extremely low genetic loads. If this is to be our future, we should probably hope for slow economic growth.
0mwengler
Extremely high IQ arising from engineering... is that not AI? This is not a joke. UAI is essentially the fear that "we" will be replaced by another form of intelligence, outcompeted for resources by what is essentially another life form. But how do "we" not face the same threat from an engineered lifeform just because some of the ingredients are us? If such a new engineered lifeform replaces natural humanity, is that not a UAI? If we can build some curator instinct or 3 laws or whatever into this engineered superhuman, is that not FAI? The interesting thing to me here is what we mean by "we." I think it is more common for a Less Wrong poster to identify as "we" with an engineered superhuman in a meat substrate than with an engineered non-human intelligence in a non-meat substrate. Considering this, maybe an FAI is just an AI that learns enough about what we think of as human that it can hack it. It could construct itself so that it felt to us like our descendant, our child. Then "we" do not resent the AI for taking all "our" resources, because the AI has successfully led us to be happy to see our child succeed beyond what we managed. Perhaps one might say that of course this would be on our list of things we would define as unfriendly. Then we build AIs that "curate" humans as we are now, and we are precluded from enhancing ourselves or evolving past some limit we have preprogrammed into our FAI?
0John_Maxwell
http://lesswrong.com/lw/erj/parenting_and_happiness/94th?context=3

For FAI to beat UAI, sufficient work on FAI needs to be done before sufficient work on AI is done.

If slowing the world economy doesn't change the proportion of work done on things, then a slower world economy doesn't increase the chance of FAI over UAI; it merely delays the time at which one or the other happens. Without specifying how the world's production is turned down, wouldn't we need to assume that EY's productivity is turned down along with the rest of the world's?

If we assume all of humanity except EY slows down, AND that EY is turning the FAI knob harder than the other knobs relative to the rest of humanity, then we increase the chance of FAI preceding UAI.

I'm not sure that humane values would survive in a world that rewards cooperation weakly. Azathoth grinds slow, but grinds fine.

To oversimplify, there seem to be two main factors that increase cooperation, two basic foundations for law: religion and economic growth. Of these, religion seems to be far more prone to volatility. It is possible for some marginally more intelligent people to point out the absurdity of the entire doctrine, and along with the religion, all the other societal values collapse.

Economic growth seems to be a far more promising foundatio... (read more)

If we were perpetually stuck at Roman Empire levels of technology, we'd never have to worry about UFAI at all. That doesn't make it a good thing.

If we all got superuniversal-sized computers with halting oracles, we'd die within hours. I'm not sure the implausible extremes are a good way to argue here.

4cody-bryce
Why do you find the idea of having the level of technology from the Roman Empire so extreme? It seems like the explosion in technological development and use in recent centuries could be the fluke. There was supposedly a working steam engine in the Library of Alexandria in antiquity, but no one saw any reason to encourage that sort of thing. During the Middle Ages people didn't even know what the Roman aqueducts were for. With just a few different conditions, it seems within the realm of possibility that ancient Roman technology could have been a nearly-sustainable peak of human technology. Much more feasible would be staying foragers for the life of the species, though.
5asr
Some good ideas were lost when the Roman Empire went to pieces, but there were a number of important technical innovations made in formerly-Roman parts of Western Europe in the centuries after the fall of the empire. In particular, it was during the Dark Ages that Europeans developed the stirrup, the horse collar and the moldboard plow. Full use of the domesticated horse was a Medieval development, and an important one, since it gave a big boost to agriculture and war. Likewise, the forced-air blast furnace is an early-medieval development. The conclusion I draw is that over the timescale of a few centuries, large-scale political disruption did not stop technology from improving.
4CronoDAS
It may even have helped. Consider China...
2gwern
Sure about that? http://richardcarrier.blogspot.com/2007/07/experimental-history.html http://richardcarrier.blogspot.com/2007/08/lynn-white-on-horse-stuff.html
0cody-bryce
Although it's still a point worth making that those technologies were adopted, they were not innovations - they were Eastern inventions from antiquity. Stirrups in particular are a fascinating tale of progress not being a sure thing. The stirrup predates not only the fall of Rome, but the founding of Rome. Despite constant trade with the Parthians/Sassanids, as well as constantly getting killed by their cavalry, the Romans never saw fit to adopt such a useful technology. Like the steam engine, we see that technological adoption isn't so inevitable.
0gwern
It's not clear stirrups would've been helpful to the Romans at all, much less 'such a useful technology'; see the first Carrier link in my reply to asr.
4asr
I am surprised by this claim and would be interested to hear more.
2mwengler
I guess we could have just skipped all the evolution that took us from chimp-bonobo territory to where we are, and we would never have had to worry about UAI. Or Artificial Intelligence of any sort. Heck, we wouldn't have even had to worry much about unfriendly or friendly Natural Intelligence either!
0PaulS
What makes you think that? Technological growth had already hit a clear exponential curve by the time of Augustus. The large majority of the time to go from foraging to industry had already passed, and it doesn't look like our history was an unusually short one. Barring massive disasters, most other Earths must fall at least within an order of magnitude of variation from this case. In any case, we're definitely at a point now where indefinite stagnation is not on the table... unless there's a serious regression or worse.
0CronoDAS
Oh, come on. I'm sure at least a few people would end up with a Fate Worse Than Death instead! ;)
8Eliezer Yudkowsky
Actually I'd be quite confident in no fates worse than death emerging from that scenario. There wouldn't be time for anyone to mess up on constructing something almost in our moral frame of reference - just the first working version of AIXI / Schmidhuber's Gödel machine eating its future light cone.
0Will_Sawin
but it can't AIXI the other halting oracles?
0[anonymous]
Nah. At least a few people would end up with a Fate Worse Than Death instead. ;)
[anonymous]

If you are pessimistic about global catastrophic risk from future technology and you are most concerned with people alive today rather than future folk, slower growth is better unless the effects of growth are so good that they outweigh time discounting.
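One rough way to make that trade-off concrete (a minimal sketch with assumed symbols, not taken from the comment above): let $c_t$ be the consumption of people alive today, growing at rate $g$; let $\delta$ be their time-discount rate; and let $T(g)$ be the expected date of the catastrophe, decreasing in $g$. Then what present people get is roughly

\[
V(g) \;\approx\; \sum_{t=0}^{T(g)} \frac{u\!\left(c_0 (1+g)^t\right)}{(1+\delta)^t},
\]

so slower growth helps through a larger $T(g)$ and hurts through slower-growing consumption; the claim above is that the first effect wins unless the consumption gains are large enough to overcome the discounting.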

But growth in the poorest countries is good, because it contributes negligibly to research, those national economies are relatively self-contained, and more growth there means more human lives lived before a possible end.

Also, while more focused efforts are obviously better in general than trying to affect growth, there is (at least) one situation where you might face an all-or-nothing decision: voting. I'm afraid the ~my solution here~ candidate will not be available.

Katja comments here:

As far as I can tell, the effect of economic growth on parallelization should go the other way. Economic progress should make work in a given area less parallel, relatively helping those projects that do not parallelize well.

0Eliezer Yudkowsky
My model of Paul Christiano does not agree with this statement.
0Paul Crowley
I was fortunate to discuss this with Paul and Katja yesterday, and he seemed to feel that this was a strong argument.
0Eliezer Yudkowsky
...odd. I'm beginning to wonder if we're wildly at skew angles here.
2paulfchristiano
I do think the bigger point is that your argument is a tiny effect, even if it's correct, so it gets dwarfed by any number of random other things (like better-educated people, or a lower cumulative probability of war), and even more so by the general arguments that suggest the effects of growth would be differentially positive. But if you accept all of your argument except the last step, Katja's point seems right, and so I think you've gotten the sign on this particular effect wrong. More economic growth means more work per person and the same number of people working in parallel - do you disagree with that? (If so, do you think that it's because more economic activity means a higher population, or that it means diverting people from other tasks to AI? I agree there will be a little bit of the latter, but it's a pretty small effect and you haven't even invoked the relevant facts about the world - marginal AI spending is higher than average AI spending - in your argument.) So if you care about parallelization in time, the effect is basically neutral (the same number of people are working on AI at any given time). If you care about parallelization across people, the effect is significant and positive, because each person does a larger fraction of the total project of building AI. It's not obvious to me that insight-constrained projects (as opposed to "normal" AI) care particularly about either. But if they care somewhat about both, then this would be a positive effect. They would have to care several times more about parallelization in time than parallelization in people in order for you to have gotten the sign right.
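One way to make the quoted argument concrete (a minimal sketch with assumed quantities $W$, $N$, $p$, $L$; not anything Paul or Katja stated): suppose building AI requires a fixed stock $W$ of research work, $N$ people work on it at any moment, each producing $p$ units of work per year over a career of $L$ years. Then roughly

\[
\text{calendar time} = \frac{W}{N\,p}, \qquad
\text{total person-years} = \frac{W}{p}, \qquad
\text{distinct contributors} \approx \max\!\left(N,\; \frac{W}{p\,L}\right),
\]

so growth that raises $p$ while leaving $N$ fixed keeps the headcount working in parallel at any moment unchanged, but shrinks the number of distinct people who ever contribute, meaning each contributes a larger fraction of the project - the sense in which the work becomes less parallel across people.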

As long as we are in a world where billions are still living in absolute poverty, low economic growth is politically radicalizing and destabilizing. This can prune world branches quite well on its own, no AI needed. Remember, the armory of apocalypse is already unlocked. It is not important which project succeeds first if the world gets radioactively sterilized, poisoned, etc. before either one succeeds. So, no. Not helpful.

Before you can answer this question, I think you have to look at a more fundamental question, which is simply: why are so few people interested in supporting FAI research or concerned about the possibility of UFAI?

It seems like there are a lot of factors involved here. In times of economic stress, short-term survival tends to dominate over long-term thinking. For people who are doing long-term thinking, there are a number of other problems that many of them are more focused on, such as resource depletion, global warming, etc.; even if you don't think ... (read more)

To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.

It sounds like status quo bias. If growth were currently 2% higher, should the person then seize growth-slowing opportunities?

On... (read more)

For economic growth, don't focus on the total number. Ask about the distribution. Increasing world wealth by 20% would have minimal impact on making people's lives better if that increase is concentrated among the top 2%. It would have a huge impact if it's concentrated in the bottom 50%.

So if you have a particular intervention in mind, ask yourself, "Is this just going to make the rich richer, or is it going to make the poor richer?" An intervention that eliminates malaria, or provides communication services in refugee camps, or otherwise assists the most disadvantaged, can be of great value without triggering your fears.

0NancyLebovitz
I'm not sure you're right - if the crucial factors (decent nutrition, access to computing power, free time (have I missed something?)) become more widely distributed, the odds of all sorts of innovation, including UFAI, might go up.
[anonymous]

If a good outcome requires that influential people cooperate and have longer time-preferences, then slower economic growth than expected might increase the likelihood of a bad outcome.

It's true that periods of increasing economic growth haven't always led to great technology decision-making (the Cold War), but I'd expect an economic slowdown, especially in a democratic country, to make people more willing to take technological risks (to restore economic growth), and less likely to cooperate with, listen to, or fund cautious dissenters (like people who say we should be worried about AI).

4Eliezer Yudkowsky
One could just as easily argue that an era of slow growth will take technological pessimism seriously, while an era of fast growth is likely to want to go gung-ho full-speed-ahead on everything.
4[anonymous]
Assuming some link between tech and growth, low-growth pessimism seems more likely to be "technology has mattered less and moved slower than we expected", which is a different flavor.
2Luke_A_Somers
A culture that goes gung-ho full-speed-ahead on everything might build autonomous AI into a robot that turns out to be unfriendly in some notable way while not also being self-improving. Seems to me like that would be one of the most reliable paths to getting people to take FAI seriously: a big, lossy, messy factory recall, lots of media attention, irate customers.

Will 1% and 4% RGDP growth worlds have the same levels of future shock? A world in which production doubles every 70 years (1%) and a world in which it doubles every 18 years (4%) seem like they will need very different abilities to deal with change.

I suspect that more future shock would lead to more interest in stable self-improvement, especially on the institutional level. But it's not clear what causes some institutions to do the important-but-not-urgent work of future-proofing and others not to - it may be the case that in the more sedate 1% growth world, more effort will be spent on future-proofing, which would be good news for FAI relative to UFAI.

2Eliezer Yudkowsky
We've already had high levels of future shock and it hasn't translated into any such interest. This seems like an extremely fragile and weak transmission mechanism. (So do most transmission mechanisms of the form, "Faster progress will lead people to believe X which will support ideal Y which will lead them to agree with me/us on policy Z.")
D_Alex

Eliezer, this post reeks of an ego trip.

"I wish I had more time, not less, in which to work on FAI"... Okay, world, lets slow right down for a while. And you, good and viruous people with good for technological or economic advancement: just keep quiet until it is safe.
