You might be producing some useful info, but mostly about whether an arbitrary system exhibits unlimited exponential growth. Imagine getting 1000 different programmers to each throw together some model of tech progress (some based on completing tasks, some based on extracting resources, some based on random differential equations, etc.) and seeing what proportion of them give exponential growth and then stagnation. Actually, there isn't a scale on your model, so who can say whether the running out of tasks, or the stagnation, happens next year or in 100,000 years. At best, you will be able to tell how strongly outside-view priors should favor exponential growth over growth-and-then-decay. (Pure growth is clearly simpler, but how much simpler?)
Yeah, that's sorta my hope. The model is too abstract and disconnected from real-world numbers to be able to predict things like "The singularity will happen in 2045" but maybe it can predict things like "If you've had exponential growth for a while, it is very unlikely a priori / outside-view that growth will slow, and in fact quite likely that it will accelerate dramatically. Unless you are literally running out of room to grow, i.e. hitting fundamental physical limits in almost all endeavors."
This was an interesting post; it got me thinking a bit about the right way to represent "technology" in a mathematical model.
I think I have a pretty solid qualitative understanding of how technological change impacts economic production - constraints are the right representation for that. But it's not clear how that feeds back into further technological development. What qualitative model structure captures the key aspects of recursive technological progress?
A few possible threads to pull on:
Very interesting :)
I suspect the model is making a hidden assumption about the lack of "special projects"; e.g., does the model assume there can't be a single project that yields a bonus that makes all the other projects' tasks instantly solvable?
Also, I'm not sure that the model allows us to distinguish between scenarios in which a major part of overall progress is very local (e.g. happens within a single company) and more Hansonian scenarios in which the contribution to progress is well distributed among many actors.
Yeah, I tried to build the model with certain criticisms of the intelligence explosion argument in mind -- for example, the criticism that it assumes intelligence is a single thing rather than a diverse collection of skills, or the criticism that it assumes AGI will be a single thing rather than a diverse collection of more specific AI tools, or the criticism that it assumes takeoff will happen after human level but not before. My model makes no such assumptions, but it still gets intelligence explosion. I think this is an already somewhat interesting result, though not a major update for me since I didn't put much credence in those objections anyway.
Currently the model just models civilization's progress overall, so yeah it can't distinguish between local vs. distributed takeoff. I'm hoping to change that in the future, but I'm not sure how yet.
Eyeballing the graphs you produced, it looks like the singularities you keep getting are hyperbolic growth, which we already have in real life (compare log(world GDP) to your graph of log(projects completed) - their shapes are almost identical).
So far as I can tell, what you've shown is that you almost always get a big speedup of hyperbolic growth as AI advances but without discontinuities, which is what the 'continuous takeoff' people like Christiano already say they are expecting.
AI is just another, faster step in the hyperbolic growth we are currently experiencing, which corresponds to a further increase in rate but not a discontinuity (or even a discontinuity in rate).
So perhaps this is evidence of continuous takeoff still being quite fast.
Yes, thanks! I mostly agree with that assessment,* though as an aside I have a beef with the implication that Bostrom, Yudkowsky, etc. expect discontinuities. That beef is with Paul Christiano, not you. :)
So far the biggest update for me, I think, is that the model seems to show that it's quite possible to get an intelligence explosion even without economic feedback loops. Like, even with a fixed compute/money budget--or even with a fixed number of scientists and a fixed amount of research funding--we could get a singularity. At least in principle. This is weird because I'm pretty sure I remember reading that the growth we've seen so far is best explained via an economic feedback loop: better technology allows for a bigger population and economy, which allows for more scientists and funding, which allows for better technology. So I'm a bit confused, I must say -- my model is giving me results I would have predicted wouldn't happen.
*There have been a few cases where the growth didn't look hyperbolic, but rather like a steady exponential trend that then turns into a singularity. World GDP, by contrast, has what looks like at least three exponential trends in it, such that it is more parsimonious to model it as hyperbolic growth. I think.
I should add though that I haven't systematically examined these graphs yet, so it's possible I'm just missing something--e.g. it occurs to me right now that maybe some of these graphs I saw were really logistic functions rather than hyperbolic or exponential-until-you-hit-limits. I should make some more and look at them more carefully.
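One quick, rough way I could check this, assuming the "projects completed" series can be exported from a model run: fit both an exponential and a hyperbolic (finite-time-singularity) curve to the log of the series and compare residuals. The sketch below is in Python rather than NetLogo, the fitting method is crude, and the example data at the bottom is fake placeholder data, not an actual run:

```python
import numpy as np

# Toy check: given a time series y(t) of cumulative projects completed
# (exported from a model run), compare a pure-exponential fit against a
# hyperbolic (finite-time-singularity) fit. Assumes y > 0 throughout.

def exp_sse(t, y):
    # log y = a + b*t  ->  ordinary least squares on log(y)
    A = np.vstack([np.ones_like(t), t]).T
    coef, _, _, _ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.sum((np.log(y) - A @ coef) ** 2)

def hyper_sse(t, y):
    # y = c / (t_sing - t)^k  ->  log y = log c - k*log(t_sing - t)
    # crude grid search over the singularity time t_sing
    best = np.inf
    for t_sing in np.linspace(t[-1] + 1e-3, t[-1] + 5 * (t[-1] - t[0]), 200):
        A = np.vstack([np.ones_like(t), -np.log(t_sing - t)]).T
        coef, _, _, _ = np.linalg.lstsq(A, np.log(y), rcond=None)
        best = min(best, np.sum((np.log(y) - A @ coef) ** 2))
    return best

# Example with fake hyperbolic data (replace with an exported run):
t = np.linspace(0, 9, 100)
y = 100.0 / (10.0 - t)
print("exponential SSE:", exp_sse(t, y), " hyperbolic SSE:", hyper_sse(t, y))
```

Whichever shape leaves much smaller residuals on the log scale is the more parsimonious description of that run; a logistic would show up as both fits failing near the end of the series.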
It seems like your model doesn't factor in legality. We have a lot more laws adding bureaucracy to technology development today than we had 50 years ago.
I've made a model/simulation of technological progress that you can download and run on your laptop.
My goal is to learn something about intelligence explosions, takeoff speeds, discontinuities, human-level milestones, AGI vs. tools, bottlenecks, or something else. I'll be happy if I can learn something about even one of these things, even if it's just a minor update and not anything close to conclusive.
So far I've just got a very basic version of the model built. It works, but it's currently unclear what--if anything--we can learn from it. I need to think more about whether the assumptions it uses are realistic, and I need to explore the space of parameter settings more systematically.
I'm posting it here to get feedback on the basic idea, and maybe also on the model so far if people want to download it and play around. I'm particularly interested in evidence/arguments about whether or not this is a productive use of my time, and arguments that some hidden assumption my model makes is problematically determining the results.
If you want to try out the model yourself, download NetLogo here and then open the file in this folder.
How the model works:
The main part of the model consists of research projects, which are lists of various types of task. Civilization completes tasks to complete research projects, and when projects get finished, civilization gets a "bonus" which allows it to do new types of task, and to do some old types faster.
The projects, the lists of tasks needed to complete them, the speeds at which civilization can do the tasks, and the bonuses granted by completing projects are all randomly generated, typically using exponential distributions and often with parameters you can change in the UI. Other important parameters can be changed in the UI also, such as how many task types are "off limits" for technological improvement, and how many task types are "temporarily off limits" until some specified level of technology is reached.
As explained so far, the model represents better technology leading to more research directions (more types of task become available) and faster progress (civilization can do tasks in less time).
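If you'd rather skim code than open NetLogo, here is a rough Python sketch of the core loop described above. The distributions, parameter values, and bonus rules below are illustrative stand-ins, simplified relative to the actual NetLogo implementation:

```python
import random

# Illustrative stand-in for the core loop: projects are lists of task-type
# requirements; completing a project unlocks new task types and speeds up
# old ones. All numbers here are arbitrary, not the NetLogo defaults.

N_TASK_TYPES = 50
N_PROJECTS = 200
MEAN_WORK_PER_TYPE = 20.0

random.seed(0)

# Speed at which civilization can do each task type (0 = not yet doable).
speeds = [1.0 if i < 5 else 0.0 for i in range(N_TASK_TYPES)]

# Each project: remaining work per task type, drawn from exponential distributions.
projects = []
for _ in range(N_PROJECTS):
    types = random.sample(range(N_TASK_TYPES), random.randint(1, 10))
    projects.append({t: random.expovariate(1.0 / MEAN_WORK_PER_TYPE) for t in types})

completed = 0
history = []
for tick in range(2000):
    for proj in projects:
        if not proj:          # already finished on an earlier tick
            continue
        # Work on every task type in the project that civilization can do.
        for t in list(proj):
            if speeds[t] > 0:
                proj[t] -= speeds[t]
                if proj[t] <= 0:
                    del proj[t]
        if not proj:
            completed += 1
            # Bonus: unlock one new task type and speed up a few old ones.
            locked = [i for i, s in enumerate(speeds) if s == 0]
            if locked:
                speeds[random.choice(locked)] = 1.0
            for t in random.sample(range(N_TASK_TYPES), 3):
                if speeds[t] > 0:
                    speeds[t] *= 1.1
    history.append(completed)

print(history[::200])  # crude look at the shape of the growth curve
```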
Projects are displayed as dots/stars which flicker as work is done on them. When they complete, they turn into big green circles. Their location in the display represents how difficult they are to complete: the x-axis encodes how many tasks are involved, and the y-axis encodes how many different kinds of tasks are involved. To the left of the main display is a graph that tracks a bunch of metrics I've deemed interesting, scaled so that they all have similar heights.
There are several kinds of diminishing returns and several kinds of increasing returns in the model.
Diminishing:
Increasing:
The model also has a simple module representing the "economic" side of things -- i.e. over time, civilization can work on a greater number of projects simultaneously, if you choose. I have a few different settings representing different scenarios:
The "info" tab of the NetLogo file explains things in more detail, if you are interested.
What tends to happen when I run it:
The model tends to produce progress (specifically, in the metric of "projects completed" -- see the log plot) somewhere between exponential and superexponential. Sometimes it displays what appears to be a clear exponential trend (a very straight line on the log scale) that fairly rapidly transitions into a singularity (a vertical line on the log scale).
Interestingly, progress in the metric "% of tasks done faster thanks to research" is not typically exponential, much less a singularity; it is usually a jumpy but more or less linear march from 0% to 100%.
Sometimes progress stagnates, though I've only seen this happen extremely early on--I've never seen steady exponential growth followed by stagnation.
For a while it seemed that progress would typically shoot through the roof around the time that almost all tasks were doable & being improved. This is what Amdahl's Law would predict, I think: Get rid of the last few bottlenecks and progress will soar. However, I now think that's wrong; the growth still happens even if a substantial fraction of tasks are "off-limits," and/or off-limits temporarily. I'm not sure what to think now, but after I give my head a rest I expect ideas will come.
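For reference, the usual Amdahl's Law bound: if a fraction p of the work can be sped up by a factor s and the remaining fraction 1 − p can't be improved at all, then

$$\text{speedup} = \frac{1}{(1-p) + p/s} \;\le\; \frac{1}{1-p}$$

so with, e.g., 20% of a project's tasks permanently off-limits, that project can never be completed more than 5x faster, no matter how fast the other tasks get. That's why I expected a substantial off-limits fraction to block the takeoff, and why I'm puzzled that it apparently doesn't.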
The various parameter settings I've put into the model seem to have surprisingly little effect on all of the above. They affect how long everything takes, but rarely do they affect the fundamental shape of the trajectory. In particular, I predicted that removing the "effort feedback loop" entirely, by choosing "all projects all the time" or "100 doable projects", would slow down progress a lot, but in practice we still seem to get singularities. Of course, I haven't systematically compared the results; this is just the vague impression I get from the handful of different runs I've done.
Doubts I have about the accuracy of the model & ideas for things to add