This seems like it’s mixing together some extremely different things.
Fitting transistors onto a microchip is an engineering process with a straightforward outcome metric. Smoothly diminishing returns is the null hypothesis there, also known as the experience curve.
Shakespeare and Socrates, Newton and Descartes, are something more like heroes. They harvested a particular potential at a particular time, doing work that was integrated enough that it had to fit into a single person’s head.
This kind of work can’t happen until enough prep work has been done to make it tractable for a single human. Newton benefited from Ptolemy, Copernicus, Kepler, and Galileo giving him nice partial abstractions to integrate (as well as people coming up with precursors to the Calculus). He also benefited from the analytic geometry and algebra paradigm popularized by people like Descartes and Viete. The reason he’s impressive is that he harvested a unified mathematical theory of celestial and earthly mechanics well before most smart people could have - it just barely fit into the head of a genius particularly well-suited to the work, and even so he did have competition in Leibniz.
At best, ...
Interesting. My main thought reading the OP was simply that coordination is hard at scale, and this applies to intellectual progress too. You had an orders-of-magnitude increase in the number of people but no change in productivity? Well, did you build better infrastructure and institutions to accommodate that, or has your infrastructure for coordinating scientists largely stayed the same since the scientific revolution? In general, scaling early is terrible, and will not be a source of value but run counter to it (and will result in massive goodharting).
My cached belief from the last time I thought about this is that progress is generally seen in small teams. This is sort of happening naturally as people (in the rationalsphere) tend to cluster into organizations, which are fairly siloed and/or share their research informally.
This leaves you with the state of "it's hard for someone not in an org to get up to speed on what that org actually thinks", and my best guess is to build tools that are genuinely useful for smallish teams and networks to use semi-privately, but which increase the affordance for them to share things with the public (or, larger networks).
But still? A hundred Shakespeares?
I'd wager there are thousands of Shakespeare-equivalents around today. The issue is that Shakespeare was not only talented, he was successful - wildly popular, and able to live off his writing. He was a superstar of theatre. And we can only have a limited number of superstars, no matter how large the population grows. So if we took only his first few plays (before he got the fame feedback loop and money), and gave them to someone who had, somehow, never heard of Shakespeare, I'd wager they would find many other authors at least as good.
This is a mild point in favour of explanation 1, but it's not that the number of devoted researchers is limited, it's that the slots at the top of the research ladder are limited. In this view, any very talented individual who was also a superstar will produce a huge amount of research. The number of very talented individuals has gone up, but the number of superstar slots has not.
I still endorse most of this post, but https://docs.google.com/document/d/1cEBsj18Y4NnVx5Qdu43cKEHMaVBODTTyfHBa8GIRSec/edit has clarified many of these issues for me and helped quantify the ways that science is, indeed, slowing down.
Curated.
While I had already been thinking somewhat about the issues raised here, this post caused me to seriously grapple with how weird the state of affairs is, and to think seriously about why it might be, and (most importantly from my perspective), what LessWrong would actually have to do differently if our goal is to actually improve scientific coordination.
This seems to omit a critical and expected limitation as a process scales up in the number of people involved - communication and coordination overhead.
If there is low-hanging fruit, but everyone is reaching for it simultaneously, then doubling the number of researchers won't increase the progress more than very marginally. (That people have slightly different capabilities implies the expected time to success will be the minimum across different people.) But even that will be overwhelmed by the asymptotic costs for everyone to find out that the low-hangin...
Popper points out that successful hypotheses just need to be testable, they don't need to come from anywhere in particular. Scientists used to consistently be polymaths educated in philosophy and the classics. A lot of scientific hypotheses borrowed from reasoning cultivated in that context. Maybe it's that context that's been milked for all it's worth. Or maybe it's that more and more scientists are naive empiricists/inductionists and don't believe in the primacy of imagination anymore, and thus discount entirely other modes of thinking that might lead to the introduction of new testable hypotheses. There are a lot of possibilities besides the ones expounded on in OP.
Assuming the trendline cannot continue seems like the Gambler's Fallacy. Saying we can resume the efficiency of the 1930's research establishment seems like a kind of institution-level Fundamental Attribution Error.
I find the low-hanging-fruit explanation the most intuitive because I assume everything has a fundamental limit and gets harder as we approach that limit as a matter of natural law.
I'm tempted to go one step further and try to look at the value added by each additional discovery; I suspect economic intuitions would be helpful bot...
I've long thought the low-hanging fruit phenomenon applies to music. You can see it at work in the history of classical music. Starting with melodies (e.g. folk songs), breakthroughs particularly in harmony generated Renaissance music, Baroque, then Classical (meaning specifically Mozart etc.), then Romantic, then a modern cult of novelty produced a flurry of new styles from the turn of the 20th century (Impressionism onwards).
But by say 1980 it's like everything had been tried, a lot of 20th century experimentation (viz. atonal) was a dead end a...
The concepts in this post are, quite possibly, the core question I've been thinking about for the past couple years. I didn't just get them from this post (a lot of them come up naturally in LW team discussions), and I can't quite remember how counterfactual this post was in my thinking.
But it's definitely one of the clearer writeups about a core question to people who're thinking about intellectual progress, and how to improve.
The question is murky – it's not obvious to me whether science is actually slowing down, and there are multiple plausi
...Big part of this follows from the
Law of logarithmic returns:
In areas of endeavour with many disparate problems, the returns in that area will tend to vary logarithmically with the resources invested (over a reasonable range).
which itself can be derived from a very natural prior about the distribution of problem difficulties, so, yes, it should be the null hypothesis.
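As a minimal sketch of that derivation (my own illustration, with all numbers assumed rather than taken from the comment): if problem "costs" are spread log-uniformly over many orders of magnitude and the cheapest problems get solved first, the number of problems solved grows roughly logarithmically in the resources spent.

```python
import numpy as np

# Assumed setup: 100,000 problems with costs log-uniform between 1 and 1e9,
# always tackled cheapest-first.
rng = np.random.default_rng(0)
costs = np.sort(10 ** rng.uniform(0, 9, size=100_000))   # problem costs, sorted ascending
cumulative_cost = np.cumsum(costs)                        # resources needed to solve the first k problems

for budget in [1e4, 1e5, 1e6, 1e7, 1e8, 1e9]:
    solved = np.searchsorted(cumulative_cost, budget)     # how many problems the budget covers
    print(f"budget {budget:.0e}: problems solved = {solved}")
# Past the smallest budgets, each 10x increase in resources adds a roughly
# constant number of solved problems, i.e. returns ~ log(resources).
```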
Why did the Gods Of Straight Lines fail us in genome sequencing the last 6 years? What did the involved scientists do to lose their fortune?
Theranos, as I understand it, was promising blood testing of all sorts of biomarkers like blood glucose, and nothing to do with DNA. DNA sequencing is different from measuring concentration - at least in theory, you only need a single strand of DNA and you can then amplify that up arbitrary amounts (eg in PGD/embryo selection, you just suck off a cell or two from the young embryo and that's enough to work with). If you were trying to measure the nanograms of DNA per microliter, that's a bit different.
I don't know anything about RNA sequencing, since it's not relevant to anything I follow like GWASes.
I'm also partial to the low-hanging-fruit explanation. Unfortunately, it seems to me we can really only examine progress in already established fields. It's much harder to tell if there is much left to discover outside of established fields - the opportunities to make big discoveries that establish whole new fields of study. This is where the undiscovered, low-hanging fruit would be, I think.
constant growth rates in response to exponentially increasing inputs is the null hypothesis
Yeah, that would be big if true. How sure are you that it's exponential and not something else like quadratic? All your examples are of the form "inputs grow faster than returns", nothing saying it's exponential.
This post posed the question very clearly, and laid out a bunch of interesting possible hypotheses to explain the data. I think it's an important question for humanity and also comes up regularly in my thinking about how to help people do research on questions like AI alignment and human rationality.
Maybe it's not a law of straight lines, it's a law of exponentially diminishing returns, and maybe it applies to any scientific endeavour. What we are doing in science is developing mathematical representations of reality. The reality doesn't change, but our representations of it become ever more accurate, in an asymptotic fashion. What about Physics? In 2500 years we go from naive folk physics to Democritus, to Ptolemy, Galileo and Copernicus, to Newton, then Clerk Maxwell, Einstein, Schrodinger, Feynman and then the Standard Model, at every sta...
If you assume that the human "soul" mass cannot be increased over time, does this problem make more sense? Population increase just causes an increase in the proportion of NPC's, while discoveries require something transcendental.
[This post was up a few weeks ago before getting taken down for complicated reasons. They have been sorted out and I’m trying again.]
Is scientific progress slowing down? I recently got a chance to attend a conference on this topic, centered around a paper by Bloom, Jones, Reenen & Webb (2018).
BJRW identify areas where technological progress is easy to measure – for example, the number of transistors on a chip. They measure the rate of progress over the past century or so, and the number of researchers in the field over the same period. For example, here’s the transistor data:
This is the standard presentation of Moore’s Law – the number of transistors you can fit on a chip doubles about every two years (eg grows by 35% per year). This is usually presented as an amazing example of modern science getting things right, and no wonder – it means you can go from a few thousand transistors per chip in 1971 to many million today, with the corresponding increase in computing power.
But BJRW have a pessimistic take. There are eighteen times more people involved in transistor-related research today than in 1971. So if in 1971 it took 1000 scientists to increase transistor density 35% per year, today it takes 18,000 scientists to do the same task. So apparently the average transistor scientist is eighteen times less productive today than fifty years ago. That should be surprising and scary.
But isn’t it unfair to compare percent increase in transistors with absolute increase in transistor scientists? That is, a graph comparing absolute number of transistors per chip vs. absolute number of transistor scientists would show two similar exponential trends. Or a graph comparing percent change in transistors per year vs. percent change in number of transistor scientists per year would show two similar linear trends. Either way, there would be no problem and productivity would appear constant since 1971. Isn’t that a better way to do things?
A lot of people asked paper author Michael Webb this at the conference, and his answer was no. He thinks that intuitively, each “discovery” should decrease transistor size by a certain amount. For example, if you discover a new material that allows transistors to be 5% smaller along one dimension, then you can fit 5% more transistors on your chip whether there were a hundred there before or a million. Since the relevant factor is discoveries per researcher, and each discovery is represented as a percent change in transistor size, it makes sense to compare percent change in transistor size with absolute number of researchers.
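To make that bookkeeping concrete, here is a toy version of the comparison in Python. The researcher counts are made up for illustration; the real BJRW series are constructed much more carefully.

```python
# Constant percentage progress divided by a growing headcount (hypothetical numbers).
years       = [1971, 1990, 2010, 2020]
researchers = [1_000, 4_000, 12_000, 18_000]   # assumed effective researcher counts
growth_rate = 0.35                             # ~35% transistor-density gain per year

for year, n in zip(years, researchers):
    per_researcher = growth_rate / n           # share of the yearly % gain attributable to each researcher
    print(f"{year}: {n:>6} researchers -> {per_researcher:.2e} of the yearly gain each")
# The same yearly percentage gain spread over exponentially more researchers
# means exponentially shrinking measured productivity per researcher.
```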
Anyway, most other measurable fields show the same pattern of constant progress in the face of exponentially increasing number of researchers. Here’s BJRW’s data on crop yield:
The solid and dashed lines are two different measures of crop-related research. Even though crop-related research increases by a factor of 6-24x (depending on how it’s measured), crop yields grow at a relatively constant 1% rate for soybeans, and at an apparently declining 3%-ish rate for corn.
BJRW go on to prove the same is true for whatever other scientific fields they care to measure. Measuring scientific progress is inherently difficult, but their finding of constant or log-constant progress in most areas accords with Nintil’s overview of the same topic, which gives us graphs like
…and dozens more like it. And even when we use data that are easy to measure and hard to fake, like number of chemical elements discovered, we get the same linearity:
Meanwhile, the increase in researchers is obvious. Not only is the population increasing (by a factor of about 2.5x in the US since 1930), but the percent of people with college degrees has quintupled over the same period. The exact numbers differ from field to field, but orders of magnitude increases are the norm. For example, the number of people publishing astronomy papers seems to have dectupled over the past fifty years or so.
BJRW put all of this together into total number of researchers vs. total factor productivity of the economy, and find…
…about the same as with transistors, soybeans, and everything else. So if you take their methodology seriously, over the past ninety years, each researcher has become about 25x less productive in making discoveries that translate into economic growth.
Participants at the conference had some explanations for this, of which the ones I remember best are:
1. Only the best researchers in a field actually make progress, and the best researchers are already in a field, and probably couldn’t be kept out of the field with barbed wire and attack dogs. If you expand a field, you will get a bunch of merely competent careerists who treat it as a 9-to-5 job. A field of 5 truly inspired geniuses and 5 competent careerists will make X progress. A field of 5 truly inspired geniuses and 500,000 competent careerists will make the same X progress. Adding further competent careerists is useless for doing anything except making graphs look more exponential, and we should stop doing it. See also Price’s Law Of Scientific Contributions.
2. Certain features of the modern academic system, like underpaid PhDs, interminably long postdocs, endless grant-writing drudgery, and clueless funders have lowered productivity. The 1930s academic system was indeed 25x more effective at getting researchers to actually do good research.
3. All the low-hanging fruit has already been picked. For example, element 117 was discovered by an international collaboration who got an unstable isotope of berkelium from the single accelerator in Tennessee capable of synthesizing it, shipped it to a nuclear reactor in Russia where it was attached to a titanium film, brought it to a particle accelerator in a different Russian city where it was bombarded with a custom-made exotic isotope of calcium, sent the resulting data to a global team of theorists, and eventually found a signature indicating that element 117 had existed for a few milliseconds. Meanwhile, the first modern element discovery, that of phosphorus in the 1670s, came from a guy looking at his own piss. We should not be surprised that discovering element 117 needed more people than discovering phosphorus.
Needless to say, my sympathies lean towards explanation number 3. But I worry even this isn’t dismissive enough. My real objection is that constant progress in science in response to exponential increases in inputs ought to be our null hypothesis, and that it’s almost inconceivable that it could ever be otherwise.
Consider a case in which we extend these graphs back to the beginning of a field. For example, psychology started with Wilhelm Wundt and a few of his friends playing around with stimulus perception. Let’s say there were ten of them working for one generation, and they discovered ten revolutionary insights worthy of their own page in Intro Psychology textbooks. Okay. But now there are about a hundred thousand experimental psychologists. Should we expect them to discover a hundred thousand revolutionary insights per generation?
Or: the economic growth rate in 1930 was 2% or so. If it scaled with number of researchers, it ought to be about 50% per year today with our 25x increase in researcher number. That kind of growth would mean that the average person who made $30,000 a year in 2000 should make $50 million a year in 2018.
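For what it’s worth, the compounding arithmetic behind that figure checks out (a purely illustrative sanity check; the 50% rate is the hypothetical one above, not a forecast):

```python
# Compound the assumed 50%/yr growth over 2000-2018 starting from $30,000.
base_income = 30_000                 # dollars per year in 2000
hypothetical_growth = 0.50           # implied rate if growth scaled with researcher count
years = 2018 - 2000
print(f"${base_income * (1 + hypothetical_growth) ** years:,.0f}")  # ~ $44 million, i.e. "about $50 million"
```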
Or: in 1930, life expectancy at 65 was increasing by about two years per decade. But if that scaled with number of biomedicine researchers, that should have increased to ten years per decade by about 1955, which would mean everyone would have become immortal starting sometime during the Baby Boom, and we would currently be ruled by a deathless God-Emperor Eisenhower.
Or: the ancient Greek world had about 1% the population of the current Western world, so if the average Greek was only 10% as likely to be a scientist as the average modern, there were only 1/1000th as many Greek scientists as modern ones. But the Greeks made such great discoveries as the size of the Earth, the distance of the Earth to the sun, the prediction of eclipses, the heliocentric theory, Euclid’s geometry, the nervous system, the cardiovascular system, etc, and brought technology up from the Bronze Age to the Antikythera mechanism. Even adjusting for the long time scale to which “ancient Greece” refers, are we sure that we’re producing 1000x as many great discoveries as they did? If we extended BJRW’s graph all the way back to Ancient Greece, adjusting for the change in researchers as civilizations rise and fall, wouldn’t it keep the same shape as it does for this century? Isn’t the real question not “Why isn’t Dwight Eisenhower immortal god-emperor of Earth?” but “Why isn’t Marcus Aurelius immortal god-emperor of Earth?”
Or: what about human excellence in other fields? Shakespearean England had 1% of the population of the modern Anglosphere, and presumably even fewer than 1% of the artists. Yet it gave us Shakespeare. Are there a hundred Shakespeare-equivalents around today? This is a harder problem than it seems – Shakespeare has become so venerable with historical hindsight that maybe nobody would acknowledge a Shakespeare-level master today even if they existed – but still, a hundred Shakespeares? If we look at some measure of great works of art per era, we find past eras giving us far more than we would predict from their population relative to our own. This is very hard to judge, and I would hate to be the guy who has to decide whether Harry Potter is better or worse than the Aeneid. But still? A hundred Shakespeares?
Or: what about sports? Here’s marathon records for the past hundred years or so:
In 1900, there were only two local marathons (eg the Boston Marathon) in the world. Today there are over 800. Also, the world population has increased by a factor of five (more than that in the East African countries that give us literally 100% of top male marathoners). Despite that, progress in marathon records has been steady or declining. Most other Olympics sports show the same pattern.
All of these lines of evidence lead me to the same conclusion: constant growth rates in response to exponentially increasing inputs is the null hypothesis. If it wasn’t, we should be expecting 50% year-on-year GDP growth, easily-discovered-immortality, and the like. Nobody expected that before reading BJRW, so we shouldn’t be surprised when BJRW provide a data-driven model showing it isn’t happening. I realize this in itself isn’t an explanation; it doesn’t tell us why researchers can’t maintain a constant level of output as measured in discoveries. It sounds a little like “God wouldn’t design the universe that way”, which is a kind of suspicious line of argument, especially for atheists. But it at least shifts us from a lens where we view the problem as “What three tweaks should we make to the graduate education system to fix this problem right now?” to one where we view it as “Why isn’t Marcus Aurelius immortal?”
And through such a lens, only the “low-hanging fruits” explanation makes sense. Explanation 1 – that progress depends only on a few geniuses – isn’t enough. After all, the Greece-today difference is partly based on population growth, and population growth should have produced proportionately more geniuses. Explanation 2 – that PhD programs have gotten worse – isn’t enough. There would have to be a worldwide monotonic decline in every field (including sports and art) from Athens to the present day. Only Explanation 3 holds water.
I brought this up at the conference, and somebody reasonably objected – doesn’t that mean science will stagnate soon? After all, we can’t keep feeding it an exponentially increasing number of researchers forever. If nothing else stops us, then at some point, 100% (or the highest plausible amount) of the human population will be researchers, we can only increase as fast as population growth, and then the scientific enterprise collapses.
I answered that the Gods Of Straight Lines are more powerful than the Gods Of The Copybook Headings, so if you try to use common sense on this problem you will fail.
Imagine being a futurist in 1970 presented with Moore’s Law. You scoff: “If this were to continue only 20 more years, it would mean a million transistors on a single chip! You would be able to fit an entire supercomputer in a shoebox!” But common sense was wrong and the trendline was right.
“If this were to continue only 40 more years, it would mean ten billion transistors per chip! You would need more transistors on a single chip than there are humans in the world! You could have computers more powerful than any today, that are too small to even see with the naked eye! You would have transistors with like a double-digit number of atoms!” But common sense was wrong and the trendline was right.
Or imagine being a futurist in ancient Greece presented with world GDP doubling time. Take the trend seriously, and in two thousand years, the future would be fifty thousand times richer. Every man would live better than the Shah of Persia! There would have to be so many people in the world you would need to tile entire countries with cityscape, or build structures higher than the hills just to house all of them. Just to sustain itself, the world would need transportation networks orders of magnitude faster than the fastest horse. But common sense was wrong and the trendline was right.
I’m not saying that no trendline has ever changed. Moore’s Law seems to be legitimately slowing down these days. The Dark Ages shifted every macrohistorical indicator for the worse, and the Industrial Revolution shifted every macrohistorical indicator for the better. Any of these sorts of things could happen again, easily. I’m just saying that “Oh, that exponential trend can’t possibly continue” has a really bad track record. I do not understand the Gods Of Straight Lines, and honestly they creep me out. But I would not want to bet against them.
Grace et al.’s survey of AI researchers shows they predict that AIs will start being able to do science in about thirty years, and will exceed the productivity of human researchers in every field shortly afterwards. Suddenly “there aren’t enough humans in the entire world to do the amount of research necessary to continue this trend line” stops sounding so compelling.
At the end of the conference, the moderator asked how many people thought that it was possible for a concerted effort by ourselves and our institutions to “fix” the “problem” indicated by BJRW’s trends. Almost the entire room raised their hands. Everyone there was smarter and more prestigious than I was (also richer, and in many cases way more attractive), but with all due respect I worry they are insane. This is kind of how I imagine their worldview looking:
I realize I’m being fatalistic here. Doesn’t my position imply that the scientists at Intel should give up and let the Gods Of Straight Lines do the work? Or at least that the head of the National Academy of Sciences should do something like that? That Francis Bacon was wasting his time by inventing the scientific method, and Fred Terman was wasting his time by organizing Silicon Valley? Or perhaps that the Gods Of Straight Lines were acting through Bacon and Terman, and they had no choice in their actions? How do we know that the Gods aren’t acting through our conference? Or that our studying these things isn’t the only thing that keeps the straight lines going?
I don’t know. I can think of some interesting models – one made up of a thousand random coin flips a year has some nice qualities – but I don’t know.
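Here is one guess (mine, not necessarily the model meant above) at what such a coin-flip model could look like: if each year’s progress is the sum of a thousand independent coin flips, the year-to-year variation is only a few percent of the mean, so the cumulative trend comes out looking like an almost perfectly straight line even though every individual contribution is random.

```python
import numpy as np

# Assumed toy model: yearly progress = sum of 1000 independent fair coin flips, for 100 years.
rng = np.random.default_rng(42)
yearly_progress = rng.binomial(n=1, p=0.5, size=(100, 1000)).sum(axis=1)  # one total per year
cumulative_progress = yearly_progress.cumsum()

relative_spread = yearly_progress.std() / yearly_progress.mean()
print(f"year-to-year relative spread: {relative_spread:.1%}")  # ~3%, so the cumulative line looks very straight
```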
I do know you should be careful what you wish for. If you “solved” this “problem” in classical Athens, Attila the Hun would have had nukes. Remember Yudkowsky’s Law of Mad Science: “Every eighteen months, the minimum IQ necessary to destroy the world drops by one point.” Do you really want to make that number ten points? A hundred? I am kind of okay with the function mapping number of researchers to output that we have right now, thank you very much.
The conference was organized by Patrick Collison and Michael Nielsen; they have written up some of their thoughts here.