Our culture is dominated by an ideology of progress, progressivism, which conflates purportedly inevitable progressions along trendlines (especially ones that amount to an increase in taxable activity, or increased economic mobilization) with the solution of problems and the improvement of people's lives. Because it is an ideology, progressives worship progress rather than holding honest propositional views about it amenable to evidence; enough evidence might cause a personal crisis, but they lack the virtues of lightness and specificity about it.
In practice, progressivism is a job creation scheme for elites. The jobs created have to be constituted to manage problems rather than solve them; solving problems destroys jobs. Oddly, if you wanted to slow the rate at which we use AI to solve our proximate problems, or to slow the growth of AI capacities, the best option available short of revolution or radical dissidence en masse might be to make "AI Progress" an important positive policy goal and try to persuade elites that it's important to get those metrics up.
Institutional recommendations are shaped by implicit constraints like "don't reduce headcount" and "don't invalidate your department's premise," internalized as limits on what's thinkable: https://benjaminrosshoffman.com/parkinsons-law-ideology-statistics/#diagnosis
Calling for X produces jobs doing the opposite: https://benjaminrosshoffman.com/openai-makes-humanity-less-safe/
Neoclassical (progressive) economics tries to quantify "total value," and then maximize it, which in practice means maximizing transaction volume: https://substack.com/@benhoffman700141/note/c-237461608
A much deeper dive into the same thing: https://benjaminrosshoffman.com/the-domestic-product/
To what extent do you think this manifests in the progress studies movement, in the sense of the cluster(s) coordinated around https://rootsofprogress.org/ or https://worksinprogress.co/ ?
Those seem like attempts to extend the useful life of the current regime by trying to organize around doing more of the things that originally won it legitimacy, rather than to productively criticize or supersede it. Sometimes you should patch up an old thing rather than buy a new one, sometimes this is false economy because the cost of upkeep is higher than the amortized cost of replacement, and sometimes you’re driving around in an explosive death trap or breathing mold every day making you sick when you should really just get a safe new car or house built from scratch.
I would put Tyler Cowen in the same category: accepting things like GDP as the best politically available target to organize around, while trying to persuade people to do good rather than bad things to raise GDP.
I appreciate the post.
Regarding your questions, this might be a more sophisticated defense of technological determinism: https://80000hours.org/podcast/episodes/allan-dafoe-unstoppable-technology-human-agency-agi/
I don't remember much of what's there, but I remember disagreeing with various apparently important points (I think I had notes on this but lost them).
(Aside from this coming from someone working at an AGI lab, which raises my eyebrows on priors.)
I agree that we can stop or delay the development of some technologies. However, AI does not appear to be one of them, in my view, for several reasons:
(i). The x-risk has little to no political salience; there is no such thing as a "scientific consensus"[1] yet; and you can't yet point to a concrete instance of the problem that will convince the uninitiated.[2]
(ii). Every policy that stops superintelligence from being developed will likely come with massive, concrete economic costs.
(iii). Fast global coordination is needed.
No example in your list has all of these, for instance:
[1] You can point to the CAIS statement or similar things, but this is not in the same category as e.g. the consensus around climate change.
[2] For example, you don't need to understand nuclear physics to understand "big fireball bad", but you need more theory to be scared by METR's reward-hacking results.
“We can’t prevent progress” say the people for some reason enthusiastically advocating that we just risk dying by AI rather than even consider contravening this law.
I have several problems with this, beyond those unsubtly hinted at above.
First, it seems to be willfully conflating “increasing technology understanding and/or tools” with “things getting better”. The word ‘progress’ generally means ‘things getting better’, but here in a debate about whether it is good or not for society to acquire and spread some specific information and tools, we are being asked to label all increases in information and tools as ‘progress’, which is quite the presumption of a particular conclusion.
(Yes the sub-debate here is more narrowly about whether averting technology is feasible not whether it is good, but the bid here to implicitly grant that the infeasible thing is also reprehensible and backward to want (i.e. anti-”progress”) seems unfriendly.)
If we separate the conflated concepts—i.e. distinguish ‘increasing technological information and tools’ from ‘things getting better’—the statement doesn’t seem remotely true for either of them.
Take "things getting better" first: preventing things from getting better is a capability humans have had at least as far back as the Sea Peoples of Bronze Age collapse fame. (If indeed we go ahead and make machines that do in fact destroy humanity, we will also have prevented "progress" in the normal sense.)
But now let’s consider preventing “increasing technology information and tools”, which seems like the more relevant contention. I’m a bit unsure what the position is here, honestly—do people think for instance that the FDA doesn’t slow down the pharmaceutical industry? Do they think that the pharmaceutical industry is too small and insulated from financial incentives for its slowing down to be evidence about AI?
Perhaps we just don’t usually think of the pharmaceutical industry as ‘slowed down’ because we are used to that as the way it operates? Or perhaps this doesn’t count because the point isn’t to slow it down, it’s just to have it proceed at the rate it can do so safely for people, with the slowness as an unfortunate side-effect. In which case, fine—that would also do for AI!
In case this example is for some reason wanting, here are more examples of technologies slowed down to something more like a halt, from a previous post (more detail here also):
Aside from the seeming disconnect with empirical evidence, I’m confused by the theoretical model here. Do people think the rate of technological development can’t be affected by funding, or by the costs of inputs, or by regulation? Or do they think these factors would affect technology, but that this will never in practice happen because the relevant decisionmakers will never have the will?
Do they also think technology cannot be sped up? If so, how is that different?
Do they just mean you can’t fully grind it to a halt, preventing all progress? That may be so, but in that case, slowing it down a lot would generally suffice!