An independent researcher, blogger, and philosopher writing about intelligence and agency (esp. Active Inference), alignment, ethics, the interaction of the AI transition with sociotechnical risks (epistemics, economics, human psychology), collective mind architecture, and research strategy and methodology.
Twitter: https://twitter.com/leventov. E-mail: leventov.ru@gmail.com (the preferred mode of communication). I'm open to collaborations and work.
Presentations at meetups, workshops and conferences, some recorded videos.
I'm a founding member of the Gaia Consortium, on a mission to create a global, decentralised system for collective sense-making and decision-making, i.e., civilisational intelligence. Drop me a line if you want to learn more about it and/or join the consortium.
You can help boost my sense of accountability and give me a feeling that my work is valued by becoming a paid subscriber of my Substack (though I don't post anything paywalled; in fact, on this blog, I just syndicate my LessWrong writing).
For Russian speakers: the Russian-language AI safety network, Telegram group.
Even for those not directly employed by AI labs, there are similar dynamics in the broader AI safety community. Careers, research funding, and professional networks are increasingly built around certain ways of thinking about AI risk. Gradual disempowerment doesn't fit neatly into these frameworks. It suggests we need different kinds of expertise and different approaches than what many have invested years developing. Academic incentives also currently do not point here: there are likely fewer than ten economists taking this seriously, and the trans-disciplinary nature of the problem makes it a hard sell as a grant proposal.
I agree this is unfortunate, but it also seems irrelevant? Academic economics (as well as sociology, political science, anthropology, etc.) is almost completely irrelevant to shaping major governments' AI policies. "Societal preparedness" and "governance" teams at major AI labs and BigTech giants seem to have approximately no influence on the concrete decisions and strategies of their employers.
The last economist who significantly influenced the economic and policy trajectory was perhaps Milton Friedman?
If not research, what can affect the economic and policy trajectory at all in a deliberate way (disqualifying the unsteerable memetic and cultural drift forces), apart from powerful leaders themselves (Xi, Trump, Putin, Musk, etc.)? Perhaps the way we explore the "technology tree" (see https://michaelnotebook.com/optimism/index.html)? Such as the internet, social media, blockchain, form factors of AI models, etc. I don't hold too much hope here, but this looks to me like the only plausible lever.
My quick impression is that this is a brutal and highly significant limitation of this kind of research. It's just incredibly expensive for others to read and evaluate, so it's very common for it to get ignored.
I'd predict that if you improved the arguments by 50%, it would lead to little extra uptake.
I think this is wrong. The introduction of the GD paper takes no more than 10 minutes to read and no significant cognitive effort to grasp, really. I don't think there is more than 10% potential for making it any clearer or more approachable.
https://gradual-disempowerment.ai/ is mostly about institutional progress, not narrow technical progress.
Undermind.ai, I think, is much more useful for searching for concepts and ideas in papers than for extracting tabular info à la Elicit. Nominally, Elicit can do the former too, but it is quite bad at it in my experience.
https://openmined.org/ develops Syft, a framework for "private computation" in secure enclaves. It potentially reduces the barriers to data integration both within particularly bureaucratic orgs and across orgs.
Thanks for the post, I agree with it!
I just wrote a post with the differential knowledge interconnection thesis, where I argue that it is on net beneficial to develop AI capabilities such as
I discuss whether knowledge interconnection exacerbates or abates the risk of industrial dehumanization on net in a dedicated section. It's a challenging question, but I reach the tentative conclusion that AI capabilities that favor obtaining and leveraging "interconnected" rather than "isolated" knowledge are on net risk-reducing. This is because the "human economy" is more complex than a hypothetical "pure machine-industrial economy", and "knowledge interconnection" capabilities support that greater complexity.
Would you agree or disagree with this?
I think the model of a commercial R&D lab would often suit alignment work better than a "classical" startup company. Conjecture and AE Studio come to mind. Answer.AI, founded by Jeremy Howard (of Fast.ai and Kaggle) and Eric Ries (Lean Startup), elaborates on this business and organisational model here: https://www.answer.ai/posts/2023-12-12-launch.html.
But I should add, I agree that 1-3 pose challenging political and coordination problems. Nobody assumes it will be easy, including Acemoglu. It's just another in the long row of hard political challenges posed by AI, along with the questions of "aligned with whom?", considering/accounting for people's voice past dysfunctional governments and political elites in general, etc.
Separately, I at least spontaneously wonder: how would one even want to go about differentiating the 'bad automation' to be discouraged from legit automation without which no modern economy could competitively run anyway? For a random example, say Excel didn't yet exist (or, for its next update..), we'd have to say: sorry, cannot make such software, as any given spreadsheet risks removing thousands of hours of work...?! Or at least: please, Excel, ask the human to manually confirm each cell's calculation...?? So I don't know how we'd enforce non-automation in practice. Just 'it uses a large LLM' feels like a weirdly arbitrary condition - though, ok, I could see how, due to a lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. & 2. mean this is unrealistic or unsuccessful anyway.
Clearly, specific rule-based regulation is a dumb strategy. Acemoglu's suggestions: tax incentives to keep employment, and "labour voice" to let people decide, in the context of a specific company and job, how they want to work with AI. I like this self-governing strategy. Basically, the idea is that people will want to keep influencing things and will resist "job bullshittification" done to them, if they have the political power ("labour voice"). But they should also have an alternative choice of technology and work arrangement/method that doesn't turn their work into rubber-stamping bullshit, but still alleviates the burden ("machine usefulness"). Because if their only choice is between a rubber-stamping bullshit job and a burdensome job without AI, they may choose rubber-stamping.
It seems that a lot of white collar jobs will become (or are already becoming) positional goods, akin to aristocratic titles, at least for a few years, possibly longer.
AI will do 100% of the "meat" of the job better than almost all humans, and ~equally well for every user (prompting won't matter much).
But businesses will still demand accountability for results, and that workers can claim they understand and attest to AI outputs (these claims themselves won't be tested, though, nor would it really matter in the grand scheme of things). At the same time, the productivity of these jobs will increase more than businesses can absorb, at least for a few years (and then perhaps fully automated companies will ensue). Thus, fewer white collar workers will be needed in total.
When skill doesn't really matter and demand decreases, the jobs will become highly contested, and credentials, prestige (pedigree), connections, and "soft skills" (primarily: passing interviews) will decide these contests rather than "hard skills". Of the hard skills, only the skill of understanding sophisticated AI outputs and potentially fixing remaining issues with them will really matter, but the marginal difference between workers who are good and bad at this skill will be relatively small for the company's bottom line, and testing candidates for it will be too hard.
The above straightforwardly applies to all "digital"/online/IT/analyst/manager jobs.
I don't buy takes like Steve Yegge's https://sourcegraph.com/blog/revenge-of-the-junior-developer and similar, with projections of white collar workers becoming 10x or 100x more productive than today. Backlogs are not that deep, and the marginal value to companies of churning through 99% of these backlog issues is ~0.
I also don't believe in Jevons-paradox wonders of increased demand for "digital" work, again at least for a few years (or realistically, 10+ years) until the economy goes through a deeper transformation (including geographically). In the meantime, the economy looks to be already ~saturated (or even oversaturated) with IT/digitalization, marketing, compliance, legal proceedings, analysis, educational materials, and other similar outputs of white collar work.