An independent researcher, blogger, and philosopher writing about intelligence and agency (esp. Active Inference), alignment, ethics, the interaction of the AI transition with sociotechnical risks (epistemics, economics, human psychology), collective mind architecture, and research strategy and methodology.
Twitter: https://twitter.com/leventov. E-mail: leventov.ru@gmail.com (the preferred mode of communication). I'm open to collaborations and work.
Presentations at meetups, workshops and conferences, some recorded videos.
I'm a founding member of the Gaia Consortium, on a mission to create a global, decentralised system for collective sense-making and decision-making, i.e., civilisational intelligence. Drop me a line if you want to learn more about it and/or join the consortium.
You can boost my sense of accountability and signal that my work is valued by becoming a paid subscriber of my Substack (though I don't post anything paywalled; in fact, that blog just syndicates my LessWrong writing).
For Russian speakers: the Russian-language AI safety network, Telegram group.
https://openmined.org/ develops Syft, a framework for "private computation" in secure enclaves. It potentially reduces the barriers to data integration, both within particularly bureaucratic orgs and across orgs.
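For a rough sense of how this lowers integration barriers, here is a minimal sketch of Syft's remote-data-science flow, modelled on the PySyft ~0.8 tutorials (treat the exact function and parameter names as assumptions; the API has changed between versions): the data owner publishes a dataset with a public mock twin, the analyst develops against the mock, and the submitted code only ever runs on the owner's infrastructure.

```python
# A minimal sketch of Syft's remote-data-science flow (PySyft ~0.8-style API;
# names are assumptions based on OpenMined tutorials, not a verbatim recipe).
import pandas as pd
import syft as sy

# Data owner: launch a local dev-mode node and log in as the default admin.
node = sy.orchestra.launch(name="demo-datasite", dev_mode=True, reset=True)
owner = node.login(email="info@openmined.org", password="changethis")

real = pd.DataFrame({"salary": [52_000, 61_000, 58_000]})   # private data
mock = pd.DataFrame({"salary": [50_000, 60_000, 55_000]})   # fake, same schema

owner.upload_dataset(
    sy.Dataset(
        name="salaries",
        asset_list=[sy.Asset(name="salaries-2024", data=real, mock=mock)],
    )
)
owner.register(  # create an account for the outside analyst
    name="Jane Doe", email="jane@example.org",
    password="changethis2", password_verify="changethis2",
)

# Data scientist: develop against the mock twin, then request remote execution.
ds = node.login(email="jane@example.org", password="changethis2")
asset = ds.datasets[0].assets[0]

@sy.syft_function_single_use(data=asset)
def mean_salary(data):
    return data["salary"].mean()

ds.code.request_code_execution(mean_salary)

# Data owner reviews and approves; the function then runs server-side
# against the real data, and only the result leaves the enclave.
owner.requests[0].approve()
result = ds.code.mean_salary(data=asset)
print(result.get())
```

The property doing the work for the point above: the analyst never needs a copy of the raw data, so integration across (or within) orgs doesn't require legally and bureaucratically fraught data transfers, only approval of specific computations.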
Thanks for the post, I agree with it!
I just wrote a post presenting the differential knowledge interconnection thesis, where I argue that it is on net beneficial to develop AI capabilities such as
In one section of that post, I discuss whether knowledge interconnection exacerbates or abates the risk of industrial dehumanization on net. It's a challenging question, but I reach the tentative conclusion that AI capabilities which favor obtaining and leveraging "interconnected" rather than "isolated" knowledge are on net risk-reducing. This is because the "human economy" is more complex than a hypothetical "pure machine-industrial economy", and "knowledge interconnection" capabilities support that greater complexity.
Would you agree or disagree with this?
I think the model of a commercial R&D lab would often suit alignment work better than a "classical" startup company. Conjecture and AE Studio come to mind. Answer.AI, founded by Jeremy Howard (of Fast.ai and Kaggle) and Eric Ries (Lean Startup), elaborates on this business and organisational model here: https://www.answer.ai/posts/2023-12-12-launch.html.
But I should add, I agree that points 1-3 pose challenging political and coordination problems. Nobody assumes it will be easy, including Acemoglu. It's just another in the row of hard political challenges posed by AI, along with the questions of "aligned with whom?", accounting for people's voices despite dysfunctional governments and political elites in general, etc.
Separately, I at least spontaneously wonder: how would one even go about differentiating the 'bad automation' to be discouraged from the legit automation without which no modern economy could competitively run anyway? For a random example, if Excel didn't yet exist (or, for its next update..), we'd have to say: sorry, we cannot build such software, as any given spreadsheet risks removing thousands of hours of work...?! Or at least: please, Excel, ask the human to manually confirm each cell's calculation...?? So I don't know how we'd enforce non-automation in practice. Just 'it uses a large LLM' feels like a weirdly arbitrary condition - though, ok, I could see how, for lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. & 2. mean this is unrealistic or unsuccessful anyway.
Clearly, specific rule-based regulation is a dumb strategy. Acemoglu's suggestions: tax incentives to keep employment, and "labour voice" to let people decide, in the context of a specific company and job, how they want to work with AI. I like this self-governing strategy. Basically, the idea is that people will want to keep influencing things and will resist "job bullshittification" done to them, if they have the political power ("labour voice"). But they should also have an alternative choice of technology and work arrangement/method that doesn't turn their work into rubber-stamping bullshit yet still alleviates the burden ("machine usefulness"). Because if their only choice is between a rubber-stamping bullshit job and a burdensome job without AI, they may choose the rubber-stamping.
If you really were able to coordinate globally to enable 1. or 2. - extremely unlikely in the current environment, given the huge incentives for individual countries to remain weak in enforcement - then it seems you might as well try to directly impose the economically first-best solution w.r.t. robots vs. labor: high global tax rates and redistribution.
If anything, this problem seems more pernicious w.r.t. climate change mitigation and environmental damage: it's much more distributed. Not only are the US and China big emitters, but so are Russia and India; there is big leverage in Brazil, Congo, and Indonesia with their forests; overfishing and ocean pollution are everywhere; etc.
With AI, it's basically a question of regulating US and UK companies: the EU is always eager to over-regulate relative to the US, and China is already successfully and closely regulating its AI for a variety of reasons (which Acemoglu points out). The big problem of the Chinese economy is weak internal demand, and automating jobs, thereby increasing inequality and decreasing local purchasing power, is the last thing China wants.
What levels of automation AI provides, and at what rate, is exactly what he suggests influencing directly (specifically, slowing down) through economic and political measures. So it's not fair to list that as an assumption.
It would depend on the exact details, but if a machine can do something as well as or better than a human, then the machine should do it.
It's a question of how to design work. A machine can cultivate a monoculture mega-farm better than a human can, but not (yet, at least) a small permaculture garden. Is a monoculture mega-farm more "effective"? Maybe, if we take the pre-AI opportunity cost of human labour; maybe not with the post-AI opportunity cost of human labour. And this is before factoring in the "economic value" of the better psychological and physical health of people who work on small farms, compared with people who do nothing and eat, on their couches, processed food made from crops grown on monoculture mega-farms.
As I understand it, Acemoglu roughly suggests looking for ways to apply this logic in other domains of the economy, including the knowledge economy. Yes, it's not guaranteed that such arrangements will stay economical for a long time (but it's also not beyond my imagination, especially if we factor in the economic value of physical and psychological health), but it may set the economy and the society on a different trajectory, with higher chances of eventualities that we would consider "not doom".
What does "foster labour voice" even mean?
Unions 2.0, or something like holacracy?
Especially in companies where everything is automated.
Not yet. Clearly, what he suggests could only remain effective for a limited time.
You can give more power to current employees of current companies, but soon there will be new startups with zero employees (or where, for tax reasons, owners will formally employ their friends or family members).
Not that soon at all, if we are speaking about the real economy. In the IT sector, I suspect that the Big Techs will win big in the AI race because only they have deep enough pockets (you can already see Inflection AI quasi-acquired by MS, Stability essentially bust, etc.). And the Big Techs still have huge workforces, and it won't be just Nadella or just Pichai anytime soon. Many other knowledge sectors (banking, law) are regulated and also won't shed employees that fast.
"Human-complementary AI technologies" again sounds like a bullshit job, only mostly done by a machine, where a human is involved somewhere in the loop, but the machine could still do their part better, too.
In my gardening example, a human may wear AI goggles that tell them which plant or animal species they are seeing, or what disease a plant has.
Tax on media platforms -- solves a completely different problem. Yes, it is important to care about public mental health. But that is separate from the problem of technological unemployment. (You could have technological unemployment even in the universe where all social media are banned.)
The tax on media platforms is just a concrete example of how "reforming business models" could be done in practice, maybe not the best one (and it's not my example). To carry on with my gardening example, I'll suggest a "tax on fertiliser": make it so huge that mega-farms (which require a lot of fertiliser) become less economical than permaculture gardens. Without such a push, permaculture gardens won't magically materialise. Acemoglu underscores this point multiple times: switching to a different socioeconomic trajectory is not a matter of pure technological invention and its application in a laissez-faire market. Inventing AI goggles for gardening (or any other technology that makes permaculture gardening arbitrarily convenient) won't make the economy switch away from monoculture mega-farms without an extra push.
Perhaps Acemoglu also has in mind something about the attention/creator economy and the automation that may happen to it (AI influencers can replace human influencers) when he talks about a "digital ad tax", but I don't see it.
John Vervaeke calls attunement "relevance realization".
Undermind.ai, I think, is much more useful for searching for concepts and ideas in papers than for extracting tabular info a la Elicit. Nominally, Elicit can do the former too, but it's quite bad at it in my experience.