Roman Leventov

An independent researcher/blogger/philosopher writing about intelligence and agency (esp. Active Inference), alignment, ethics, the interaction of the AI transition with sociotechnical risks (epistemics, economics, human psychology), collective mind architecture, and research strategy and methodology.

Twitter: https://twitter.com/leventov. E-mail: leventov.ru@gmail.com (the preferred mode of communication). I'm open to collaborations and work.

Presentations at meetups, workshops and conferences, some recorded videos.

I'm a founding member of the Gaia Consortium, on a mission to create a global, decentralised system for collective sense-making and decision-making, i.e., civilisational intelligence. Drop me a line if you want to learn more about it and/or join the consortium.

You can help boost my sense of accountability and show me that my work is valued by becoming a paid subscriber of my Substack (though I don't post anything paywalled; in fact, on this blog, I just syndicate my LessWrong writing).

For Russian speakers: the Russian-language AI safety network, Telegram group

Sequences

A multi-disciplinary view on AI safety


Comments


Even for those not directly employed by AI labs, there are similar dynamics in the broader AI safety community. Careers, research funding, and professional networks are increasingly built around certain ways of thinking about AI risk. Gradual disempowerment doesn't fit neatly into these frameworks. It suggests we need different kinds of expertise and different approaches than what many have invested years developing. Academic incentives also currently do not point here - there are likely fewer than ten economists taking this seriously, and the trans-disciplinary nature of the problem makes it a hard sell as a grant proposal.

I agree this is unfortunate, but it also seems irrelevant? Academic economics (as well as sociology, political science, anthropology, etc.) is approximately irrelevant to shaping major governments' AI policies. "Societal preparedness" and "governance" teams at major AI labs and BigTech giants seem to have approximately no influence on the concrete decisions and strategies of their employers.

The last economist who significantly influenced the economic and policy trajectory was perhaps Milton Friedman?

If not research, what can affect the economic and policy trajectory at all in a deliberate way (disqualifying the unsteerable memetic and cultural drift forces), apart from powerful leaders themselves (Xi, Trump, Putin, Musk, etc.)? Perhaps the way we explore the "technology tree" (see https://michaelnotebook.com/optimism/index.html): the internet, social media, blockchain, the form factors of AI models, etc. I don't hold too much hope here, but this looks to me like the only plausible lever.

My quick impression is that this is a brutal and highly significant limitation of this kind of research. It's just incredibly expensive for others to read and evaluate, so it's very common for it to get ignored.

I'd predict that if you improved the arguments by 50%, it would lead to little extra uptake.

I think this is wrong. The introduction of the GD paper takes no more than 10 minutes to read and no significant cognitive effort to grasp, really. I don't think there is more than 10% potential for making it any clearer or more approachable.

I think Undermind.ai is much more useful for searching for concepts and ideas in papers than for extracting tabular info à la Elicit. Nominally, Elicit can do the former too, but it is quite bad at it in my experience.

https://openmined.org/ develops Syft, a framework for "private computation" in secure enclaves. It potentially reduces the barriers to data integration both within particularly bureaucratic orgs and across orgs.
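
To give a flavour of the underlying idea, here is a minimal, library-free sketch of additive secret sharing, one of the basic primitives behind this kind of private multi-party computation. This is not Syft's actual API; all names and numbers below are purely illustrative.

    import random

    PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

    def share(value: int, n_parties: int) -> list[int]:
        """Split a value into n random shares that sum to it modulo PRIME."""
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    def reconstruct(shares: list[int]) -> int:
        return sum(shares) % PRIME

    # Two organisations each hold a private number; their sum can be computed
    # without either organisation revealing its input to the other.
    org_a, org_b = 42, 58
    shares_a, shares_b = share(org_a, 2), share(org_b, 2)
    # Each party locally adds the shares it holds; only the partial sums are combined.
    partial_sums = [(shares_a[i] + shares_b[i]) % PRIME for i in range(2)]
    assert reconstruct(partial_sums) == org_a + org_b  # 100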

Thanks for the post, I agree with it!

I just wrote a post with the differential knowledge interconnection thesis, where I argue that it is on net beneficial to develop AI capabilities such as:

  • Federated learning, privacy-preserving multi-party computation, and privacy-preserving machine learning (see the sketch after this list).
  • Federated inference and belief sharing.
  • Protocols and file formats for data, belief, or claim exchange and validation.
  • Semantic knowledge mining and hybrid reasoning on (federated) knowledge graphs and multimodal data.
  • Structured or semantic search.
  • Datastore federation for retrieval-based LMs.
  • Cross-language (such as English/French) retrieval, search, and semantic knowledge integration. This is especially important for low-online-presence languages.
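
To make the first item above a bit more concrete, here is a minimal, illustrative sketch of federated averaging (FedAvg), the basic pattern behind federated learning, on a toy linear-regression task. The data, model, and hyperparameters are placeholders of my own, not anything from the post; the point is only that raw data never leaves a party, while updated weights do.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(weights, X, y, lr=0.1, steps=10):
        # A few steps of local gradient descent on a linear model (MSE loss).
        w = weights.copy()
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w

    # Three parties hold disjoint local datasets that never leave their premises.
    parties = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
    global_w = np.zeros(3)

    for _ in range(5):  # communication rounds
        # Each party trains locally; only the updated weights are shared.
        local_ws = [local_update(global_w, X, y) for X, y in parties]
        # The coordinator averages the weights (equal weighting here).
        global_w = np.mean(local_ws, axis=0)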

I discuss in a section of the post whether knowledge interconnection exacerbates or abates the risk of industrial dehumanization on net. It's a challenging question, but I reach the tentative conclusion that AI capabilities that favor obtaining and leveraging "interconnected" rather than "isolated" knowledge are on net risk-reducing. This is because the "human economy" is more complex than the hypothetical "pure machine-industrial economy", and "knowledge interconnection" capabilities support that greater complexity.

Would you agree or disagree with this?

I think the model of a commercial R&D lab would often suit alignment work better than a "classical" startup company. Conjecture and AE Studio come to mind. Answer.AI, founded by Jeremy Howard (of Fast.ai and Kaggle) and Eric Ries (Lean Startup), elaborates on this business and organisational model here: https://www.answer.ai/posts/2023-12-12-launch.html.

But I should add, I agree that 1-3 pose challenging political and coordination problems. Nobody assumes it will be easy, including Acemoglu. It's just one more in the row of hard political challenges posed by AI, along with questions such as "aligned with whom?", how to consider and account for people's voice past dysfunctional governments and political elites in general, etc.

Separately, I at least spontaneously wonder: how would one even go about differentiating the 'bad automation' to be discouraged from legit automation without which no modern economy could competitively run anyway? For a random example, if Excel didn't yet exist (or, for its next update..), we'd have to say: sorry, cannot make such software, as any given spreadsheet risks removing thousands of hours of work?! Or at least: please, Excel, ask the human to manually confirm each cell's calculation?? So I don't know how we'd enforce non-automation in practice. Just 'it uses a large LLM' feels like a weirdly arbitrary condition - though, ok, I could see how, due to a lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. & 2. mean this is unrealistic or unsuccessful anyway.

Clearly, specific rule-based regulation is a dumb strategy. Acemoglu's suggestions: tax incentives to keep employment, and "labour voice" to let people decide, in the context of a specific company and job, how they want to work with AI. I like this self-governing strategy. Basically, the idea is that people will want to keep influencing things and will resist "job bullshittification" done to them, if they have the political power ("labour voice"). But they should also have an alternative choice of technology and work arrangement/method that doesn't turn their work into rubber-stamping bullshit but still alleviates the burden ("machine usefulness"). Because if they only have the choice between a rubber-stamping bullshit job and a burdensome job without AI, they may choose rubber-stamping.

If you'd really be able to coordinate globally to enable 1. or 2. - extremely unlikely in the current environment, and given the huge incentives for individual countries to remain weak in enforcement - then it seems you might as well try to directly impose the economic first-best solution w.r.t. robots vs. labor: high global tax rates and redistribution.

If anything, this problem seems more pernicious w.r.t. climate change mitigation and environmental damage: it's much more distributed. It's not only the US and China; Russia and India are also big emitters, there is big leverage in Brazil, Congo, and Indonesia with their forests, overfishing and ocean pollution are everywhere, etc.

With AI, it's basically a question of regulating US and UK companies: the EU is always eager to over-regulate relative to the US, and China is already closely and successfully regulating its AI for a variety of reasons (which Acemoglu points out). The big problem of the Chinese economy is weak internal demand, and automating jobs, thereby increasing inequality and decreasing local purchasing power, is the last thing China wants.
